How to deploy LoRA adapter on NIM based on checkpoint created during llm.finetune with NeMo Run? #12493

Closed · Answered by mikolajMacioszczyk
mikolajMacioszczyk asked this question in Q&A

I found a solution: the checkpoint can be converted to Hugging Face format, which NIM also supports, using llm.peft.export_lora:

import nemo_run as run
from nemo.collections import llm

# Placeholder paths: point these at the PEFT checkpoint produced by
# llm.finetune and at the desired output location for the HF-format adapter.
peft_ckpt_path = "/path/to/peft_checkpoint"
adapter_hf_output_path = "/path/to/hf_adapter"

def configure_adapter_checkpoint_hf_conversion():
    return run.Partial(
        llm.peft.export_lora,
        lora_checkpoint_path=peft_ckpt_path,
        output_path=adapter_hf_output_path,
    )

convert_ckpt_hf = configure_adapter_checkpoint_hf_conversion()

# Define your executor; LocalExecutor runs the conversion on the local machine
local_executor = run.LocalExecutor()

run.run(convert_ckpt_hf, executor=local_executor)
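
Once the adapter is exported, a NIM container can load it and serve it through NIM's OpenAI-compatible API. Below is a minimal client sketch, assuming a NIM instance is already running on localhost:8000 with LoRA support enabled and the exported adapter registered under the hypothetical name my-lora-adapter; the adapter name, host, and port all depend on how the container was launched, so treat them as placeholders.

import requests

# Hypothetical adapter name: with LoRA serving enabled, NIM typically exposes
# each loaded adapter as its own model name alongside the base model.
ADAPTER_NAME = "my-lora-adapter"

# NIM serves an OpenAI-compatible API; port 8000 is a common default.
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": ADAPTER_NAME,  # select the LoRA adapter by model name
        "messages": [
            {"role": "user", "content": "Hello from my fine-tuned adapter!"}
        ],
        "max_tokens": 64,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

Because the adapter is selected per request via the model field, the base model and any number of adapters can be served from the same endpoint.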
