[PEFT] overriding an adapter #6510

Closed

sayakpaul opened this issue Jan 10, 2024 · 4 comments
@sayakpaul (Member)

I was investigating a fix for #6442. All issues: https://github.com/huggingface/diffusers/issues?q=is%3Aissue+is%3Aopen+ValueError%3A+Attempting+to+unscale+FP16+gradients.+.

While resuming training from a checkpoint, we do: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py#L1060C1-L1071C10.
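
For context, the linked load hook does roughly the following (a simplified sketch, not the exact code at that link; input_dir stands for the checkpoint directory being resumed from, and unet_ is the unwrapped UNet that already had add_adapter() called on it during setup):

# Simplified sketch of the resume path in the linked load hook.
# unet_ already carries a LoRA adapter added via add_adapter() earlier in the script.
lora_state_dict, network_alphas = LoraLoaderMixin.lora_state_dict(input_dir)
LoraLoaderMixin.load_lora_into_unet(lora_state_dict, network_alphas=network_alphas, unet=unet_)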

This leads to a warning:

01/10/2024 02:12:51 - INFO - peft.tuners.tuners_utils - Already found a `peft_config` attribute in the model. This will lead to having multiple adapters in the model. Make sure to know what you are doing!

Now, when we try to load the final serialized LoRA ckpt, it leads to:

Loading adapter weights from state_dict led to unexpected keys not found in the model:  ['down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_A_1.default_0.weight', 'down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_B_1.default_0.weight'
...

So, I think having a way to override an existing peft_config would be nice. But if there's a better way to do this, please let me know.

Alternatives considered

  • Call disable_adapters() on the base model and then call add_adapter() with the LoRA configs we initialize at the beginning of the script (see the sketch after this list). This leads to an error complaining that the "default" adapter is already in use.
  • Specify default_0 as the adapter name when we're resuming training, and so on. But this is a very hacky way of working around the issue. We shouldn't do this IMO.
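
A rough sketch of that first alternative (assuming the standard add_adapter()/disable_adapters() methods on the UNet; unet_lora_config stands for the LoraConfig built at the top of the script):

# Attempted workaround: disable the existing adapter, then add a fresh one.
unet_.disable_adapters()
unet_.add_adapter(unet_lora_config)  # raises: the "default" adapter name is already in use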

Cc: @younesbelkada @BenjaminBossan

@sayakpaul (Member Author) commented Jan 10, 2024

Something strange.

Even when I first do:

# manually clear the PEFT bookkeeping on the unwrapped UNet
del unet_.peft_config
unet_._hf_peft_config_loaded = False

And then do

LoraLoaderMixin.load_lora_into_unet(lora_state_dict, network_alphas=network_alphas, unet=unet_)

It doesn't set _hf_peft_config_loaded to True. Is that expected?

@sayakpaul (Member Author)

Okay, it seems like one can repurpose delete_adapters() from diffusers.utils.peft_utils for this purpose. Sorry for the false alarm.
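
For reference, a minimal sketch of that workaround via the model-level delete_adapters() helper on the UNet (assumptions: the stale adapter attached while resuming is named "default", and lora_state_dict/network_alphas come from LoraLoaderMixin.lora_state_dict(); check unet_.peft_config for the actual adapter names):

# Drop the adapter that was attached while resuming, then load the final LoRA cleanly.
unet_.delete_adapters("default")  # assumes the stale adapter is named "default"
LoraLoaderMixin.load_lora_into_unet(lora_state_dict, network_alphas=network_alphas, unet=unet_)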

@younesbelkada (Contributor)

AH ok, thanks for investigating @sayakpaul !
