[PEFT] overriding an adapter #6510
Comments
Seems like we need

Something strange. Even when I first do `del unet_.peft_config` and `unet_._hf_peft_config_loaded = False`, and then call `LoraLoaderMixin.load_lora_into_unet(lora_state_dict, network_alphas=network_alphas, unet=unet_)`, it doesn't set
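For context, here is a standalone sketch of the reset-and-reload attempt described in the comment above. The base-model ID and `lora_ckpt_dir` are placeholders, and in the actual training script `unet_` is the unwrapped training UNet rather than a freshly loaded one; treat this as an approximation, not the script's code.

```python
from diffusers import UNet2DConditionModel
from diffusers.loaders import LoraLoaderMixin

# Placeholders for illustration only.
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
lora_ckpt_dir = "path/to/serialized-lora-checkpoint"

unet_ = UNet2DConditionModel.from_pretrained(base_model_id, subfolder="unet")

# The serialized LoRA weights plus their network alphas.
lora_state_dict, network_alphas = LoraLoaderMixin.lora_state_dict(lora_ckpt_dir)

# Attempted reset of the PEFT bookkeeping before re-loading.
if hasattr(unet_, "peft_config"):
    del unet_.peft_config
unet_._hf_peft_config_loaded = False

# Re-load the LoRA weights into the (supposedly clean) UNet.
LoraLoaderMixin.load_lora_into_unet(
    lora_state_dict, network_alphas=network_alphas, unet=unet_
)
```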
Okay, seems like one can repurpose

Ah ok, thanks for investigating @sayakpaul!
I was investigating a fix for #6442. All related issues: https://github.com/huggingface/diffusers/issues?q=is%3Aissue+is%3Aopen+ValueError%3A+Attempting+to+unscale+FP16+gradients.+.

While resuming training from a checkpoint, we do: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py#L1060C1-L1071C10.
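One quick way to see the state this creates, as a hedged sketch (assuming `unet_` is the training UNet that already got a LoRA adapter via `add_adapter()` at the beginning of the script, before the resume hook re-loads the weights):

```python
# Inspect the UNet's PEFT bookkeeping after resuming from a checkpoint.
print(getattr(unet_, "_hf_peft_config_loaded", False))  # True once any adapter has been injected
print(list(getattr(unet_, "peft_config", {}).keys()))   # adapter names, e.g. ["default", "default_0"]
```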
That re-load leads to a warning:
Now, when we try to load the final serialized LoRA checkpoint, it leads to:
So, I think having a way to override an existing `peft_config` would be nice. But if there's a better way to do this, please let me know.

**Alternatives considered**

- Call `disable_adapters()` on the base model and then call `add_adapter()` with the LoRA configs we initialize at the beginning of the script. This leads to an error complaining that the "default" adapter is already in use (see the sketch below).
- Use `default_0` as the adapter name if we're resuming from training, and so on. But this is a very hacky way of getting around the issue; we shouldn't do it, IMO.

Cc: @younesbelkada @BenjaminBossan
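To make the first alternative concrete, here is a rough sketch, reusing `unet_` from the earlier snippet. The `LoraConfig` arguments mirror the usual DreamBooth LoRA setup and are assumptions here; as described above, this path errors out because the "default" adapter already exists on the model.

```python
from peft import LoraConfig

# LoRA config of the kind initialized at the beginning of the training script
# (rank and target modules are illustrative assumptions).
unet_lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)

unet_.disable_adapters()             # turn off the currently active adapter(s)
unet_.add_adapter(unet_lora_config)  # errors: the "default" adapter is already in use
```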