
use_safetensors parameter not passed to submodules when loading pipeline #9576

Closed
elismasilva opened this issue Oct 3, 2024 · 4 comments
Labels
bug Something isn't working

Comments

@elismasilva
Contributor

elismasilva commented Oct 3, 2024

Describe the bug

When a model is loaded with from_pretrained and its weights are stored as .bin rather than .safetensors, I get warnings like:

"An error occurred while trying to fetch models/stablediffusionapi/yamermix-v8-vae: Error no file named diffusion_pytorch_model.safetensors found in directory models/stablediffusionapi/yamermix-v8-vae.
Defaulting to unsafe serialization. Pass allow_pickle=False to raise an error instead"

But passing use_safetensors=False is expected to suppress this warning, and it does not, because the parameter is not forwarded to the submodules.

So I noticed that after this line the use_safetensors variable needs to be passed along:

cached_folder=cached_folder,

And after this line the parameter needs to be passed on again:

loading_kwargs["variant"] = model_variants.pop(name, None)

Reproduction

from diffusers import StableDiffusionXLPipeline
import torch
pipe = StableDiffusionXLPipeline.from_pretrained('stablediffusionapi/yamermix-v8-vae', torch_dtype=torch.float16)
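
For reference, this is the call that should stop emitting the warning once the parameter is forwarded to the submodules (per the description above; the same checkpoint, which only ships .bin weights):

from diffusers import StableDiffusionXLPipeline
import torch

# With the parameter forwarded, use_safetensors=False reaches every submodule, so the
# .bin weights are loaded directly without the "Defaulting to unsafe serialization" warning.
pipe = StableDiffusionXLPipeline.from_pretrained(
    'stablediffusionapi/yamermix-v8-vae',
    torch_dtype=torch.float16,
    use_safetensors=False,
)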

Logs

No response

System Info

  • 🤗 Diffusers version: 0.31.0.dev0
  • Platform: Windows-10-10.0.19045-SP0
  • Running on Google Colab?: No
  • Python version: 3.10.11
  • PyTorch version (GPU?): 2.4.0+cu121 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.24.5
  • Transformers version: 4.40.1
  • Accelerate version: 0.29.3
  • PEFT version: 0.12.0
  • Bitsandbytes version: 0.43.1
  • Safetensors version: 0.4.4
  • xFormers version: 0.0.27.post2
  • Accelerator: NVIDIA GeForce RTX 3060 Ti, 8192 MiB
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

No response

@elismasilva elismasilva added the bug Something isn't working label Oct 3, 2024
@a-r-r-o-w
Member

Thank you for the detailed repro steps and possible fix! The solution seems correct to me; we don't pass use_safetensors to ModelMixin, which raises this warning. Would you like to open a PR? cc @yiyixuxu

@elismasilva
Contributor Author

Thank you for the detailed repro steps and possible fix! The solution seems correct to me; we don't pass use_safetensors to ModelMixin, which raises this warning. Would you like to open a PR? cc @yiyixuxu

I will try to do my first PR.

@elismasilva
Contributor Author

Is it correct to issue this warning, "Defaulting to unsafe serialization. Pass allow_pickle=False to raise an error instead.", given that the allow_pickle variable can only be manipulated directly in the code and not through the pipeline?

The scenarios I tested are the following (summarized in the sketch after this list):

  • If I do not pass use_safetensors to the pipeline, it sets use_safetensors to True and allow_pickle to True; a warning is then issued if no ".safetensors" files are found in the directory, but the .bin files continue to load normally.

  • If I pass use_safetensors=True to the pipeline, it does not issue the warning; it simply raises an error saying that no ".safetensors" files were found in the directory.

  • If I pass use_safetensors=False to the pipeline (the scenario I'm fixing), it searches directly for the .bin files in the directory without raising errors, unless the model only ships ".safetensors" weights, in which case it raises an error saying that no .bin files were found in the directory.

So I believe the warning message for the first scenario, where nothing is passed to the pipeline, should be something like: "The model weights are being loaded with insecure serialization." Right?
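
A minimal sketch of the three calls described above, with the reported behavior noted as comments (the repo id is taken from the repro; behavior is as reported, not verified here):

from diffusers import StableDiffusionXLPipeline
import torch

repo = 'stablediffusionapi/yamermix-v8-vae'  # .bin-only checkpoint from the repro

# 1) Nothing passed: defaults to use_safetensors=True with allow_pickle=True,
#    warns about the missing .safetensors files, then loads the .bin weights anyway.
pipe = StableDiffusionXLPipeline.from_pretrained(repo, torch_dtype=torch.float16)

# 2) use_safetensors=True: no warning, just an error because no .safetensors
#    files exist in the directory.
# pipe = StableDiffusionXLPipeline.from_pretrained(repo, torch_dtype=torch.float16, use_safetensors=True)

# 3) use_safetensors=False (the scenario being fixed): loads the .bin files directly,
#    erroring only if the checkpoint ships .safetensors weights exclusively.
# pipe = StableDiffusionXLPipeline.from_pretrained(repo, torch_dtype=torch.float16, use_safetensors=False)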

elismasilva added a commit to DEVAIEXP/diffusers that referenced this issue Oct 4, 2024

… submodels (huggingface#9576)
elismasilva added a commit to DEVAIEXP/diffusers that referenced this issue Oct 4, 2024
@elismasilva
Contributor Author

@a-r-r-o-w I added @yiyixuxu and @asomoza as reviewers, but I don't know if it should be you.

elismasilva added a commit to DEVAIEXP/diffusers that referenced this issue Oct 7, 2024
yiyixuxu pushed a commit that referenced this issue Oct 7, 2024
… submodels (#9576) (#9587)

* Fix for use_safetensors parameters, allow use of parameter on loading submodels (#9576)
leisuzz pushed a commit to leisuzz/diffusers that referenced this issue Oct 11, 2024
… submodels (huggingface#9576) (huggingface#9587)

* Fix for use_safetensors parameters, allow use of parameter on loading submodels (huggingface#9576)
sayakpaul pushed a commit that referenced this issue Dec 23, 2024
… submodels (#9576) (#9587)

* Fix for use_safetensors parameters, allow use of parameter on loading submodels (#9576)