[bug]: FLUX model loading ERROR #7859

Closed
1 task done
Neosettler opened this issue Mar 30, 2025 · 1 comment · Fixed by #7862
Labels
bug Something isn't working

Comments

@Neosettler

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

3090

GPU VRAM

24

Version number

5.9

Browser

Brave

Python dependencies

No response

What happened

Greetings. Models that loaded without issue in 5.5 are now failing in 5.9:

midjourneyReplica_flux1Dev.safetensors could not be migrated: 'img_in.weight'

https://civitai.com/models/885098?modelVersionId=990775

What you expected to happen

load model

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

@Neosettler Neosettler added the bug Something isn't working label Mar 30, 2025
@Neosettler
Author

duplicate: #7856

psychedelicious added a commit that referenced this issue Mar 30, 2025
Before FLUX Fill was merged, we didn't do any checks for the model variant. We always returned "normal".

To determine if a model is a FLUX Fill model, we need to check the state dict for a specific key. Initially, this logic was too strict and rejected quantized FLUX models. This issue was resolved, but it turns out there is another failure mode - some fine-tunes use a different key.

This change further reduces the strictness, handling the alternate key and also falling back to "normal" if we don't see either key. This effectively restores the previous probing behaviour for all FLUX models.

Closes #7856
Closes #7859
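The fix described above can be sketched roughly as follows. This is a hypothetical reconstruction, not InvokeAI's actual code: the helper name `probe_flux_variant`, the alternate key prefix, and the channel-count check are assumptions based on the commit message (which only says a state-dict key is checked, an alternate key is also accepted, and unknown checkpoints fall back to "normal").

```python
# Hedged sketch of a relaxed FLUX variant probe: check a state-dict key
# (under either of two prefixes) and fall back to "normal" rather than
# rejecting checkpoints where neither key is readable.
from typing import Any, Dict

# Candidate keys whose tensor shape may reveal a FLUX Fill model.
# Some fine-tunes store weights under a prefixed key (assumed names).
CANDIDATE_KEYS = (
    "img_in.weight",
    "model.diffusion_model.img_in.weight",
)

# Assumed input width of the Fill variant's img_in layer; the base
# model uses a smaller width, so this distinguishes the two.
FILL_IN_CHANNELS = 384


def probe_flux_variant(state_dict: Dict[str, Any]) -> str:
    """Return "inpaint" for FLUX Fill checkpoints, else "normal"."""
    for key in CANDIDATE_KEYS:
        weight = state_dict.get(key)
        # Linear weights are (out_features, in_features); guard against
        # entries without a readable shape (e.g. quantized wrappers).
        shape = getattr(weight, "shape", None)
        if shape and shape[-1] == FILL_IN_CHANNELS:
            return "inpaint"
    # Neither key found, or shapes unreadable: previous behaviour.
    return "normal"
```

The key point of the fix is the final fallback: a checkpoint missing both keys (like the fine-tune in this report) is treated as a normal FLUX model instead of raising a `'img_in.weight'` error during migration.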