[bug]: FLUX model loading ERROR #7859
Labels: bug (Something isn't working)
Comments
Duplicate of #7856
psychedelicious added commits that referenced this issue on Mar 30 and Mar 31, 2025, all with the same message:
Before FLUX Fill was merged, we didn't do any checks for the model variant. We always returned "normal". To determine if a model is a FLUX Fill model, we need to check the state dict for a specific key. Initially, this logic was too strict and rejected quantized FLUX models. This issue was resolved, but it turns out there is another failure mode - some fine-tunes use a different key. This change further reduces the strictness, handling the alternate key and also falling back to "normal" if we don't see either key. This effectively restores the previous probing behaviour for all FLUX models.

Closes #7856
Closes #7859
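The relaxed probing the commit describes can be sketched roughly as follows. This is an illustrative sketch, not InvokeAI's actual code: the alternate key name (`model.diffusion_model.img_in.weight`) and the input-channel threshold used to distinguish FLUX Fill are assumptions; only `img_in.weight` is confirmed by the error in this issue.

```python
# Minimal sketch of the relaxed FLUX variant probing described above.
# Assumptions: the alternate (prefixed) key name and the >64 input-channel
# threshold are illustrative; only "img_in.weight" appears in this issue.
from typing import Any, Dict

# Plain checkpoints use a bare key; some fine-tunes nest weights under a
# prefix (assumed name for illustration).
IMG_IN_KEYS = ("img_in.weight", "model.diffusion_model.img_in.weight")


def get_flux_variant(state_dict: Dict[str, Any]) -> str:
    """Return "inpaint" for FLUX Fill checkpoints, else "normal".

    FLUX Fill concatenates mask/image conditioning onto the latent input,
    so its img_in projection has more input channels than a base model.
    """
    for key in IMG_IN_KEYS:
        weight = state_dict.get(key)
        if weight is None:
            continue
        # Quantized checkpoints may not expose a plain 2D weight tensor;
        # skip the shape check rather than rejecting the model outright.
        if hasattr(weight, "shape") and len(weight.shape) == 2 and weight.shape[1] > 64:
            return "inpaint"
        return "normal"
    # Neither key present (e.g. an unusual fine-tune): fall back to
    # "normal" instead of raising, restoring the pre-FLUX-Fill behaviour.
    return "normal"
```

The key point of the fix is the final fallback: a checkpoint that matches neither key is treated as a regular FLUX model instead of failing to load, which is what the `'img_in.weight'` error below was hitting.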
Is there an existing issue for this problem?
Operating system
Windows
GPU vendor
Nvidia (CUDA)
GPU model
3090
GPU VRAM
24 GB
Version number
5.9
Browser
Brave
Python dependencies
No response
What happened
Greetings. Models that loaded without issue in 5.5 now fail to load in 5.9:
midjourneyReplica_flux1Dev.safetensors could not be migrated: 'img_in.weight'
https://civitai.com/models/885098?modelVersionId=990775
What you expected to happen
The model should load, as it did in 5.5.
How to reproduce the problem
No response
Additional context
No response
Discord username
No response