GPT4All: invalid model file (bad magic) #662
The output below shows that your file has the wrong format. My file works with this repo, so if your file is valid it will work; see #103 (comment).
Hi @CoderRC, thanks for your reply. Can you please share your gpt4all SHA-256 sums?
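Comparing SHA-256 sums is how posters in this thread rule out a corrupted download before blaming the converter. A minimal sketch of computing one in Python (the streaming read is just so multi-gigabyte model files don't need to fit in memory; the file path is whatever you downloaded):

```python
import hashlib

def sha256sum(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

If the digest differs from the one a working copy reports, re-download before trying any conversion step.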
You need to run convert-gpt4all-to-ggml.py first and then migrate-ggml-2023-03-30-pr613.py.
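The two-step order above can be sketched as follows. The script names come from this thread; the model/tokenizer paths and the argument order are assumptions from memory of the README of that era, so check each script's own usage text before relying on them:

```python
import subprocess
import sys
from pathlib import Path

# Hypothetical paths -- adjust to wherever your files actually live.
MODEL = Path("models/gpt4all-7B/gpt4all-lora-quantized.bin")
TOKENIZER = Path("models/tokenizer.model")
MIGRATED = Path("models/gpt4all-7B/gpt4all-lora-quantized.migrated.bin")

steps = [
    # Step 1: old GPT4All checkpoint -> ggml
    [sys.executable, "convert-gpt4all-to-ggml.py", str(MODEL), str(TOKENIZER)],
    # Step 2: pre-PR-613 ggml -> the post-breaking-change format
    [sys.executable, "migrate-ggml-2023-03-30-pr613.py", str(MODEL), str(MIGRATED)],
]

for cmd in steps:
    if Path(cmd[1]).exists():
        subprocess.run(cmd, check=True)
    else:
        # The scripts were later removed from the repo (see below in this
        # thread), so guard against a checkout that no longer has them.
        print(f"skipping: {cmd[1]} not found in this checkout")
```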
Yes, I'm getting the same issue as @doomguy. I built llama.cpp using CMake with no POSIX additions; could this be the source of the error? I also seem to get the same error (bad file magic) when I attempt to quantize the 30B model. I've tried a few different copies and re-downloading, but no luck.
Hmm, still having issues: Failed loading model
Hi @leonardohn, thanks, that did the trick. That Python script just flew under my radar.
@doomguy I had the same issue yesterday, after they introduced the breaking change. It is still not in the README, but the changes seem to be there for good reason. @nlpander Those instructions are for the GPT4All-7B model. I guess the 30B model is on a different version of ggml, so you could try the other conversion scripts.
The files convert-gpt4all-to-ggml.py and migrate-ggml-2023-03-30-pr613.py don't exist anymore.
@Freshbytes you can fetch them from a previous commit. I think both scripts are self-contained, so later changes to the main project shouldn't affect them.
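One way to recover a removed script from history is `git show <commit>:<path>`, here wrapped in a small helper. The `"<commit>"` hash is a placeholder, not a real value from this thread; a real one can be found with `git log --oneline -- convert-gpt4all-to-ggml.py` run inside a llama.cpp checkout:

```python
import subprocess

def fetch_at_commit(commit: str, path: str, repo: str = ".") -> str:
    """Return the contents of `path` as it existed at `commit` in `repo`."""
    out = subprocess.run(
        ["git", "show", f"{commit}:{path}"],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    return out.stdout

# Usage sketch (placeholder hash -- substitute a real one):
# src = fetch_at_commit("<commit>", "convert-gpt4all-to-ggml.py")
# with open("convert-gpt4all-to-ggml.py", "w") as f:
#     f.write(src)
```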
Did you find a fix for this? I am trying to use GPT4All models (snoozy).
Hi there, I followed the instructions to get gpt4all running with llama.cpp, but was somehow unable to produce a valid model using the provided Python conversion scripts:
Are just the magic bytes in the Python script wrong, or is it a completely different format?
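A quick way to see which magic a file actually carries is to read its first four bytes as a little-endian uint32. The magic values below are the ggml-family magics as they appeared in llama.cpp sources around the time of this thread (spring 2023); treat them as assumptions and verify against your own checkout:

```python
import struct

# Assumed ggml-family magics (little-endian uint32); check llama.cpp sources.
MAGICS = {
    0x67676D6C: "ggml (original, unversioned)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (format introduced by the PR #613 breaking change)",
}

def read_magic(path: str) -> str:
    """Read the leading 4-byte magic of a model file and name its format."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return MAGICS.get(
        magic, f"unknown magic 0x{magic:08x} -- loaders will report 'bad magic'"
    )
```

If the reported magic is the old unversioned one, that matches the symptom here: the file simply predates the breaking change and needs the migration script.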
Related issues: #647