Converting the 7B model to ggml FP16 format fails on RPi 4B #138
Comments
Ah. Got it. I think I need a bigger SD card.
I have the same problem: RAM usage increases until it is completely filled, and that's as far as it goes.
I'm pretty sure you need memory for both the original fp16 model and the converted model. An RPi 4 (even with 8 GB of RAM) isn't going to have enough.
Would it be okay to share a torrent of converted models? It would be a smaller download.
There are indeed torrents in places like 4chan. I personally don't think model weights can even be copyrighted, but the repo maintainer probably doesn't want to risk it, otherwise there would likely already be a torrent link provided.
Converting the 7B model needs 16 GB of RAM.
If you're having memory trouble, you should make a swapfile. I only have 8 GB of memory, and I'm going back and forth between tasks, so I just wrote a script that makes it easy to create and destroy swapfiles: https://github.com/apaz-cli/Scripts/blob/master/swapcreate. Or you can find instructions online.
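For reference, creating and later removing a swapfile by hand looks roughly like this on a Debian-style system such as Raspberry Pi OS (the 16G size and /swapfile path here are only illustrative):

sudo fallocate -l 16G /swapfile   # reserve space for the swapfile
sudo chmod 600 /swapfile          # swapon requires the file not be world-readable
sudo mkswap /swapfile             # format the file as swap space
sudo swapon /swapfile             # enable it immediately
# and to undo it later:
sudo swapoff /swapfile
sudo rm /swapfile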
You can convert it on any computer and then just copy it over.
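A minimal sketch of that workflow, assuming the repo is checked out on a desktop with enough RAM and the Pi is reachable over SSH (the hostname and destination path are only illustrative):

# on the desktop
python3 convert-pth-to-ggml.py models/7B/ 1
# copy the converted model to the Pi
scp models/7B/ggml-model-f16.bin pi@raspberrypi.local:~/llama.cpp/models/7B/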
This looks to be a RAM issue. Closing.
Everything's OK until this step
python3 convert-pth-to-ggml.py models/7B/ 1
{'dim': 4096, 'multiple_of': 256, 'n_heads': 32, 'n_layers': 32, 'norm_eps': 1e-06, 'vocab_size': 32000}
n_parts = 1
Processing part 0
Killed
models/7B/ggml-model-f16.bin isn't created
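A bare "Killed" like this usually means the kernel's OOM killer terminated the process; one way to confirm that on the Pi is to check the kernel log right after the failure, for example:

dmesg | grep -i -E "killed process|out of memory"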