
Fixed incorrect example of quantize in README.md #1248


Closed

wants to merge 1 commit into from

Conversation

@Aadniz Aadniz commented Apr 30, 2023

README.md showed an example of quantize that no longer works:

$ ./quantize ./models/30B/ggml-model-f16.bin ./models/30B/ggml-model-q4.bin q4_1
llama_model_quantize: failed to quantize: invalid quantization type 0

main: failed to quantize model from './models/30B/ggml-model-f16.bin'

Changed the example in README.md to show the options 2 (q4_0) and 3 (q4_1).
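For reference, a minimal sketch of what the updated example would describe, using the numeric type IDs above (the model paths are the ones from the error output; on a current build the string names q4_0/q4_1 also work, as noted in the comment below):

```sh
# Quantize an f16 GGML model with a numeric quantization type:
#   2 = q4_0, 3 = q4_1
./quantize ./models/30B/ggml-model-f16.bin ./models/30B/ggml-model-q4_0.bin 2
./quantize ./models/30B/ggml-model-f16.bin ./models/30B/ggml-model-q4_1.bin 3
```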

README.md showed an example of quantize that doesn't work anymore
@grencez grencez (Contributor) commented Apr 30, 2023

Do you need to sync and rebuild or something? I am able to run it as documented: ./quantize "path/to/ggml-model-f16.bin" "path/to/ggml-model-q4_0.bin" q4_0

@Aadniz Aadniz (Author) commented Apr 30, 2023

My bad, I did not run make after pulling.
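For anyone who hits the same "invalid quantization type" error, a minimal sketch of the sync-and-rebuild steps that resolved it here, assuming a make-based checkout and the same model paths as above:

```sh
# Pull the latest sources and rebuild the quantize tool
git pull
make

# Re-run quantize; on a current build the string type name works
./quantize ./models/30B/ggml-model-f16.bin ./models/30B/ggml-model-q4_1.bin q4_1
```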
