llama.cpp runs WizardLM 7B #1402
Just for the record: the new WizardLM model seems to run fine with llama.cpp; just download the file wizardLM-7B.ggml.q4_0.bin from Hugging Face.
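For anyone trying this, running the downloaded file is the usual llama.cpp invocation. A minimal sketch, assuming the model file has been placed in ./models and that you have built the main binary; the prompt and sampling flags are illustrative:

```sh
# Fetch the quantized model from its Hugging Face model page first
# (URL omitted here; download wizardLM-7B.ggml.q4_0.bin manually into ./models).

# Run a completion with llama.cpp's main binary:
#   -m  path to the ggml model file
#   -p  prompt text
#   -n  number of tokens to generate
./main -m ./models/wizardLM-7B.ggml.q4_0.bin \
       -p "Tell me about llamas." \
       -n 256
```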
Feel free to add a reference in the README.
Can confirm.
There's also Wizard-Vicuna, which is the best model ever.
Please send a link.
@rozek what exactly did you do to quantize? Which files do I need to download to do it myself?
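For context on the quantization question: producing a q4_0 file from the original weights follows llama.cpp's usual convert-then-quantize flow. A minimal sketch only; the converter script's name and the model directory layout depend on the llama.cpp revision in use:

```sh
# 1. Convert the original PyTorch/HF checkpoint to an f16 ggml file.
#    (Script name varies across llama.cpp revisions: convert.py or
#    convert-pth-to-ggml.py; the ./models/wizardlm-7b path is illustrative.)
python3 convert.py ./models/wizardlm-7b/

# 2. Quantize the f16 file down to 4-bit q4_0 with the quantize tool.
./quantize ./models/wizardlm-7b/ggml-model-f16.bin \
           ./models/wizardlm-7b/ggml-model-q4_0.bin q4_0
```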
This issue was closed because it has been inactive for 14 days since being marked as stale. |