
llama.cpp: update submodule for "code" model crash workaround #2382

Merged 1 commit into main on May 29, 2024

Conversation

cebtenzzre (Member)

This PR pulls in nomic-ai/llama.cpp@f67f465, which is a workaround for ggerganov/llama.cpp#7592 so that some models with "code" in their name won't crash at load time.

Without this change, an affected model crashes GPT4All when you try to load it (not 100% reliably, since the crash is caused by undefined behavior; with -DCMAKE_BUILD_TYPE=Debug on Linux and libstdc++, it crashes every time). With this change, the model loads successfully.

Fixes #2379

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
@cebtenzzre cebtenzzre requested a review from manyoso May 28, 2024 17:41
@cebtenzzre cebtenzzre merged commit f047f38 into main May 29, 2024
6 of 19 checks passed
@cebtenzzre cebtenzzre deleted the work-around-codemodel-crash branch May 29, 2024 14:50
Successfully merging this pull request may close these issues.

Certain models with "code" in their name crash GPT4All 2.8.0