KeyError: 'model.embed_tokens.weight' when converting .safetensors to ggml #1000

Closed
Jake36921 opened this issue Apr 15, 2023 · 2 comments
@Jake36921
(base) PS E:\Games\llama.cpp> python3 convert.py OPT-13B-Erebus-4bit-128g.safetensors --outtype q4_1 --outfile 4ggml.bin
Loading model file OPT-13B-Erebus-4bit-128g.safetensors
Loading vocab file tokenizer.model
Traceback (most recent call last):
File "E:\Games\llama.cpp\convert.py", line 1147, in <module>
main()
File "E:\Games\llama.cpp\convert.py", line 1137, in main
model = do_necessary_conversions(model)
File "E:\Games\llama.cpp\convert.py", line 983, in do_necessary_conversions
model = convert_transformers_to_orig(model)
File "E:\Games\llama.cpp\convert.py", line 588, in convert_transformers_to_orig
out["tok_embeddings.weight"] = model["model.embed_tokens.weight"]
KeyError: 'model.embed_tokens.weight'
(base) PS E:\Games\llama.cpp>

Model is from here: https://huggingface.co/notstoic/OPT-13B-Erebus-4bit-128g
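The KeyError indicates that the checkpoint contains no tensor named `model.embed_tokens.weight`, which is the name convert.py expects from a LLaMA-architecture checkpoint; Hugging Face OPT checkpoints use a different naming scheme (e.g. `model.decoder.embed_tokens.weight`). The names actually stored in a .safetensors file can be listed with no extra dependencies, since the format is an 8-byte little-endian header length followed by a JSON header. A minimal sketch (the file path and the OPT tensor name in the comments are illustrative assumptions):

```python
import json
import struct

def safetensors_tensor_names(path):
    """List the tensor names stored in a .safetensors file.

    Format: 8-byte little-endian unsigned header length, then a JSON
    header mapping tensor names to dtype/shape/offset metadata.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header.
    return [name for name in header if name != "__metadata__"]

# Hypothetical usage: a LLaMA checkpoint would list
# "model.embed_tokens.weight", whereas an OPT checkpoint lists names
# like "model.decoder.embed_tokens.weight", so convert.py's lookup fails.
# print(safetensors_tensor_names("OPT-13B-Erebus-4bit-128g.safetensors"))
```

Inspecting the names this way quickly confirms whether a checkpoint matches the architecture a converter expects before running the full conversion.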

@jon-chuang
Contributor

I don't think OPT 13B is currently supported.

@github-actions github-actions bot added the stale label Mar 25, 2024

github-actions bot commented Apr 9, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.
