convert-pth-to-ggml.py error with "Got unsupported ScalarType BFloat16"
Trying to convert the weights of the "chavinlo/alpaca-native" Alpaca model (https://huggingface.co/chavinlo/alpaca-native) to ggml, but got this error:
Processing part 0
Processing variable: model.embed_tokens.weight with shape: torch.Size([32001, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.q_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.k_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.v_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.o_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.rotary_emb.inv_freq with shape: torch.Size([64]) and type: torch.bfloat16
Traceback (most recent call last):
File "/Users/domeie/projects/llama.cpp/convert-pth-to-ggml.py", line 157, in
main()
File "/Users/domeie/projects/llama.cpp/convert-pth-to-ggml.py", line 151, in main
process_and_write_variables(fout, model, ftype)
File "/Users/domeie/projects/llama.cpp/convert-pth-to-ggml.py", line 109, in process_and_write_variables
data = datao.numpy().squeeze()
TypeError: Got unsupported ScalarType BFloat16
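
The failure comes from the last tensor in the log: model.layers.0.self_attn.rotary_emb.inv_freq is stored as torch.bfloat16, and NumPy has no bfloat16 dtype, so calling .numpy() on it raises this TypeError. A minimal sketch of the usual workaround (not the exact convert-pth-to-ggml.py code; the tensor below is just a stand-in for the offending one) is to cast the tensor to float32 before handing it to NumPy:

import torch

# Stand-in for the offending checkpoint tensor (rotary_emb.inv_freq is bf16 in this model).
t = torch.zeros(64, dtype=torch.bfloat16)

# t.numpy() would raise: TypeError: Got unsupported ScalarType BFloat16
data = t.to(torch.float32).numpy().squeeze()  # cast first, then convert to a NumPy array
print(data.dtype)  # float32

Applying the same cast inside process_and_write_variables for bf16 tensors before the .numpy() call should let the conversion proceed.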