convert-pth-to-ggml.py how to handle torch.view_as_complex #225


Closed
haolongzhangm opened this issue Mar 17, 2023 · 3 comments
Labels: need more info (The OP should provide more details about the issue) · question (Further information is requested)

Comments

@haolongzhangm

The llama code includes torch.view_as_complex / torch.view_as_real: https://github.com/facebookresearch/llama/blob/main/llama/model.py#L68

How does convert-pth-to-ggml.py handle this part of the weights?

@gjmulder added the question (Further information is requested) and need more info (The OP should provide more details about the issue) labels Mar 17, 2023
@gjmulder
Collaborator

Please improve your question with more text and examples so it is easier to understand what you are asking.

@nullhook

If you are asking about applying rotary embeddings: that is done in llama.cpp at inference time, not during conversion.
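To illustrate why nothing complex-valued needs to survive conversion: `torch.view_as_complex` just pairs consecutive real values `(x0, x1)` into `x0 + i*x1` so the rotary embedding can be written as a multiplication by `e^{i*theta}`, and that multiplication is identical to a plain 2x2 real rotation. The sketch below (pure Python, not the actual llama.cpp code) shows the equivalence under that interpretation:

```python
import cmath
import math

def rotate_complex(x0: float, x1: float, theta: float) -> tuple[float, float]:
    # What the PyTorch reference does conceptually: view_as_complex pairs
    # (x0, x1) into x0 + i*x1, then rotary embedding multiplies by e^{i*theta}.
    z = complex(x0, x1) * cmath.exp(1j * theta)
    return z.real, z.imag

def rotate_real(x0: float, x1: float, theta: float) -> tuple[float, float]:
    # The equivalent real-valued rotation, which an inference engine can
    # apply directly to the converted (purely real) weights/activations.
    c, s = math.cos(theta), math.sin(theta)
    return x0 * c - x1 * s, x0 * s + x1 * c

a = rotate_complex(1.0, 2.0, 0.5)
b = rotate_real(1.0, 2.0, 0.5)
assert all(abs(p - q) < 1e-9 for p, q in zip(a, b))
```

Since the two formulations agree element-wise, the converter only has to copy the real weight tensors; the rotation angles are recomputed from the position indices at inference time.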

@haolongzhangm
Author

@nullhook Thanks for the info.

Deadsg pushed a commit to Deadsg/llama.cpp that referenced this issue Dec 19, 2023
Fixed CUBLAS DLL load issues on Windows