
gguf / mlx format? #34

Open
alexander-potemkin opened this issue Jan 13, 2024 · 0 comments

Hello and thanks for open-sourcing the model!

As there don't seem to be any ready-to-use gguf or mlx formats (for llama.cpp and macOS respectively), is there any chance you could give a hint on how to convert YaLM to them?

It would be a real help in enabling the model to run on non-NVIDIA hardware, such as modern PCs and mobile devices.

Thanks in advance!
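For reference, the usual llama.cpp conversion workflow looks roughly like the sketch below. Note this is the generic path for architectures llama.cpp already supports; YaLM-100B's architecture may not be handled by the converter, in which case a custom conversion script would be needed. Paths and output names here are placeholders.

```shell
# Sketch of the generic llama.cpp GGUF conversion workflow (as of early 2024).
# Assumes the model architecture is supported by the converter — YaLM may not be.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert a Hugging Face checkpoint directory to GGUF (f16):
python convert-hf-to-gguf.py /path/to/model --outfile model-f16.gguf

# Optionally quantize to a smaller format, e.g. 4-bit:
make quantize
./quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M
```

If the converter rejects the architecture, adding support means describing the model's tensor layout to llama.cpp itself, which is a larger undertaking than running these commands.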
