add support for llama.cpp local server #1

Open
nischalj10 opened this issue May 29, 2024 · 1 comment

@nischalj10
Owner

llama.cpp is much faster than Ollama, and it also provides an OpenAI-API-compatible local server. This would be a much better way to package local models in desktop apps and would be a great addition to the repo.
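
For context, a minimal sketch of what the client side of that integration could look like. It assumes a llama.cpp server build is already running locally and exposing the OpenAI-compatible /v1/chat/completions route; the port, model path, and prompt here are placeholders, not anything from this repo.

```ts
// Sketch only: assumes the llama.cpp server was started locally with
// something like `llama-server -m ./models/<model>.gguf --port 8080`.
// The /v1/chat/completions route is the server's OpenAI-compatible API.
const res = await fetch("http://localhost:8080/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "local-model", // placeholder; llama-server serves whatever model it was launched with
    messages: [{ role: "user", content: "Hello from the desktop app" }],
    temperature: 0.7,
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);
```

Because the endpoint mirrors the OpenAI schema, any existing OpenAI client code in the app could in principle be pointed at the local server just by swapping the base URL.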

@nischalj10
Owner Author

nischalj10 commented Jul 23, 2024

Update: llama.cpp has dropped VLM support from its server, so this is not being integrated for now. See ggml-org/llama.cpp#5882
