discussion: expanding the use-case of llama.cpp - embedded LLM toolchain #930

Closed
jon-chuang opened this issue Apr 13, 2023 · 1 comment

Comments


jon-chuang (Contributor) commented Apr 13, 2023

For instance, should llama.cpp:

  1. support an embedded vector-similarity knowledge base? (a rough retrieval sketch is below)
  2. support other models for multimodality (similar to GPT-4), e.g. CLIP-based encoders?

It doesn't have to be part of the main build; it could live in a separate llama.embed toolchain.
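
As a point of reference for item 1, here is a minimal sketch of what the retrieval side of an embedded vector-similarity knowledge base could look like. It assumes each text chunk has already been embedded (e.g. via llama.cpp's embedding mode); the `doc_entry` struct and function names are illustrative only, not part of any existing llama.cpp API.

```cpp
// Minimal sketch of an embedded vector-similarity knowledge base.
// Assumes embeddings were produced elsewhere (e.g. llama.cpp embedding mode);
// types and names here are hypothetical, for illustration only.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

struct doc_entry {
    std::string text;               // original chunk of text
    std::vector<float> embedding;   // its embedding vector
};

// cosine similarity between two equal-length vectors
static float cosine_sim(const std::vector<float> & a, const std::vector<float> & b) {
    float dot = 0.0f, na = 0.0f, nb = 0.0f;
    for (size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-8f);
}

// return indices of the top_k documents most similar to the query embedding
std::vector<size_t> top_k_similar(const std::vector<doc_entry> & kb,
                                  const std::vector<float> & query,
                                  size_t top_k) {
    std::vector<std::pair<float, size_t>> scored;
    scored.reserve(kb.size());
    for (size_t i = 0; i < kb.size(); ++i) {
        scored.emplace_back(cosine_sim(kb[i].embedding, query), i);
    }
    const size_t k = std::min(top_k, scored.size());
    std::partial_sort(scored.begin(), scored.begin() + k, scored.end(),
                      [](const auto & x, const auto & y) { return x.first > y.first; });
    std::vector<size_t> result;
    for (size_t i = 0; i < k; ++i) {
        result.push_back(scored[i].second);
    }
    return result;
}
```

The retrieved chunks would then be prepended to the prompt before generation, i.e. basic retrieval-augmented generation, which is roughly the workflow a llama.embed toolchain would wrap up.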


This issue was closed because it has been inactive for 14 days since being marked as stale.
