
llama : support quantum K cache #4312

Merged · 14 commits · Dec 6, 2023

Commit af99c6f: llama : remove memory_f16 and kv_f16 flags
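
The commit above indicates that the old boolean memory_f16 / kv_f16 options were removed in favor of an explicit cache data type in the context parameters, which is what lets the K cache be stored in a quantized format. Below is a minimal sketch of selecting a quantized K cache through the llama.cpp C API; it assumes the type_k / type_v fields added around this change and the llama_* function names of late-2023 llama.cpp, and GGML_TYPE_Q8_0 is only one possible choice of cache type. Exact field names and signatures may differ between versions, and this is not code taken from the PR itself.

```cpp
// Sketch: request a quantized K cache via llama_context_params
// (field names type_k / type_v assumed from the commit message).
#include "llama.h"

int main(void) {
    llama_backend_init(/*numa=*/false);

    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == NULL) {
        return 1;
    }

    llama_context_params cparams = llama_context_default_params();
    cparams.type_k = GGML_TYPE_Q8_0; // store the K cache quantized (assumed field)
    cparams.type_v = GGML_TYPE_F16;  // keep the V cache in f16

    llama_context * ctx = llama_new_context_with_model(model, cparams);
    if (ctx == NULL) {
        llama_free_model(model);
        return 1;
    }

    // ... evaluate tokens as usual ...

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

With a Q8_0 K cache, the K portion of the KV cache takes roughly half the memory of f16 at some cost in precision; the V cache is left at f16 here because the PR title only covers the K cache.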