
readme : link Starcoder and mistral models #3399

Merged
merged 1 commit into ggml-org:master on Sep 29, 2023
Conversation

BarfingLemurs
Contributor

closes #1901

@BarfingLemurs BarfingLemurs changed the title readme: link Starcoder and mistral models readme : link Starcoder and mistral models Sep 29, 2023
@ggerganov ggerganov merged commit 0a4a4a0 into ggml-org:master Sep 29, 2023
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 2, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp:
  ggml-cuda : perform cublas mat mul of quantized types as f16 (ggml-org#3412)
  llama.cpp : add documentation about rope_freq_base and scale values (ggml-org#3401)
  train : fix KQ_pos allocation (ggml-org#3392)
  llama : quantize up to 31% faster on Linux and Windows with mmap (ggml-org#3206)
  readme : update hot topics + model links (ggml-org#3399)
  readme : add link to grammars app (ggml-org#3388)
  swift : fix build on xcode 15 (ggml-org#3387)
  build : enable more non-default compiler warnings (ggml-org#3200)
  ggml_tensor: update the structure comments. (ggml-org#3283)
  ggml : release the requested thread pool resource (ggml-org#3292)
  llama.cpp : split llama_context_params into model and context params (ggml-org#3301)
  ci : multithreaded builds (ggml-org#3311)
  train : finetune LORA (ggml-org#2632)
  gguf : basic type checking in gguf_get_* (ggml-org#3346)
  gguf : make token scores and types optional (ggml-org#3347)
  ci : disable freeBSD builds due to lack of VMs (ggml-org#3381)
  llama : custom attention mask + parallel decoding + no context swaps (ggml-org#3228)
  docs : mark code as Bash (ggml-org#3375)
  readme : add Mistral AI release 0.1 (ggml-org#3362)
  ggml-cuda : perform cublas fp16 matrix multiplication as fp16 (ggml-org#3370)
yusiwen pushed a commit to yusiwen/llama.cpp that referenced this pull request Oct 7, 2023
Successfully merging this pull request may close these issues.

Wizard Coder 15b Support?
2 participants