
README: add bloom model #3570

Merged

1 commit merged into ggml-org:master on Oct 10, 2023

Conversation

xingchensong (Contributor) commented on Oct 10, 2023

Follow-up to PR #3553.

@ggerganov merged commit c5b4936 into ggml-org:master on Oct 10, 2023
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request on Oct 12, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp: (34 commits)
  examples: support LLaVA v1.5 (multimodal model) (ggml-org#3436)
  docs : fix typo GOMP_CPU_AFFINITY (ggml-org#3597)
  cmake : fix add_compile_options on macOS
  typo : it is `--n-gpu-layers` not `--gpu-layers` (ggml-org#3592)
  ci : check if there is enough VRAM (ggml-org#3596)
  server : add completion mode (no chat) (ggml-org#3582)
  prompts : add mnemonics.txt
  server : fix kv cache management (ggml-org#3588)
  main : fix session loading bug (ggml-org#3400)
  server : add parameter -tb N, --threads-batch N (ggml-org#3584)
  common : fix mirostat state when using multiple sequences (ggml-org#3543)
  batched : add bench tool (ggml-org#3545)
  examples : add batched.swift + improve CI for swift (ggml-org#3562)
  Add MPT model to supported models in README.md (ggml-org#3574)
  Minor improvements in GPT2 tokenizer (ggml-org#3567)
  readme : add bloom (ggml-org#3570)
  llm : add bloom models (ggml-org#3553)
  swift : improvements and fixes (ggml-org#3564)
  llm : add MPT support (ggml-org#3417)
  infill. : fix tokenization (ggml-org#3508)
  ...