
ggml_tensor: update the structure comments. #3283

Merged: 3 commits into ggml-org:master on Sep 28, 2023

Conversation

@huajsj (Contributor) commented Sep 20, 2023

No description provided.

Co-authored-by: slaren <slarengh@gmail.com>
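
The PR carries no description, but going by its title and the state of ggml.h after the merge, the change updates the comments that document the `nb` (stride) fields of `struct ggml_tensor` in terms of `ggml_type_size()` and `ggml_blck_size()`, so that they also hold for quantized types where one block packs several elements. A sketch of the relevant fragment (field list abbreviated; the exact wording in the merged header may differ):

```c
// excerpt from ggml.h (sketch): the n-dimensional tensor header.
// The stride comments are stated via ggml_type_size()/ggml_blck_size()
// so they remain correct for quantized types, where a "block" of
// elements shares one packed representation.
struct ggml_tensor {
    enum ggml_type type;

    int64_t ne[GGML_MAX_DIMS]; // number of elements per dimension
    size_t  nb[GGML_MAX_DIMS]; // stride in bytes:
                               // nb[0] = ggml_type_size(type)
                               // nb[1] = nb[0] * (ne[0] / ggml_blck_size(type)) + padding
                               // nb[i] = nb[i-1] * ne[i-1]

    // ... remaining fields (op, src, data, name, ...) omitted
};
```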
@huajsj (Contributor Author) commented Sep 22, 2023

Added a minor change to trigger the build.

@ggerganov ggerganov merged commit 0ccfc62 into ggml-org:master Sep 28, 2023
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 2, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp:
  ggml-cuda : perform cublas mat mul of quantized types as f16 (ggml-org#3412)
  llama.cpp : add documentation about rope_freq_base and scale values (ggml-org#3401)
  train : fix KQ_pos allocation (ggml-org#3392)
  llama : quantize up to 31% faster on Linux and Windows with mmap (ggml-org#3206)
  readme : update hot topics + model links (ggml-org#3399)
  readme : add link to grammars app (ggml-org#3388)
  swift : fix build on xcode 15 (ggml-org#3387)
  build : enable more non-default compiler warnings (ggml-org#3200)
  ggml_tensor: update the structure comments. (ggml-org#3283)
  ggml : release the requested thread pool resource (ggml-org#3292)
  llama.cpp : split llama_context_params into model and context params (ggml-org#3301)
  ci : multithreaded builds (ggml-org#3311)
  train : finetune LORA (ggml-org#2632)
  gguf : basic type checking in gguf_get_* (ggml-org#3346)
  gguf : make token scores and types optional (ggml-org#3347)
  ci : disable freeBSD builds due to lack of VMs (ggml-org#3381)
  llama : custom attention mask + parallel decoding + no context swaps (ggml-org#3228)
  docs : mark code as Bash (ggml-org#3375)
  readme : add Mistral AI release 0.1 (ggml-org#3362)
  ggml-cuda : perform cublas fp16 matrix multiplication as fp16 (ggml-org#3370)
yusiwen pushed a commit to yusiwen/llama.cpp that referenced this pull request Oct 7, 2023
* ggml_tensor: update the structure comments.

* remove semicolon

Co-authored-by: slaren <slarengh@gmail.com>

* Update ggml.h

---------

Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
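
As context for why these stride comments matter, here is a minimal sketch of how the `nb[]` byte strides address an element of a ggml tensor. It assumes a non-quantized type such as `GGML_TYPE_F32`, so that `ggml_blck_size(type) == 1` and each element occupies exactly `nb[0]` bytes; `tensor_element` is a hypothetical helper, not a ggml API:

```c
#include <stddef.h>
#include <stdint.h>

// Minimal sketch: compute the address of element (i0, i1, i2, i3)
// from a tensor's data pointer and its byte strides nb[].
// For quantized types, addressing instead works block-wise, which is
// exactly why the comments express nb[1] via ggml_blck_size().
static inline void * tensor_element(void * data, const size_t nb[4],
                                    int64_t i0, int64_t i1,
                                    int64_t i2, int64_t i3) {
    return (char *) data + i0*nb[0] + i1*nb[1] + i2*nb[2] + i3*nb[3];
}
```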