
Consistent prefix/suffix coloring #3425

Merged

Conversation

@h-h-h-h h-h-h-h (Contributor) commented Oct 1, 2023

This PR removes all coloring from the `--in-prefix` text. Previously, the `--in-suffix` text was never colored, while the `--in-prefix` text was left uncolored only on its first occurrence and colored on every later one. Now, strictly the user's actually typed input is colored.

Potentially, the `--in-prefix` and `--in-suffix` texts could instead be colored yellow, like the initial prompt that can be read from a file. (Not done by this PR.)
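For illustration, here is a minimal, self-contained C++ sketch of the resulting behavior in the `main` example: the color is switched on only around the typed input, so the prefix and suffix stay uncolored on every turn. The ANSI escape codes and prompt strings below are made up for the example; this is not the PR's actual diff.

```cpp
#include <iostream>
#include <string>

// ANSI escape codes; hypothetical stand-ins for llama.cpp's console helper.
static const char * COLOR_USER_INPUT = "\x1b[32m"; // green for typed input
static const char * COLOR_RESET      = "\x1b[0m";  // back to the default color

int main() {
    // Hypothetical values for the --in-prefix / --in-suffix options.
    const std::string in_prefix = "### Instruction:\n";
    const std::string in_suffix = "\n### Response:\n";

    // The prefix is printed before the color switch, so it stays uncolored
    // on every turn -- the behavior this PR makes consistent.
    std::cout << in_prefix;

    std::cout << COLOR_USER_INPUT;       // color only the typed user input
    std::string user_input;
    std::getline(std::cin, user_input);
    std::cout << COLOR_RESET;            // reset before printing the suffix

    // The suffix was already uncolored before the PR; unchanged here.
    std::cout << in_suffix;
    return 0;
}
```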

The `--in-prefix` text was inconsistently colored. Now, it's never colored, just like the `--in-suffix` text.
@h-h-h-h h-h-h-h force-pushed the h-h-h-h-bugfix-no-in-prefix-color branch from 8a778c1 to 178d0dd on October 2, 2023 19:57
@ggerganov ggerganov merged commit 8186242 into ggml-org:master on Oct 3, 2023
@h-h-h-h h-h-h-h deleted the h-h-h-h-bugfix-no-in-prefix-color branch on October 4, 2023 12:31
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 5, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp: (24 commits)
  convert : fix Baichuan2 models by using vocab size in config.json (ggml-org#3299)
  readme : add project status link
  ggml : fix build after ggml-org#3329
  llm : add Refact model (ggml-org#3329)
  sync : ggml (conv 1d + 2d updates, UB fixes) (ggml-org#3468)
  finetune : readme fix typo (ggml-org#3465)
  ggml : add RISC-V Vector Support for K-Quants and improved the existing intrinsics (ggml-org#3453)
  main : consistent prefix/suffix coloring (ggml-org#3425)
  llama : fix session saving/loading (ggml-org#3400)
  llama : expose model's rope_freq_scale in the API (ggml-org#3418)
  metal : alibi for arbitrary number of heads (ggml-org#3426)
  cmake : make LLAMA_NATIVE flag actually use the instructions supported by the processor (ggml-org#3273)
  Work on the BPE tokenizer (ggml-org#3252)
  convert : fix vocab size when not defined in hparams (ggml-org#3421)
  cmake : increase minimum version for add_link_options (ggml-org#3444)
  CLBlast: Add broadcast support for matrix multiplication (ggml-org#3402)
  gguf : add BERT, MPT, and GPT-J arch info (ggml-org#3408)
  gguf : general usability improvements (ggml-org#3409)
  cmake : make CUDA flags more similar to the Makefile (ggml-org#3420)
  finetune : fix ggml-org#3404 (ggml-org#3437)
  ...
yusiwen pushed a commit to yusiwen/llama.cpp that referenced this pull request Oct 7, 2023
* Typo

* No `--in-prefix` coloring

The `--in-prefix` text was inconsistently colored. Now, it's never colored, just like the `--in-suffix` text.