Ignores Cyrillic under Win10 #831

Closed

OllanTaytambur opened this issue Apr 7, 2023 · 2 comments

@OllanTaytambur
The main.exe program simply does not see Russian (Cyrillic) text in a standard Windows 10 environment, in both cmd.exe and PowerShell.

main.exe from https://github.com/ggerganov/llama.cpp/releases/download/master-cc9cee8/llama-master-cc9cee8-bin-win-avx2-x64.zip
model from https://huggingface.co/IlyaGusev/llama_13b_ru_turbo_alpaca_lora_llamacpp/tree/main/13B
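
This looks like the usual Windows console code-page problem: the console defaults to a legacy OEM/ANSI code page (866/1251 on Russian systems), so UTF-8 Cyrillic in the prompt argument and in the output gets mangled. A minimal sketch of the standard Win32 workaround, assuming the code page is the culprit (SetConsoleCP / SetConsoleOutputCP are real Win32 calls; the snippet itself is illustrative and not llama.cpp's actual code):

// Minimal sketch: force the console to UTF-8 before doing any I/O.
// Illustrative workaround only, not llama.cpp's actual code.
// Assumes the source file is saved as UTF-8.
#include <windows.h>
#include <cstdio>

int main(void) {
    SetConsoleCP(CP_UTF8);        // input code page  (65001)
    SetConsoleOutputCP(CP_UTF8);  // output code page (65001)
    // UTF-8 Cyrillic should now round-trip on most Windows 10 terminals.
    std::printf("Вопрос: Почему трава зеленая?\n");
    return 0;
}

A shell-level equivalent is running chcp 65001 in cmd.exe before invoking main.exe.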

Expected Behavior

Вопрос: Почему трава зеленая?
Выход: Трава зеленой из-за того, что она содержит хлорофиллы, пигменты, которые помогают ей фотосинтезировать энергию из солнечного света. Хлорофилл способен перерабатывать углекислый газ и воду в органические вещества, такие как углеводы, аминокислоты и жиры, которые необходимы растениям для их роста и развития.

[Translation: Question: Why is grass green? Output: Grass is green because it contains chlorophylls, pigments that help it photosynthesize energy from sunlight. Chlorophyll can convert carbon dioxide and water into organic substances such as carbohydrates, amino acids and fats, which plants need for their growth and development.]

Current Behavior

PS N:\NLP_MODEL> .\llama-master-cc9cee8-bin-win-avx2-x64\main.exe -m .\llama_13b_ru_turbo_alpaca_lora_llamacpp\ggml-model-q4_0.bin -p "Вопрос: Почему трава зеленая? Ответ:" -n 512 --temp 0.1
main: seed = 1680869612
llama_model_load: loading model from '.\llama_13b_ru_turbo_alpaca_lora_llamacpp\ggml-model-q4_0.bin' - please wait ...
llama_model_load: n_vocab = 32000
llama_model_load: n_ctx = 512
llama_model_load: n_embd = 5120
llama_model_load: n_mult = 256
llama_model_load: n_head = 40
llama_model_load: n_layer = 40
llama_model_load: n_rot = 128
llama_model_load: f16 = 2
llama_model_load: n_ff = 13824
llama_model_load: n_parts = 2
llama_model_load: type = 2
llama_model_load: ggml map size = 7759.83 MB
llama_model_load: ggml ctx size = 101.25 KB
llama_model_load: mem required = 9807.93 MB (+ 1608.00 MB per state)
llama_model_load: loading tensors from '.\llama_13b_ru_turbo_alpaca_lora_llamacpp\ggml-model-q4_0.bin'
llama_model_load: model size = 7759.39 MB / num tensors = 363
llama_init_from_file: kv self size = 400.00 MB

system_info: n_threads = 4 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
sampling: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.100000
generate: n_ctx = 512, n_batch = 8, n_predict = 512, n_keep = 0

: ? : 🏃

The answer is "Yes". [end of text]

llama_print_timings: load time = 6288.63 ms
llama_print_timings: sample time = 7.36 ms / 14 runs ( 0.53 ms per run)
llama_print_timings: prompt eval time = 16411.67 ms / 38 tokens ( 431.89 ms per token)
llama_print_timings: eval time = 6213.46 ms / 13 runs ( 477.96 ms per run)
llama_print_timings: total time = 24098.30 ms

Environment and Context

OS: Win10
CPU: XEON E5-2640v3
Memory: 16GB

@prusnak (Collaborator) commented Apr 7, 2023

Duplicate of #646

prusnak marked this as a duplicate of #646 on Apr 7, 2023
prusnak closed this as completed on Apr 7, 2023
@prusnak (Collaborator) commented Apr 8, 2023

#840 has been merged; please pull the latest master and test whether it fixes your issue.
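
For context, Windows fixes of this kind usually read console input as UTF-16 via ReadConsoleW, which bypasses the code page entirely, and then convert it to UTF-8. A hedged sketch of that pattern (illustrative only; see #840 itself for the actual change):

// Sketch of the typical Windows UTF-8 console-input pattern:
// read UTF-16 with ReadConsoleW, then convert to UTF-8.
// Illustrative only; not the literal diff from #840.
#include <windows.h>
#include <string>

std::string read_console_utf8() {
    wchar_t wbuf[1024];
    DWORD nread = 0;
    // ReadConsoleW returns UTF-16 regardless of the console code page,
    // so Cyrillic input survives even on a legacy code page.
    ReadConsoleW(GetStdHandle(STD_INPUT_HANDLE), wbuf, 1024, &nread, NULL);
    int len = WideCharToMultiByte(CP_UTF8, 0, wbuf, (int)nread,
                                  NULL, 0, NULL, NULL);
    if (len <= 0) return "";
    std::string out(len, '\0');
    WideCharToMultiByte(CP_UTF8, 0, wbuf, (int)nread,
                        &out[0], len, NULL, NULL);
    return out;
}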
