Replies: 2 comments
- Hello, has this problem been solved?
- Same problem here: I can't get it to dump any log info to a file, no matter which `--log` parameters I try.
-
Hello all,
This issue has baffled me for several hours, so it looks like I need help here. I am trying to run the llama.cpp server in a Docker container, which in and of itself is not very difficult to do. However, there does not seem to be a way to make it write its logs anywhere but stdout.
I am starting the server as:

```sh
llama-server --model /models/L3-8B-Stheno-v3.2-Q5_K_M-imat.gguf \
  --logdir /logs --log-file llama \
  --ctx-size 4096 --n-gpu-layers 99 \
  --host 0.0.0.0 \
  --log-enable --log-new --log-append
```
As you can see, I have turned every log-related setting ON, and still nothing shows up. The `/logs` directory is mapped in docker-compose.yml to a local directory (see the sketch below). I know this isn't a Docker permission issue, because a script running in the same container downloads models and is able to save them to `/models`, which is mapped in exactly the same way. So the issue seems to be the configuration. Am I not calling the server correctly?
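For reference, the relevant part of my docker-compose.yml looks roughly like this; the image tag and host paths are illustrative rather than my exact setup:

```yaml
services:
  llama:
    image: ghcr.io/ggerganov/llama.cpp:server-cuda  # assumed image tag
    volumes:
      - ./models:/models   # model downloads land here fine, so mounts work
      - ./logs:/logs       # intended log destination; stays empty
    ports:
      - "8080:8080"
```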
Thanks for your ideas!
P.S. Is there a way to capture prompts and outputs in a log? I would love to have that feature for prompt tuning!
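As a stopgap, since everything goes to stdout, I can at least capture the container's output from the Docker side (assuming the service is named `llama` as in the sketch above):

```sh
# tail the service's stdout/stderr into a file until interrupted
docker compose logs --no-color --follow llama >> llama-stdout.log 2>&1
```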