I have compiled `llama.cpp` with the `LLAMA_CUDA` option, and I notice that running an edge model does not use the GPU at all. Is there something I should look for in my config?
Also, would it be possible to download models other than the `LIBERTY - EDGE` models? I assume I could earn more from inference if I had a more popular model, too.
(Running on Ubuntu Linux with the proprietary NVIDIA drivers.)
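In case it helps, this is how I'm checking GPU usage (a minimal sketch; the one-second interval is just my habit):

```sh
# Poll GPU utilization and memory once per second while inference runs.
# If llama.cpp were offloading, its process would show up in the list
# and "GPU-Util" would climb above 0% — for me it stays idle.
watch -n 1 nvidia-smi
```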
Can you share what arguments you use, so we can figure out how to enable GPU acceleration?
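For reference, as far as I know GPU offload in llama.cpp is opt-in at run time: a CUDA build still keeps all layers on the CPU unless you pass `-ngl`/`--n-gpu-layers`. A minimal sketch (the model path and layer count are placeholders for your setup):

```sh
# Rebuild with CUDA enabled (make-based build; a cmake build with
# -DLLAMA_CUDA=ON should be equivalent).
make clean && make LLAMA_CUDA=1

# Offload 35 layers to the GPU. The default is 0 offloaded layers,
# so without -ngl even a CUDA build runs entirely on the CPU.
./main -m models/your-model.gguf -ngl 35 -p "Hello"
```

If `nvidia-smi` still shows nothing after that, it's worth confirming the binary was actually rebuilt with CUDA rather than reusing a cached CPU-only build.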
I also don't know how to compile it on Windows yet... I would be thankful for any information, steps, or commands.
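On Windows, the usual route is the cmake build. A sketch of what I'd try (assuming the CUDA Toolkit and Visual Studio's C++ build tools are installed; I haven't verified this myself):

```sh
# From a Developer PowerShell / "x64 Native Tools" prompt:
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Configure with CUDA enabled and build a Release binary.
cmake -B build -DLLAMA_CUDA=ON
cmake --build build --config Release
```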