I ran llama.cpp on my Android phone, which has 8 threads and 8 GB of RAM, of which about 7.16 GB is available. That should be more than enough to run the 7B Alpaca model. But when I run it, it just repeats the question I give it. I am using the ./examples/chat.sh file. Why does it do that, and how do I fix it?
Just guessing: after the prompt is processed there can be a noticeable delay before the completion starts.
Also, there are interactive modes that wait for return/enter before generating.
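For reference, chat.sh is a thin wrapper around llama.cpp's main binary, and interactive mode is controlled by command-line flags. The sketch below shows the kind of invocation involved; the model path and reverse-prompt string here are assumptions for illustration, not taken from this issue:

```shell
# -t matches the phone's 8 threads; -n caps the number of generated tokens;
# -i/--interactive-first make main wait for user input before generating
# anything; -r hands control back to the user whenever the model emits the
# reverse prompt ("User:" here is an assumed value).
./main -m ./models/7B/ggml-model-q4_0.bin -t 8 -n 256 \
    --repeat_penalty 1.1 -i --interactive-first -r "User:"
```

With `--interactive-first`, the program echoes the prompt and then pauses for input, which can look like it is merely repeating the question until you press enter.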