Troubleshooting
Sometimes things don't go as planned, and we're here to help you troubleshoot. Below are some common problems users encounter with our chat interface and their solutions.
It is running but I can't find the chat window!
- Windows: There are a few ways to find a "lost" application window.
    - Shift + right-click the application's icon in the taskbar and choose "Maximize".
    - In the Task Manager, find the running application, right-click it, and choose one of "Switch to", "Bring to front", or "Maximize".
It is crashing...
...when I load a model in the chat window!
- Ensure you have not exceeded the available memory for your device.
- If you are using a GPU, lower the GPU Layers setting until the model loads (see the sketch after this list).
- If you have changed the context length, it may be too large; try reducing it.
- It's possible there's a bug or problem with your specific setup. Please check our issues and discussions, or talk to us on Discord.
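As a quick way to experiment with these settings, here is a minimal sketch using the official `gpt4all` Python bindings. The model file name is illustrative, and the `device`, `ngl` (GPU layers), and `n_ctx` (context length) keyword arguments assume a recent version of the bindings:

```python
from gpt4all import GPT4All

MODEL = "Meta-Llama-3-8B-Instruct.Q4_0.gguf"  # illustrative file name

# Start conservatively: run on the CPU with a small context window.
model = GPT4All(MODEL, device="cpu", n_ctx=2048)

# If that loads, retry on the GPU, offloading only some layers (ngl).
# Raise ngl until loading fails, then back off to the last working value.
# model = GPT4All(MODEL, device="gpu", ngl=16, n_ctx=2048)

with model.chat_session():
    print(model.generate("Hello!", max_tokens=64))
```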
...when I'm embedding LocalDocs!
- Ensure you aren't overheating your PC. Keep an eye on that CPU temperature!
- If you have an Nvidia GPU, use CUDA; this runs embeddings much faster than the CPU (see the sketch below).
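For example, the Python bindings' `Embed4All` helper lets you request a CUDA device explicitly, assuming your build of the bindings accepts a `device` argument here:

```python
from gpt4all import Embed4All

# device="cuda" requests an Nvidia GPU; whether it is honored depends
# on your installed build and drivers.
embedder = Embed4All(device="cuda")

chunks = ["First document chunk.", "Second document chunk."]
vectors = [embedder.embed(c) for c in chunks]
print(len(vectors), len(vectors[0]))  # chunk count, embedding dimension
```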
It is running but I get a network error.
The "Add Models" page requires an internet connection; on a fresh install while offline, it will show a network error.
- You are offline, or there is a firewall blocking your internet connection.
    - Many third-party firewalls will block an "untrusted" program. You will want a firewall rule set to "Allow" for this program if you wish to use any network features.
- Public firewalls may block access to this feature.
    - If you live in China, you may find this affects you.
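To tell a firewall problem apart from an application problem, you can test reachability of the download server yourself. A minimal sketch; the URL is an assumption for illustration, so substitute whatever endpoint your version actually uses:

```python
import urllib.request

URL = "https://gpt4all.io/models/models3.json"  # assumed models-list endpoint

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        print("Reachable, HTTP status:", resp.status)
except Exception as exc:
    # A timeout or connection error here usually points to a firewall,
    # a proxy, or an offline machine rather than to the app itself.
    print("Could not reach the server:", exc)
```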
This software is installed on your computer and does not require internet to run, though you will need to retrieve the model files somehow.
See the wiki entry on "Sideloading" and configuring custom models.
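Once a model file is on your machine, the Python bindings can load it fully offline. A sketch with a hypothetical path and file name; `allow_download=False` ensures nothing is fetched from the network:

```python
from gpt4all import GPT4All

model = GPT4All(
    model_name="my-sideloaded-model.Q4_0.gguf",  # hypothetical file name
    model_path="/path/to/your/models",           # hypothetical directory
    allow_download=False,  # fail instead of downloading anything
)
print(model.generate("Say hi.", max_tokens=32))
```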
Error Loading Models
It is possible you are trying to load a model from HuggingFace whose weights are not compatible with the llama.cpp backend.
Supported models as mentioned by llama.cpp:
(Typically finetunes of the base models below are supported as well.)
- LLaMA 🦙
- LLaMA 2 🦙🦙
- LLaMA 3 🦙🦙🦙
- Mistral 7B
- Mixtral MoE
- DBRX
- Falcon
- Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2
- Vigogne (French)
- BERT
- Koala
- Baichuan 1 & 2 + derivations
- Aquila 1 & 2
- Starcoder models
- Refact
- MPT
- Bloom
- Yi models
- StableLM models
- Deepseek models
- Qwen models
- PLaMo-13B
- Phi models
- GPT-2
- Orion 14B
- InternLM2
- CodeShell
- Gemma
- Mamba
- Grok-1
- Xverse
- Command-R models
- SEA-LION
- GritLM-7B + GritLM-8x7B
- OLMo
- GPT-NeoX + Pythia
- ChatGLM3-6b + ChatGLM4-9b
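Before assuming an architecture incompatibility, it is worth confirming that the file you downloaded is a GGUF model at all; older GGML files and raw safetensors checkpoints will not load. A minimal check of the GGUF header (the file name is hypothetical):

```python
import struct

def check_gguf(path: str) -> None:
    # GGUF files begin with the 4-byte magic b"GGUF" followed by a
    # little-endian uint32 format version.
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            print(f"{path}: not a GGUF file (magic={magic!r})")
            return
        (version,) = struct.unpack("<I", f.read(4))
        print(f"{path}: GGUF version {version}")

check_gguf("model.Q4_0.gguf")
```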
If the problem persists, please share your experience on our Discord.
Bad Responses
Use one of the example chats to double-check that your system is running models correctly.
Responses Incoherent
If you are seeing something that does not resemble the example chats at all, or if the responses look nonsensical, ensure you have applied the prompt template supplied by the model's creator (see the sketch below).
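In the Python bindings, the template can be passed to `chat_session`. The strings below are illustrative placeholders, so copy the exact system prompt and template published by the model's creator; `{0}` marks where the user's message is inserted:

```python
from gpt4all import GPT4All

model = GPT4All("model.Q4_0.gguf")  # hypothetical file name

# Replace both strings with the ones the model's creator supplies.
with model.chat_session(
    system_prompt="You are a helpful assistant.",
    prompt_template="### Human:\n{0}\n\n### Assistant:\n",
):
    print(model.generate("What is a context window?", max_tokens=128))
```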
Responses Incorrect
LLMs can be unreliable. It's helpful to know what their training data was. They are less likely to be correct when asked about data they were not trained on, unless you give the necessary information in the prompt as context.
Giving LLMs additional context, like chatting using LocalDocs, can help merge the language model's ability to understand text with the files that you trust to contain the information you need.
Including information in a prompt is not a guarantee that it will be used correctly, but the clearer and more concise your prompts, and the more relevant they are to your files, the better.
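A hand-rolled version of the same idea, with toy strings standing in for the passage LocalDocs would retrieve from your files:

```python
from gpt4all import GPT4All

model = GPT4All("model.Q4_0.gguf")  # hypothetical file name

# Toy stand-ins for a passage retrieved from one of your documents.
context = "The warranty on the X-200 is 24 months from the date of sale."
question = "How long is the warranty on an X-200?"

# Prepending trusted text is what LocalDocs automates: the model answers
# from the supplied context instead of relying on its training data.
prompt = (
    "Using only the context below, answer the question.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)
print(model.generate(prompt, max_tokens=128))
```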