
Troubleshooting

3Simplex edited this page Aug 22, 2024 · 9 revisions

Sometimes things don't go as planned, and we're here to help you troubleshoot any issues you may experience with our chat interface. Below are some common problems users encounter, along with their solutions.

GPT4All-Chat User Interface

It is running but I can't find the chat window!
  • Windows: There are a few ways to find a "lost" application window.
    • Shift + Right-click the icon in the taskbar and choose Maximize.
    • In the Task Manager, find the running application, right-click it, and choose one of "Switch to", "Bring to front", or "Maximize".

It is crashing... ...when I load a model in the chat window!
  • Ensure you have not exceeded the available memory for your device.
    • If you are using a GPU then lower the GPU Layers until the model works.
    • If you have changed the context length, it may be set too large for your device's memory.
  • It's possible there's a bug or problem with your specific setup. Please check issues, discussions or talk to us on Discord.
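
As a rough rule of thumb, a model needs at least its own file size in free memory, plus overhead for the context. The sketch below illustrates that check; the `likely_fits` name and the 1.2× overhead factor are illustrative assumptions, not GPT4All internals:

```python
import os

def likely_fits(model_path: str, free_bytes: int, overhead: float = 1.2) -> bool:
    """Rough check: a model usually needs at least its file size in free
    RAM/VRAM, plus extra for the context (KV cache). The 1.2x overhead
    factor is an illustrative guess, not an exact figure."""
    model_bytes = os.path.getsize(model_path)
    return model_bytes * overhead <= free_bytes
```

If a model would not fit in your GPU's VRAM, lowering the GPU Layers setting offloads less of it to the GPU; if it would not fit in system RAM, try a smaller model or a smaller context length.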

...when I'm embedding localdocs!

  • Ensure you aren't overheating your PC. Keep an eye on that CPU temperature!
  • Use CUDA if you have an Nvidia GPU; this runs embeddings much faster than the slow CPU.
It is running but I get a network error!

The screenshot below is from a fresh install while offline. The "Add Models" page uses an internet connection.

[Screenshot (2024-08-20): network error on the "Add Models" page while offline]

  • You are offline or there is a firewall blocking your internet connection.
    • Many third-party firewalls will block an "untrusted" program.
      • You will want a firewall rule set to "Allow" for this program to access the internet if you wish to use any network features.
    • Public firewalls may block access to this feature.
      • If you live in China you may find this affects you.

This software is installed on your computer and does not require internet to run, though you will need to retrieve the files somehow.
See this entry in the wiki about "Sideloading" and configuring custom models.

Error Loading Models

It is possible you are trying to load a model from HuggingFace whose weights are not compatible with the llama.cpp backend.

Supported models as mentioned by llama.cpp:
(Typically finetunes of the base models below are supported as well.)
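
llama.cpp loads models in the GGUF format, whose files begin with the ASCII magic bytes `GGUF`. A quick way to check whether a downloaded file is at least a GGUF container (this only checks the container format; it does not guarantee the model's architecture is supported):

```python
def looks_like_gguf(path: str) -> bool:
    """Check the 4-byte magic at the start of the file.
    GGUF files begin with the ASCII bytes b"GGUF"."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

If this check fails, the file is not a GGUF model at all (it may be an older GGML file, a safetensors checkpoint, or an incomplete download).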

If the problem persists, please share your experience on our Discord.

Bad Responses

Use one of the example chats to double-check that your system is running models correctly.

Responses Incoherent

If the responses you are seeing do not resemble the example chats at all, or look nonsensical, ensure you have applied the prompt template supplied by the model's creator.
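
For illustration, GPT4All's prompt-template setting substitutes the user's message for a `%1` placeholder. The sketch below shows how such a template is applied; the template text itself is illustrative, so use the one supplied by the model's creator:

```python
def apply_template(template: str, user_message: str) -> str:
    """Substitute the user's message into a GPT4All-style prompt
    template, where %1 marks where the user input goes."""
    return template.replace("%1", user_message)

# Illustrative template in the style many instruct-tuned models expect:
template = "### Human:\n%1\n### Assistant:\n"
prompt = apply_template(template, "What is the capital of France?")
```

A model fine-tuned on one template will often produce incoherent output when prompted with a different one, which is why matching the creator's template matters.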

Responses Incorrect

LLMs can be unreliable, so it helps to know what their training data was. They are less likely to be correct when asked about data they were not trained on, unless you supply the necessary information in the prompt as context.

Giving LLMs additional context, like chatting using LocalDocs, can help merge the language model's ability to understand text with the files that you trust to contain the information you need.

Including information in a prompt is no guarantee that it will be used correctly, but the clearer and more concise your prompts, and the more relevant they are to your files, the better.
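
The idea of providing context can be sketched as prepending the trusted text to the question before it reaches the model. The layout below is illustrative, not LocalDocs' actual internal format:

```python
def prompt_with_context(context_snippets: list[str], question: str) -> str:
    """Prepend retrieved document snippets to the user's question so the
    model can answer from the provided text rather than from memory alone."""
    context = "\n---\n".join(context_snippets)
    return (
        "Use the provided files to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

The model then sees both the snippets and the question in one prompt, which is what lets it ground its answer in your files.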

LocalDocs Issues

The response is not quite what you asked for? Occasionally a model, particularly a smaller or overall weaker LLM, may not use the relevant text snippets from the files that were referenced via LocalDocs. If you are seeing this, it can help to use phrases like "in the docs" or "from the provided files" when prompting your model.