OpenAI problem #5194
Comments
Hey @shilv520, I'm here to help you with your coding conundrums! Let's squash those bugs and unravel the mysteries of code together. 🤖

A similar issue has been discussed in an open thread, where enabling detailed logging was suggested as a way to diagnose the problem further. You can enable verbose logging by running `chatchat-config basic --verbose true`. Additionally, check the logs for any errors reported by xinference, as they might provide more insight into the issue [1].

Regarding your question about not calling the OpenAI API: Langchain-Chatchat can use local models as an alternative to the OpenAI API for generating completions. It supports several local model deployment frameworks, such as Xinference, Ollama, LocalAI, and FastChat, which let you run models locally without relying on external APIs [2][3].
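Since xinference exposes an OpenAI-compatible `/v1/chat/completions` endpoint, you can query it directly without going through the OpenAI API at all. Below is a minimal stdlib-only sketch; the base URL `http://127.0.0.1:9997/v1` matches the xinference address in the logs, but the model name `qwen-chat` and the helper names are illustrative assumptions, not part of chatchat:

```python
import json
import urllib.request


def build_chat_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }


def chat_local(base_url: str, model: str, prompt: str) -> str:
    """POST to a local OpenAI-compatible server (e.g. xinference) and
    return the assistant's reply. Requires the server to be running."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Local servers typically accept any placeholder key.
            "Authorization": "Bearer not-needed",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Assumed endpoint and model name; adjust to your deployment.
    print(chat_local("http://127.0.0.1:9997/v1", "qwen-chat", "你好"))
```

If this direct call succeeds while chatchat still fails, the problem is likely in chatchat's streaming relay rather than in the model server itself.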
This issue has already been marked as
Hello, did you manage to solve this? I ran into the same problem. Thank you!
An error occurred during streaming
INFO: 127.0.0.1:57108 - "POST /chat/chat/completions HTTP/1.1" 200 OK
2025-01-16 17:26:43,528 httpx 3854 INFO HTTP Request: POST http://127.0.0.1:7861/chat/chat/completions "HTTP/1.1 200 OK"
2025-01-16 17:26:43,536 httpx 3812 INFO HTTP Request: POST http://127.0.0.1:9997/v1/chat/completions "HTTP/1.1 200 OK"
2025-01-16 17:26:43.547 | ERROR | chatchat.server.api_server.openai_routes:generator:105 - openai request error: An error occurred during streaming
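The log above shows both upstream requests returning 200 OK, yet the relay fails with "An error occurred during streaming", which suggests the SSE stream from xinference broke or emitted a malformed chunk mid-response. To pinpoint where, you can parse the raw stream lines yourself. A hedged sketch of an OpenAI-style SSE line parser (the function name is mine, not part of chatchat):

```python
import json


def parse_sse_line(line: str):
    """Parse one line of an OpenAI-style SSE stream.

    Returns the decoded JSON chunk, the sentinel string "[DONE]" at the
    end of the stream, or None for blank keep-alive lines. Raises
    ValueError on malformed data -- the kind of chunk that can surface
    downstream as "An error occurred during streaming".
    """
    line = line.strip()
    if not line:
        return None  # blank separator / keep-alive line
    if not line.startswith("data:"):
        raise ValueError(f"unexpected SSE line: {line!r}")
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return "[DONE]"
    try:
        return json.loads(payload)
    except json.JSONDecodeError as exc:
        raise ValueError(f"malformed chunk: {payload!r}") from exc
```

Feeding the raw response lines from the xinference endpoint through this parser will show whether the server sends a bad chunk or simply closes the connection early.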
How do I solve this? Is it possible to avoid calling OpenAI?