
I'm a beginner. Can the deepseek r1 model be deployed locally so that ballonstranslator uses r1 for translation? #758

Open
ptdgzs opened this issue Feb 2, 2025 · 4 comments

Comments

@ptdgzs

ptdgzs commented Feb 2, 2025

No description provided.

@aqssxlzc

aqssxlzc commented Feb 2, 2025 via email

@ptdgzs
Author

ptdgzs commented Feb 2, 2025

Yes, you can. Run it with ollama: ollama run deepseek-r1:32b
Then in the GPT translator settings, fill in the model name deepseek-r1:32b and the local endpoint http://127.0.0.1:11434/v1, and that's it.

ptdgzs @.***> wrote on Sun, Feb 2, 2025 at 12:24:

Is the local endpoint different for everyone?
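The recipe above amounts to pointing any OpenAI-compatible client at Ollama's local server; port 11434 is Ollama's default, so the endpoint is the same on any machine with default settings. Below is a minimal sketch (not BallonsTranslator's own code) for checking that the endpoint responds before filling in the GPT translator settings, assuming the openai Python package is installed and the model has already been pulled.

```python
# Quick check that Ollama's OpenAI-compatible endpoint is serving deepseek-r1:32b.
# Assumes `pip install openai` and a prior `ollama run deepseek-r1:32b`.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:11434/v1",  # the fixed local endpoint from the comment above
    api_key="ollama",  # Ollama ignores the key; any non-empty string is accepted
)

resp = client.chat.completions.create(
    model="deepseek-r1:32b",
    messages=[
        {"role": "system", "content": "Translate the user's text into Chinese."},
        {"role": "user", "content": "It is dangerous beyond this point."},
    ],
)

# r1 models usually prepend their chain of thought in <think>...</think> tags,
# so the reply may need that block stripped before it is used as a translation.
print(resp.choices[0].message.content)
```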

@gsxgmpkt9020

gsxgmpkt9020 commented Feb 2, 2025

local endpoint

Didn't he already spell out the fixed-port address?
Also, is your graphics card top of the line? Both programs are GPU burners. With the money a card like that costs, you might as well just pay for the API.

@mayako21126
Contributor

You'd be better off just using the API plus a system prompt to lift the restrictions. ds's API is ChatGPT-compatible, and even a low-spec local ds is still very resource-hungry.
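Since the hosted DeepSeek API speaks the same ChatGPT-style protocol, switching from the local setup to the paid API is mostly a matter of changing the base URL, key, and model name. A minimal sketch under that assumption follows; the base URL and model names are taken from DeepSeek's public documentation, and the system prompt is only a generic placeholder, not any particular restriction-lifting prompt.

```python
# Sketch of the paid-API route: DeepSeek's endpoint is OpenAI/ChatGPT-compatible,
# so the same client code works with a different base_url and key.
# Assumes `pip install openai` and a DEEPSEEK_API_KEY environment variable.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key=os.environ["DEEPSEEK_API_KEY"],
)

resp = client.chat.completions.create(
    model="deepseek-chat",  # per DeepSeek's docs; "deepseek-reasoner" selects the R1 model
    messages=[
        # Placeholder system prompt; adjust it to the material being translated.
        {"role": "system", "content": "You are a manga translator. Translate the user's text into Chinese faithfully."},
        {"role": "user", "content": "It is dangerous beyond this point."},
    ],
)
print(resp.choices[0].message.content)
```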
