Cannot set up Deepseek model using Azure Foundry #3902
Comments
Same problem here. It would be nice to have a way to define the complete endpoint in the config file directly...
Same issue; cannot add the deepseek-r1 model from Azure.
Try with …
Strangely, it works, but only with "hello". As soon as I send another prompt, I get return code 400...
It seems that it works and the errors are from the backend side, not from Continue. Thanks.
Sometimes it works, sometimes it doesn't. It randomly returns a 404 or 400 error, and sometimes it just keeps running without actually generating any text.
You need to suffix the URL with …
hey @vladiliescu - I tried the solution you provided but am still facing the error below. I also tried adding the /v1 endpoint, but it did not work. Could you share the Continue version you are using? [Extension Host] Error handling webview message: { Error: HTTP 404 Not Found from https://deepseek-r1-model01.eastus.models.ai.azure.com/models/chat/completions This may mean that you forgot to add '/v1' to the end of your 'apiBase' in config.json.
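For anyone following along, a minimal sketch of what that suggestion implies, assuming the serverless endpoint exposes an OpenAI-compatible API under /v1. The hostname, model name, and key below are placeholders, not values confirmed in this thread:

```json
{
  "title": "DeepSeek-R1 (Azure serverless)",
  "provider": "openai",
  "model": "deepseek-r1",
  "apiBase": "https://<your-endpoint>.eastus.models.ai.azure.com/v1",
  "apiKey": "<your-api-key>"
}
```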
@CodesBySivaSankar what's the …? If it helps, I've documented my approach for both DeepSeek-R1 and o3-mini here.
Yes, @vladiliescu, I believe deepseek-r1-model01 is a deployment name. Could you check the attached screenshot to confirm whether I picked the deployment name from the correct location?
@CodesBySivaSankar That's the model name -- you need to use the Endpoint Target URI on the right of that screen.
@vladiliescu - I used the Target URI itself, and deepseek-r1-model01 is part of the URI path, but I'm still getting a 404 error. Could you confirm whether the deployment name is supposed to be in the URI path on your end? Also, has anyone else managed to resolve this issue?
This config worked for me yesterday (2025-02-05):

```json
{
  "apiKey": "REDACTED",
  "apiBase": "https://<<REDACTED>>.services.ai.azure.com/models",
  "apiType": "openai",
  "model": "DeepSeek-R1",
  "title": "AZURE deepseek R1",
  "apiVersion": "2024-05-01-preview",
  "provider": "azure"
}
```

It stopped working today (2025-02-06). Logs show:
I think Azure is overloaded.
It worked for me even without adding the /models endpoint, but I ran into a content filtering issue (jailbreak) on Azure. Error (truncated):

```
…,"detected":true},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}}}
```

The above error indicated that my prompt triggered the jailbreak filter, so I adjusted the DeepSeek prompts mentioned here by removing certain strict terms like "only" and "refuse", which seemed to override the AI safety mechanisms. After making these changes, everything worked fine. I'm also curious about how things behave when we specify the /models endpoint in the URI. @achaphiv I also experienced the Azure models' slow performance.
Before submitting your bug report
Relevant environment info
Description
I am not able to use the Deepseek-R1 model deployed using Azure Foundry.
When I try sending a message, the following error appears in the logs:
Error: HTTP 404 Not Found from https://deepseek-r1-hwgxs.eastus2.models.ai.azure.com/openai/deployments/Deepseek-R1/chat/completions?api-version=2023-07-01-preview
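The URL in this 404 shows the request being routed to the Azure OpenAI deployments path (/openai/deployments/...), which the serverless endpoint does not appear to serve. As a sketch, a config mirroring the one reported working in the comments above routes to the model-inference path instead; all values here are placeholders:

```json
{
  "title": "Azure DeepSeek-R1",
  "provider": "azure",
  "apiType": "openai",
  "model": "DeepSeek-R1",
  "apiVersion": "2024-05-01-preview",
  "apiBase": "https://<resource>.services.ai.azure.com/models",
  "apiKey": "<your-api-key>"
}
```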
To reproduce
No response
Log output