
Errors from custom model providers aren't handled #380

Open
TypeHintsFun opened this issue Mar 28, 2025 · 0 comments
Labels
bug Something isn't working

Comments

@TypeHintsFun

When a custom model provider sends an error message (e.g. rate limit exceeded), the agent fails with an error:

...\agents\models\openai_chatcompletions.py", line 134, in get_response
    f"LLM resp:\n{json.dumps(response.choices[0].message.model_dump(), indent=2)}\n"
                             ~~~~~~~~~~~~~~~~^^^
TypeError: 'NoneType' object is not subscriptable

I'm using a Gemini model through OpenRouter, and the response looks like this:

{
  "id": null,
  "choices": null,
  "created": null,
  "model": null,
  "object": null,
  "service_tier": null,
  "system_fingerprint": null,
  "usage": null,
  "error": {
    "message": "Rate limit exceeded: google/gemini-2.0-flash-exp/...",
    "code": 429,
    "metadata": {
      "headers": {
        "X-RateLimit-Limit": "4",
        "X-RateLimit-Remaining": "0",
        "X-RateLimit-Reset": "1743142560000"
      },
      "provider_name": "Google AI Studio"
    }
  },
  "user_id": "..."
}

As far as I can see, the agents library doesn't have any mechanism for handling provider errors (including errors from OpenAI), so it would be nice to add one.
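A minimal sketch of what such handling could look like, assuming an OpenRouter-style payload where the failure is reported in an `error` object and the other fields are null. The names `ModelProviderError` and `extract_message` are hypothetical, not part of the agents library:

```python
class ModelProviderError(Exception):
    """Hypothetical error raised when a provider returns an error payload."""

    def __init__(self, message, code=None):
        super().__init__(message)
        self.code = code


def extract_message(response: dict):
    """Return the first choice's message, or raise instead of crashing.

    Providers such as OpenRouter can return {"choices": null, "error": {...}},
    so check for an error object before subscripting choices.
    """
    error = response.get("error")
    if error is not None:
        raise ModelProviderError(
            error.get("message", "unknown provider error"),
            code=error.get("code"),
        )
    choices = response.get("choices")
    if not choices:
        raise ModelProviderError("provider response contained no choices")
    return choices[0]["message"]
```

With a check like this, the rate-limit payload above would surface as a typed exception (with its 429 code) instead of a `TypeError: 'NoneType' object is not subscriptable`.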
