
Conversation

@rm-openai (Collaborator) commented Apr 17, 2025

Summary

This replaces the default model provider with a MultiProvider, which has the logic:

  • if the model name starts with openai/ or doesn't contain "/", use OpenAI
  • if the model name starts with litellm/, use LiteLLM to route to the appropriate model provider (see the sketch below)
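
In other words, the provider is chosen by splitting on the first "/". A hypothetical sketch of that rule (illustrative only, not the PR's exact code):

    def split_model_name(model_name: str) -> tuple[str, str]:
        # "gpt-4.1"                 -> ("openai", "gpt-4.1")
        # "openai/gpt-4.1"          -> ("openai", "gpt-4.1")
        # "litellm/anthropic/..."   -> ("litellm", "anthropic/...")
        if "/" not in model_name:
            return "openai", model_name
        prefix, _, rest = model_name.partition("/")
        return prefix, rest

    # MultiProvider then maps the prefix to a provider: "openai" to OpenAIProvider,
    # "litellm" to LitellmProvider, or a user-registered provider for custom prefixes.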

It's also extensible, so users can register their own prefix-to-provider mappings. I also imagine that if we natively supported Anthropic/Gemini etc., we could add them to MultiProvider to make them work.

The goal is to make it really easy to use any model provider. Today, if you pass model="gpt-4.1", it works great, but model="claude-sonnet-3.7" doesn't. If we can make it that easy, it's a win for devx.

I'm not entirely sure if this is a good idea - is it too magical? Is the API too reliant on litellm? Comments welcome.
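
For illustration, usage would look roughly like this (the Agent/Runner API as in this repo; the specific litellm/ model string is illustrative):

    from agents import Agent, Runner

    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        # Routed to LiteLLM by the "litellm/" prefix; bare names go to OpenAI.
        model="litellm/anthropic/claude-3-5-sonnet-20240620",
    )
    result = Runner.run_sync(agent, "Hello!")
    print(result.final_output)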

Test plan

For now, the example. Will add unit tests if we agree it's worth merging.

@seratch (Member) left a comment:

I personally really like this idea!

from ...models.interface import Model, ModelProvider
from .litellm_model import LitellmModel

DEFAULT_MODEL: str = "gpt-4.1"

Member:

Would importing https://github.com/openai/openai-agents-python/blob/v0.0.11/src/agents/models/openai_provider.py#L11 instead be better? Also, huge 👍 to switching from gpt-4o to gpt-4.1
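
A sketch of that suggestion (assuming the constant at the linked line is named DEFAULT_MODEL):

    # multi_provider.py: reuse the existing constant instead of redefining it here
    from .openai_provider import DEFAULT_MODEL

Reusing the constant keeps the two files from drifting, though it means changing the default updates both providers at once.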

Collaborator Author:

Feels like a minor breaking change. Though probably fine! Good call.

Member:

@rm-openai Perhaps you're already aware of this, but switching to gpt-4.1 might break existing CUA apps (the computer use tool is not yet available with 4.1, while web_search_preview works with it), so indeed switching the default model could be a breaking change.


def _create_fallback_provider(self, prefix: str) -> ModelProvider:
    if prefix == "litellm":
        from ..extensions.models.litellm_provider import LitellmProvider

Member:
I like this dynamic import here!

nice-to-have: on the LitellmProvider side, wrapping the litellm import in a try/except that raises a friendlier error message than the bare missing-module error could make the dev experience better (see the sketch below)
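
Something like this pattern (a sketch; the exact wording and extras name are assumptions):

    try:
        import litellm  # noqa: F401
    except ImportError as exc:
        raise ImportError(
            "litellm is required to use LitellmProvider. "
            "Install it with `pip install 'openai-agents[litellm]'`."
        ) from exc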

Collaborator Author:

@seratch - LitellmProvider will try to import LitellmModel, which will display the nice error message: https://github.com/openai/openai-agents-python/blob/main/src/agents/extensions/models/litellm_model.py#L16-L19

Member:

Nice, you’re already ahead!

from .openai_provider import OpenAIProvider


class MultiProviderMap:

@yihuang Apr 18, 2025:

Do we need this class to encapsulate these simple operations on a plain dict?
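
For context, the class in question is roughly this shape (a sketch based on the PR discussion; the method names are assumptions, not the exact implementation):

    from agents.models.interface import ModelProvider  # public import path assumed

    class MultiProviderMap:
        """Maps model-name prefixes (e.g. "litellm") to ModelProvider instances."""

        def __init__(self) -> None:
            self._mapping: dict[str, ModelProvider] = {}

        def has_prefix(self, prefix: str) -> bool:
            return prefix in self._mapping

        def get_provider(self, prefix: str) -> ModelProvider | None:
            return self._mapping.get(prefix)

        def add_provider(self, prefix: str, provider: ModelProvider) -> None:
            self._mapping[prefix] = provider

Each method is a one-liner over the dict, so a plain dict[str, ModelProvider] would work too; the class mainly buys a stable public surface if the internal representation ever changes.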

@rm-openai merged commit a0254b0 into main on Apr 21, 2025 (5 checks passed).
@rm-openai deleted the rm/pr534 branch on April 21, 2025 at 19:03.