Issue
I have a broken litellm config.yaml.
I wanted aider to fix it for me, but it failed.
I have tried multiple models: gpt-4o, gemini-2.0-flash-exp, and gemini-2.0-flash-thinking-exp.
All of them failed.
Cursor, on the other hand, fixes this easily with every one of the models above.
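For context, the relevant part of my config.yaml before aider touched it looked roughly like this (reconstructed from the diff aider prints further down, so the indentation and surrounding keys are approximate):

```yaml
model_list:
  - model_name: claude-3               # the alias the proxy reports in "Set models: claude-3"
    litellm_params:
      model: "claude-3.5-sonnet"       # bare name, no provider prefix -> litellm cannot infer the provider
      api_key: ${ANTHROPIC_API_KEY}
```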
(litellm) [mac.new]➜ litellm git:(main) ✗ aider --model gemini/gemini-2.0-flash-exp
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Aider v0.69.1
Model: gemini/gemini-2.0-flash-exp with diff edit format
Git repo: .git with 1 files
Repo-map: using 1024 tokens, auto refresh
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
/add config.yaml
Added config.yaml to the chat
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
config.yaml
/run litellm -c config.yaml
INFO: Started server process [11559]
INFO: Waiting for application startup.
#------------------------------------------------------------#
'It would help me if you could add...'
https://github.com/BerriAI/litellm/issues/new
#------------------------------------------------------------#
Thank you for using LiteLLM! - Krrish & Ishaan
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM: Proxy initialized with Config, Set models:
claude-3
ERROR: Traceback (most recent call last):
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 732, in lifespan
async with self.lifespan_context(app) as maybe_state:
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 608, in aenter
await self._router.startup()
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 709, in startup
await handler()
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 3068, in startup_event
await initialize(**worker_config)
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 2500, in initialize
) = await proxy_config.load_config(router=llm_router, config_file_path=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 1936, in load_config
router = litellm.Router(
^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 361, in init
self.set_model_list(model_list)
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3910, in set_model_list
self._create_deployment(
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3830, in _create_deployment
deployment = self._add_deployment(deployment=deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3949, in _add_deployment
) = litellm.get_llm_provider(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 353, in get_llm_provider
raise e
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 330, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=claude-3.5-sonnet
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
ERROR: Application startup failed. Exiting.
Add command output to the chat? (Y)es/(N)o [Yes]: y
Added 53 lines of output to the chat.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
config.yaml
Fix that
The error message indicates that the litellm library requires the provider to be specified in the litellm_params. The config.yaml file is missing the provider key under litellm_params.
To fix this, I will add provider: "anthropic" to the litellm_params section of the config.yaml file.
config.yaml
<<<<<<< SEARCH
model: "claude-3.5-sonnet"
api_key: ${ANTHROPIC_API_KEY}
model_info:
provider: "anthropic"
After this change, you should be able to run litellm -c config.yaml without the BadRequestError.
Tokens: 3.4k sent, 202 received.
Committing config.yaml before applying edits.
Commit 3cb8f7c fix: Update claude model name to claude-3.5-sonnet
Applied edit to config.yaml
Commit 709d93e fix: Add provider to litellm_params in config
You can use /undo to undo and discard each aider commit.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
config.yaml
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
config.yaml
/run litellm -c config.yaml
INFO: Started server process [11807]
INFO: Waiting for application startup.
#------------------------------------------------------------#
'The worst thing about this product is...'
https://github.com/BerriAI/litellm/issues/new
#------------------------------------------------------------#
Thank you for using LiteLLM! - Krrish & Ishaan
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM: Proxy initialized with Config, Set models:
claude-3
ERROR: Traceback (most recent call last):
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 732, in lifespan
async with self.lifespan_context(app) as maybe_state:
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 608, in aenter
await self._router.startup()
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 709, in startup
await handler()
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 3068, in startup_event
await initialize(**worker_config)
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 2500, in initialize
) = await proxy_config.load_config(router=llm_router, config_file_path=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 1936, in load_config
router = litellm.Router(
^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 361, in init
self.set_model_list(model_list)
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3910, in set_model_list
self._create_deployment(
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3830, in _create_deployment
deployment = self._add_deployment(deployment=deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3949, in _add_deployment
) = litellm.get_llm_provider(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 353, in get_llm_provider
raise e
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 330, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=claude-3.5-sonnet
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
ERROR: Application startup failed. Exiting.
Add command output to the chat? (Y)es/(N)o [Yes]:
Version and model info
Aider: v0.69.1
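A note on why the applied edit did not help: litellm.get_llm_provider (the function at the bottom of both tracebacks) infers the provider from the model string inside litellm_params, so the edit aider applied does not change the string that fails the lookup, and the second run dies with the identical error. Going by the error message, the fix litellm is asking for is to put the provider into the model string itself. A sketch of what that would look like; the exact model id and the custom_llm_provider variant are my assumptions, not something I have verified against this litellm version:

```yaml
model_list:
  - model_name: claude-3
    litellm_params:
      # Option A: encode the provider in the model string, as the error message suggests
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: os.environ/ANTHROPIC_API_KEY   # documented env-var reference form

  # Option B (assumption): pass the provider explicitly; get_llm_provider also accepts
  # a custom_llm_provider argument, so this should resolve the same way
  - model_name: claude-3-explicit
    litellm_params:
      model: claude-3-5-sonnet-20241022
      custom_llm_provider: anthropic
      api_key: os.environ/ANTHROPIC_API_KEY
```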