
[Bug]: litellm_proxy provider is not supported for embeddings #8077

Open
aguadoenzo opened this issue Jan 29, 2025 · 0 comments

Labels
bug Something isn't working

aguadoenzo commented Jan 29, 2025

What happened?

As per the documentation for embeddings and LiteLLM Proxy, I'm using the following code to generate embeddings.

response = embedding(input=query, model=f"litellm_proxy/{model}")

(where model is nomic-embed-text, which is correctly defined in my proxy).

However, because the litellm_proxy provider is never accounted for in the embedding method (see: https://github.com/BerriAI/litellm/blob/main/litellm/main.py#L3300-L3784), I get the following error:

raise ValueError(f"No valid embedding model args passed in - {args}")

One workaround is to drop the litellm_proxy prefix and pass openai as the custom LLM provider, i.e.

response = embedding(input=query, model=model, custom_llm_provider="openai")

While this works, it feels extremely nasty, and it's weird that communication between the LiteLLM client and the proxy is not supported for embeddings. Is this intentional?
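
For completeness, the full workaround call looks roughly like this (a sketch; the proxy URL and key are placeholders, and the model name is whatever is defined in the proxy config):

from litellm import embedding

response = embedding(
    model="nomic-embed-text",          # model name as defined in the proxy config
    input=query,
    custom_llm_provider="openai",      # treat the proxy as an OpenAI-compatible endpoint
    api_base="http://localhost:4000",  # LiteLLM Proxy URL (placeholder)
    api_key="sk-1234",                 # proxy key (placeholder)
)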

Relevant log output

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.59.8

Twitter / LinkedIn details

No response

aguadoenzo added the bug (Something isn't working) label on Jan 29, 2025