add FAQ doc under 'serving' (vllm-project#5946)
llmpros authored Jul 1, 2024
1 parent 12a5995 commit 83bdcb6
Showing 2 changed files with 13 additions and 0 deletions.
1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -84,6 +84,7 @@ Documentation
serving/usage_stats
serving/integrations
serving/tensorizer
serving/faq

.. toctree::
:maxdepth: 1
12 changes: 12 additions & 0 deletions docs/source/serving/faq.rst
@@ -0,0 +1,12 @@
Frequently Asked Questions
==========================

Q: How can I serve multiple models on a single port using the OpenAI API?

A: Assuming that you're referring to using the OpenAI-compatible server to serve multiple models at once, that is not currently supported. Instead, you can run multiple instances of the server (each serving a different model) at the same time, and add another layer that routes each incoming request to the correct server.
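
As a rough illustration (not part of the official API), the routing layer can be as simple as a lookup from model name to server address. The ports and model names below are placeholder assumptions:

.. code-block:: python

    import requests

    # Hypothetical mapping: each vLLM OpenAI-compatible server instance
    # serves one model on its own port.
    MODEL_TO_SERVER = {
        "meta-llama/Meta-Llama-3-8B-Instruct": "http://localhost:8000",
        "mistralai/Mistral-7B-Instruct-v0.3": "http://localhost:8001",
    }

    def route_chat_completion(payload: dict) -> dict:
        """Forward an OpenAI-style chat completion request to the server
        that hosts the requested model."""
        base_url = MODEL_TO_SERVER[payload["model"]]
        resp = requests.post(f"{base_url}/v1/chat/completions", json=payload)
        resp.raise_for_status()
        return resp.json()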

----------------------------------------

Q: Which model should I use for offline inference embedding?

A: If you want an embedding model, try https://huggingface.co/intfloat/e5-mistral-7b-instruct. Models such as Llama-3-8b and Mistral-7B-Instruct-v0.3 are generation models rather than embedding models.
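
A minimal offline-embedding sketch (assuming a vLLM version whose ``LLM.encode()`` API supports embedding models) might look like:

.. code-block:: python

    from vllm import LLM

    prompts = ["What is the capital of France?"]

    # Load the embedding model and compute embeddings offline.
    llm = LLM(model="intfloat/e5-mistral-7b-instruct", enforce_eager=True)
    outputs = llm.encode(prompts)

    for output in outputs:
        embedding = output.outputs.embedding  # a list of floats
        print(len(embedding))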
