diff --git a/docs/source/index.rst b/docs/source/index.rst
index 8fd25ce828839..e99a0a9a13899 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -84,6 +84,7 @@ Documentation
    serving/usage_stats
    serving/integrations
    serving/tensorizer
+   serving/faq
 
 .. toctree::
    :maxdepth: 1
diff --git a/docs/source/serving/faq.rst b/docs/source/serving/faq.rst
new file mode 100644
index 0000000000000..daa81d7c8c9c3
--- /dev/null
+++ b/docs/source/serving/faq.rst
@@ -0,0 +1,62 @@
+Frequently Asked Questions
+==========================
+
+    Q: How can I serve multiple models on a single port using the OpenAI API?
+
+A: Assuming that you're referring to using the OpenAI-compatible server to serve multiple models at once, that is not currently supported. However, you can run multiple instances of the server (each serving a different model) at the same time and put another layer in front of them that routes each incoming request to the correct server (see the sketch at the end of this page).
+
+----------------------------------------
+
+    Q: Which model should I use for offline inference embedding?
+
+A: If you want to use an embedding model, try: https://huggingface.co/intfloat/e5-mistral-7b-instruct. In contrast, models such as Llama-3-8b and Mistral-7B-Instruct-v0.3 are generation models rather than embedding models.
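+
+For example, here is a minimal offline embedding sketch, assuming a vLLM version that supports embedding models through ``LLM.encode``:
+
+.. code-block:: python
+
+    from vllm import LLM
+
+    prompts = [
+        "Hello, my name is",
+        "The capital of France is",
+    ]
+
+    # Load the embedding model; e5-mistral-7b-instruct returns one
+    # 4096-dimensional embedding per prompt.
+    model = LLM(model="intfloat/e5-mistral-7b-instruct", enforce_eager=True)
+
+    # encode() yields one EmbeddingRequestOutput per prompt.
+    outputs = model.encode(prompts)
+    for output in outputs:
+        print(len(output.outputs.embedding))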
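+
+----------------------------------------
+
+Returning to the first question above: the following is an illustrative sketch of a routing layer, not an official vLLM feature. The ports, model names, and the ``fastapi``/``httpx`` dependencies are assumptions made for the example; it also does not handle streaming responses.
+
+.. code-block:: python
+
+    import httpx
+    from fastapi import FastAPI, Request
+    from fastapi.responses import JSONResponse
+
+    # Map each model name to the base URL of a separately launched vLLM
+    # server (hypothetical ports; adjust to your deployment).
+    BACKENDS = {
+        "meta-llama/Meta-Llama-3-8B-Instruct": "http://localhost:8001",
+        "mistralai/Mistral-7B-Instruct-v0.3": "http://localhost:8002",
+    }
+
+    app = FastAPI()
+
+    @app.post("/v1/chat/completions")
+    async def route(request: Request):
+        payload = await request.json()
+        base_url = BACKENDS.get(payload.get("model"))
+        if base_url is None:
+            return JSONResponse({"error": "unknown model"}, status_code=400)
+        async with httpx.AsyncClient() as client:
+            resp = await client.post(f"{base_url}/v1/chat/completions",
+                                     json=payload, timeout=None)
+        return JSONResponse(resp.json(), status_code=resp.status_code)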