diff --git a/docs/source/serving/faq.rst b/docs/source/serving/faq.rst
index daa81d7c8c9c3..7b0374be8adff 100644
--- a/docs/source/serving/faq.rst
+++ b/docs/source/serving/faq.rst
@@ -1,5 +1,5 @@
 Frequently Asked Questions
-========================
+==========================
 
 Q: How can I serve multiple models on a single port using the OpenAI API?
 
@@ -9,4 +9,4 @@ A: Assuming that you're referring to using OpenAI compatible server to serve mul
 
 Q: Which model to use for offline inference embedding?
 
-A: If you want to use an embedding model, try: https://huggingface.co/intfloat/e5-mistral-7b-instruct. Instead models, such as Llama-3-8b, Mistral-7B-Instruct-v0.3, are generation models rather than an embedding model
+A: If you want to use an embedding model, try https://huggingface.co/intfloat/e5-mistral-7b-instruct. Models such as Llama-3-8b and Mistral-7B-Instruct-v0.3 are generation models rather than embedding models.