From d769e6ab03db4ee7061d65c5fb71f661f4047e74 Mon Sep 17 00:00:00 2001
From: Roger Wang
Date: Mon, 1 Jul 2024 14:57:54 -0700
Subject: [PATCH] fix

---
 docs/source/serving/faq.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/serving/faq.rst b/docs/source/serving/faq.rst
index daa81d7c8c9c3..7b0374be8adff 100644
--- a/docs/source/serving/faq.rst
+++ b/docs/source/serving/faq.rst
@@ -1,5 +1,5 @@
 Frequently Asked Questions
-========================
+===========================
 
 Q: How can I serve multiple models on a single port using the OpenAI API?
 
@@ -9,4 +9,4 @@ A: Assuming that you're referring to using OpenAI compatible server to serve mul
 
 Q: Which model to use for offline inference embedding?
 
-A: If you want to use an embedding model, try: https://huggingface.co/intfloat/e5-mistral-7b-instruct. Instead models, such as Llama-3-8b, Mistral-7B-Instruct-v0.3, are generation models rather than an embedding model
\ No newline at end of file
+A: If you want to use an embedding model, try: https://huggingface.co/intfloat/e5-mistral-7b-instruct. In contrast, models such as Llama-3-8b and Mistral-7B-Instruct-v0.3 are generation models rather than embedding models.