From 8e0817c262da5c104f651a0ce4ac9ee0cd76f4ce Mon Sep 17 00:00:00 2001
From: Roger Wang <136131678+ywang96@users.noreply.github.com>
Date: Mon, 1 Jul 2024 15:09:11 -0700
Subject: [PATCH] [Bugfix][Doc] Fix Doc Formatting (#6048)

---
 docs/source/serving/faq.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/serving/faq.rst b/docs/source/serving/faq.rst
index daa81d7c8c9c3..7b0374be8adff 100644
--- a/docs/source/serving/faq.rst
+++ b/docs/source/serving/faq.rst
@@ -1,5 +1,5 @@
 Frequently Asked Questions
-========================
+===========================
 
 Q: How can I serve multiple models on a single port using the OpenAI API?
 
@@ -9,4 +9,4 @@ A: Assuming that you're referring to using OpenAI compatible server to serve mul
 
 Q: Which model to use for offline inference embedding?
 
-A: If you want to use an embedding model, try: https://huggingface.co/intfloat/e5-mistral-7b-instruct. Instead models, such as Llama-3-8b, Mistral-7B-Instruct-v0.3, are generation models rather than an embedding model
\ No newline at end of file
+A: If you want to use an embedding model, try: https://huggingface.co/intfloat/e5-mistral-7b-instruct. Instead models, such as Llama-3-8b, Mistral-7B-Instruct-v0.3, are generation models rather than an embedding model
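
A note on the second FAQ entry touched by this patch: the recommended embedding model can be exercised offline through vLLM's ``LLM.encode()`` API. Below is a minimal sketch, assuming a vLLM build with embedding-model support (the exact output shape and the ``enforce_eager`` workaround may vary by version)::

    from vllm import LLM

    # Load the embedding model recommended by the FAQ.
    # enforce_eager=True is an assumed workaround to reduce startup
    # overhead; it is not required for correctness.
    llm = LLM(model="intfloat/e5-mistral-7b-instruct", enforce_eager=True)

    # encode() returns one EmbeddingRequestOutput per input prompt.
    outputs = llm.encode(["What is the capital of France?"])
    for output in outputs:
        # output.outputs.embedding is a list of floats; its length is the
        # model's hidden size (4096 for this Mistral-7B-based model).
        print(len(output.outputs.embedding))

Generation-oriented checkpoints such as Llama-3-8b or Mistral-7B-Instruct-v0.3 would not work here, which is the distinction the corrected FAQ answer is drawing.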