1 file changed: +2 -2 lines changed

@@ -45,7 +45,7 @@ You'll first need to download one of the available function calling models in GGUF format:

Then when you run the server you'll need to also specify the `functionary-7b-v1` chat_format

```bash
-python3 -m llama_cpp.server --model <model_path> --chat-format functionary
+python3 -m llama_cpp.server --model <model_path> --chat_format functionary
```

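For context (not part of this change), a request against that server could then look roughly like the sketch below. It assumes the command above is running on the server's default `http://localhost:8000`, that the `openai` Python package (v1 client) is installed, and that `get_current_weather` is a purely illustrative schema; the exact function-calling fields honored can vary with the chat handler version.

```python
# Minimal sketch: OpenAI-style function calling against the local server started above.
# Assumptions: default host/port (http://localhost:8000), openai>=1.0 client installed,
# and a hypothetical get_current_weather schema used only for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # point the client at the local server
    api_key="sk-no-key-required",         # the local server does not validate the key
)

response = client.chat.completions.create(
    model="functionary",  # treated as a label; the loaded GGUF model produces the answer
    messages=[{"role": "user", "content": "What's the weather like in Berlin?"}],
    functions=[
        {
            "name": "get_current_weather",  # hypothetical function, for illustration only
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    function_call="auto",
)
print(response.choices[0].message)
```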
### Multimodal Models
@@ -61,7 +61,7 @@ You'll first need to download one of the available multi-modal models in GGUF format:

Then when you run the server you'll need to also specify the path to the clip model used for image embedding and the `llava-1-5` chat_format

```bash
-python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path> --chat-format llava-1-5
+python3 -m llama_cpp.server --model <model_path> --clip_model_path <clip_model_path> --chat_format llava-1-5
```

Then you can just use the OpenAI API as normal
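
As a rough illustration (not part of this diff), pointing the standard `openai` client at the local multimodal server and sending an image could look like the sketch below; the image URL and model name are placeholders, and the content-part layout follows the OpenAI vision-style API that the `llava-1-5` handler is meant to mirror.

```python
# Minimal sketch: using the OpenAI client against the local multimodal server.
# Assumptions: default http://localhost:8000, openai>=1.0 installed; the image URL
# and model name below are placeholders for illustration only.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-no-key-required")

response = client.chat.completions.create(
    model="llava-1-5",  # label only; the server answers with the loaded GGUF model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                {"type": "text", "text": "Describe what is in this image."},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```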