Use Locally Running Custom Models
Shx Guo edited this page Feb 6, 2024 · 1 revision
Copilot for Xcode supports chat and embedding models that expose an OpenAI-compatible web API. To use one, add the model in the app and switch the chat or embedding provider to it.
It's recommended to use LM Studio to run models locally.
To use it:
- Download the latest version from their website.
- Download a model you want from either the home tab or the search tab.
- Go to the local server tab and select the model you just downloaded to load it.
- Click "Start Server" to start it. You can find the base URL in the "Client Code Example"; it looks like `http://localhost:1234`.
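Once the server is running, you can sanity-check it from any OpenAI-compatible client before configuring the app. The sketch below builds a chat completion request against the local server. It assumes LM Studio's default port `1234` and the standard OpenAI-style `/v1/chat/completions` path; the model name `"local-model"` is a placeholder (as noted below, the model name can be any value for a locally loaded model).

```python
import json
from urllib import request

def chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-compatible chat completion request for a local server.

    Assumes the base URL does not already include the /v1 path segment;
    adjust if your server's client example shows a URL ending in /v1.
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({
        "model": model,  # placeholder; local servers typically serve whichever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

# Sending it (requires the local server to be running):
# with request.urlopen(chat_request("http://localhost:1234", "local-model", "Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If this request returns a completion, the same base URL and model name should work when entered in the app.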

- Set up the model in "Service - Chat Models". The model name field can be any value.

- Change the chat feature provider to the new model in "Feature - Chat".
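The embedding provider works the same way, provided your local server exposes the OpenAI-compatible `/v1/embeddings` endpoint (support varies by server and version). A minimal request sketch, using the same placeholder base URL and model name as above:

```python
import json
from urllib import request

def embedding_request(base_url: str, model: str, texts: list[str]) -> request.Request:
    """Build an OpenAI-compatible embeddings request for a local server.

    Assumes the server implements the standard /v1/embeddings endpoint;
    not every local server does, so check its documentation first.
    """
    url = base_url.rstrip("/") + "/v1/embeddings"
    body = json.dumps({
        "model": model,  # placeholder, as with the chat request above
        "input": texts,
    }).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})
```

A successful response contains a `data` array with one embedding vector per input string, which is what the embedding provider consumes.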