LLM providers offer client libraries for the most popular programming languages, so you can write code that interacts with their APIs. Generally, these are wrappers around HTTPS requests with a mechanism to handle API responses (e.g., using callbacks). To the best of my knowledge, if you want to build a Neovim plugin that uses an LLM, you have to make the requests explicitly with a tool like curl and take care of crafting requests and parsing responses yourself. This results in a lot of boilerplate code that can be abstracted away, as the sketch below illustrates.
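For example, talking to an OpenAI-compatible endpoint without a wrapper library looks roughly like this (a minimal sketch; the endpoint, model, and environment variable are illustrative):

```lua
-- Sketch of the boilerplate a plugin needs without a wrapper library:
-- shell out to curl, then decode and validate the JSON response by hand.
local payload = vim.json.encode({
  model = "gpt-4o-mini",
  messages = { { role = "user", content = "Hello!" } },
})

vim.fn.jobstart({
  "curl", "-s", "https://api.openai.com/v1/chat/completions",
  "-H", "Content-Type: application/json",
  "-H", "Authorization: Bearer " .. (vim.env.OPENAI_API_KEY or ""),
  "-d", payload,
}, {
  stdout_buffered = true,
  on_stdout = function(_, data)
    local ok, response = pcall(vim.json.decode, table.concat(data, "\n"))
    if not ok or response.error then
      vim.notify("LLM request failed", vim.log.levels.ERROR)
      return
    end
    print(response.choices[1].message.content)
  end,
})
```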
ai.nvim is an experimental library for building Neovim plugins that interact with LLM providers: it crafts requests, parses responses, invokes callbacks, and handles errors.
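Purely as an illustration, plugin code built on top of such a client could reduce to the following shape. The names below are hypothetical, not the actual ai.nvim signatures; see `:help ai-nvim` for the real API.

```lua
-- Hypothetical sketch: function and field names are invented for
-- illustration and do not match the real ai.nvim API (`:help ai-nvim`).
local client = require("ai").Client.new({
  base_url = "https://api.openai.com/v1",
  api_key = vim.env.OPENAI_API_KEY,
})

client:chat({
  model = "gpt-4o-mini",
  messages = { { role = "user", content = "Hello!" } },
}, {
  on_done = function(response) print(response) end, -- parsed reply
  on_error = function(err) vim.notify(err, vim.log.levels.ERROR) end,
})
```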
Requirements:

- Neovim ≥ 0.9
- Curl
- Access to an LLM provider
Read the documentation with `:help ai-nvim`.
Plugins built with ai.nvim:

- S1M0N38/dante.nvim ✎ A basic writing tool powered by LLM
- S1M0N38/chatml.nvim ⇋ Convert markdown to LLM JSON requests
- PR your plugin here ...
There are many providers that offer LLM models behind an OpenAI-compatible API. The following is an incomplete list of providers I have experimented with (a configuration sketch follows the list):
- Local models (LM Studio, Ollama, llama-cpp, vLLM, ...)
  - base url: `http://localhost:[PORT]`
  - models: every model supported by the local provider
  - note: free and private.
- GitHub Copilot
  - base url: `https://api.githubcopilot.com`
  - models: `gpt-4o-mini`, `gpt-4o`, `o1`, `o3-mini`, `gemini-2.0-flash-001`, `claude-3.5-sonnet`, `claude-3.7-sonnet`, `claude-3.7-sonnet-thought`
  - note: access to SOTA models with a GitHub Copilot subscription (free for students)
- OpenAI
  - base url: `https://api.openai.com/v1`
  - models: `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini`
  - note: access to SOTA models (no free tier)
- Groq
  - base url: `https://api.groq.com/openai/v1`
  - models: `gemma2-9b-it`, `llama-3.3-70b-versatile`, `llama-3.1-8b-instant`, `mixtral-8x7b-32768`, `qwen-2.5-coder-32b`, `qwen-2.5-32b`, `deepseek-r1-distill-qwen-32b`, `deepseek-r1-distill-llama-70b-specdec`, `deepseek-r1-distill-llama-70b`, `llama-3.3-70b-specdec`
  - note: crazy fast inference for open-source models (free tier)
- Mistral
  - base url: `https://api.mistral.ai/v1`
  - models: `mistral-large-latest`, `ministral-3b-latest`, `ministral-8b-latest`, `mistral-small-latest`, `open-mistral-nemo`, `mistral-saba-latest`, `codestral-latest`
  - note: access to Mistral models (free tier)
- Codestral (Mistral)
  - base url: `https://codestral.mistral.ai/v1`
  - models: `codestral-latest`
  - note: access to Mistral code models (free for now)
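Since all of these providers speak the same protocol, switching between them comes down to changing the base URL, the API key, and the model name. A minimal sketch (base URLs and model names come from the list above; the environment variable names and the local port are assumptions):

```lua
-- Each provider entry differs only in base URL, credentials, and model.
-- Env-var names and the local port below are illustrative assumptions.
local providers = {
  openai = {
    base_url = "https://api.openai.com/v1",
    api_key = vim.env.OPENAI_API_KEY,
    model = "gpt-4o-mini",
  },
  groq = {
    base_url = "https://api.groq.com/openai/v1",
    api_key = vim.env.GROQ_API_KEY,
    model = "llama-3.3-70b-versatile",
  },
  localhost = {
    base_url = "http://localhost:1234/v1", -- e.g., LM Studio's default port
    api_key = "", -- local providers typically require no key
    model = "whatever-model-is-served-locally",
  },
}
```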
If you want to use other providers that do not expose an OpenAI-compatible API (e.g., Anthropic, Cohere, ...), you can try the liteLLM proxy service. There are no plans to support API standards other than the OpenAI-compatible one.
Acknowledgments:

- base.nvim template.
- mrcjkb's blog posts about Neovim, Luarocks, and Busted.
- mrcjkb's and vhyrro's repos for GitHub Actions workflows.
- codecompanion.nvim for its Copilot token validation logic.