(demo video: demo.mp4)
Ask OpenAI for help in vim's command line!
New to using neovim? Need to learn the ropes a bit better? Use this as a crutch so you don't run away screaming and instead take the time to learn your way around. Use OpenAI's gpt-4o (et al) to recall that pesky command that evades recollection.
:save all files<C-b>
" turns into:
:wall
:run ls command and copy output into buffer<C-b>
" turns into:
:r !ls
:shut it down... screw my work, I'm done with this POS<C-b>
:qa!
:insert a UUID<C-b>
:r!uuidgen
:what is the current filetype?<C-b>
:echo &filetype
:how do I wrap text<C-b>
:set wrap
:what key in normal mode copies 2 lines<C-b>
:normal 2yy
:what clears search highlights<C-b>
:nohlsearch
:test<C-b>
:echo "hello world"
" don't forget, there's often more than one way to do something
Let me know if there are other ways you'd like to ask for help, beyond the command line. And no, I'm not talking about vscode.
This works with any plugin manager; the repo name g0t4/ask-openai.nvim is all you need.
-- lazy.nvim
{
    "g0t4/ask-openai.nvim",
    -- include one of the following:
    -- 1. set opts (empty == defaults)
    opts = {},
    -- 2. or, call setup yourself
    config = function()
        require("ask-openai").setup {}
    end,
    dependencies = { "nvim-lua/plenary.nvim" },
    event = { "CmdlineEnter" }, -- optional, for startup speed
    -- FYI, most of the initial performance hit doesn't happen until the first use
}
-- packer.nvim
use {
    "g0t4/ask-openai.nvim",
    config = function()
        require("ask-openai").setup {} -- empty == default options
    end,
    requires = { "nvim-lua/plenary.nvim" },
    event = { "CmdlineEnter" }, -- optional
}
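No plugin manager? Calling setup directly should be all it takes. A minimal sketch, assuming the plugin and nvim-lua/plenary.nvim are already on your runtimepath (e.g. installed under pack/*/start/):

-- somewhere in init.lua, once the plugin is on the runtimepath:
require("ask-openai").setup({}) -- empty == default options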
📌 Tip: check config.lua for all options
If you pass an empty opts table { }, then GitHub Copilot is used as the provider:
opts = {
    provider = "copilot", -- default
}
-- you must authenticate once with copilot.vim (or copilot.lua) before ask-openai will work
-- ask-openai does not directly depend on the github/copilot.vim plugin
⚠️ ollama support is early, and I may change how it works, especially if people have issues
opts = {
    provider = "keyless",
    model = "llama3.2-vision:11b",
    use_api_ollama = true, -- use the ollama default, OR:
    -- api_url = "http://localhost:11434/api/chat", -- override the default ollama endpoint
}
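The api_url override is handy when ollama runs on another machine. A sketch, where the host is a placeholder for your own server:

opts = {
    provider = "keyless",
    model = "llama3.2-vision:11b",
    api_url = "http://my-ollama-box:11434/api/chat", -- hypothetical remote host
}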
💨 groq is insanely fast and FREE right now!
opts = {
    provider = function()
        -- use any logic you want, this is just an example:
        return require("ask-openai.config")
            .get_key_from_stdout("security find-generic-password -s groq -a ask -w")
    end,
    model = "llama-3.2-90b-text-preview",
    use_api_groq = true,
}
# FYI, to set the keychain password:
security add-generic-password -a ask -s groq -w
# provide the password when prompted
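Not on macOS, or prefer env vars? The provider function can pull the key from anywhere. A sketch using a hypothetical GROQ_API_KEY variable, mirroring the OPENAI_API_KEY example below:

opts = {
    provider = function()
        return os.getenv("GROQ_API_KEY") -- hypothetical env var, export it yourself
    end,
    model = "llama-3.2-90b-text-preview",
    use_api_groq = true,
}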
To use the OpenAI API directly, have the provider function return your API key:
opts = {
    provider = function()
        return os.getenv("OPENAI_API_KEY")
    end,
}
# FYI, to test pulling the env var from the keychain:
export OPENAI_API_KEY=$(security find-generic-password -s openai -a ask -w)
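Or skip the env var and read the key straight from the keychain, just like the groq example above (a sketch, assuming a matching openai/ask keychain entry):

opts = {
    provider = function()
        return require("ask-openai.config")
            .get_key_from_stdout("security find-generic-password -s openai -a ask -w")
    end,
}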
To customize the keymap:
opts = {
    keymaps = {
        cmdline_ask = "<C-b>", -- default
        -- or:
        -- cmdline_ask = false, -- disable it (see init.lua for how it's set)
    },
}
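To move the mapping to another key, set cmdline_ask to whatever cmdline-mode lhs you prefer. A sketch (<C-g> is an arbitrary choice):

opts = {
    keymaps = {
        cmdline_ask = "<C-g>",
    },
}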
Enable verbose logging:
opts = {
    verbose = true,
}
Then make a request and check messages for the verbose logs:
:messages
Don't forget health checks:
:checkhealth ask-openai
" FYI verbose mode adds extra health check info
And help:
:help ask-openai<Tab>
" Lazy plugin manager turns this README.md into helptags. If your using a different plugin manager, you might not see these help docs.
- ollama also has /v1/chat/completions (see my single.py in fish ask openai), use that instead of the custom /api/chat handling I wrote
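For reference, that would look something like the sketch below. The current keyless provider was written against /api/chat, so the plugin may not parse the OpenAI-style response yet; that's exactly what this TODO is about:

opts = {
    provider = "keyless",
    model = "llama3.2-vision:11b",
    api_url = "http://localhost:11434/v1/chat/completions", -- ollama's OpenAI-compatible route
}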