A desktop application that provides a clean interface for interacting with Ollama's AI models locally. Chat with AI models without needing internet connectivity after the initial setup.

## Features
- Fully offline AI chat capabilities
- Clean, modern interface
- Dark/Light mode support
- Multiple AI model support through Ollama
- Real-time responses
- Local data storage
## Prerequisites

1. Install Ollama:
   - Visit [Ollama's website](https://ollama.com)
   - Download and install the version for your system
2. Open a terminal and verify the installation:

   ```bash
   ollama --version
   ```

3. Pull and run your first model:

   ```bash
   ollama pull llama2
   ollama run llama2
   ```

Check out my blog post for more information on how to get started with Ollama.
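Once the daemon is running, Ollama also serves a local HTTP API on port 11434, which is what this app's backend talks to. As a quick sanity check, the sketch below sends a one-off prompt (assumptions: Node 18+ for the built-in `fetch`, the `llama2` model already pulled; this script is not part of the repo):

```js
// check-ollama.mjs — a minimal sanity check, run with: node check-ollama.mjs
// Assumes the Ollama daemon is running locally on its default port 11434.
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama2",
    prompt: "Say hello in one sentence.",
    stream: false, // return a single JSON object instead of a token stream
  }),
});
const data = await res.json();
console.log(data.response); // the model's completion text
```

If this prints a greeting, the daemon is reachable and the model is installed.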
## Getting Started

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/offline-chatbot.git
   cd offline-chatbot
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Create a `.env` file in the root directory:

   ```
   VITE_PORT=3030
   ```

4. Start both frontend and backend servers:

   ```bash
   npm start
   ```
This will run:

- Frontend: http://localhost:5173
- Backend: http://localhost:3030
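In development, a Vite frontend typically proxies API calls to the Express backend so the browser only ever talks to port 5173. The repo's actual config may differ; a plausible `vite.config.js` wiring that reuses the same `VITE_PORT` variable could look like this (a sketch, not necessarily the project's own file):

```js
// vite.config.js — hypothetical dev-proxy setup
import { defineConfig, loadEnv } from "vite";

export default defineConfig(({ mode }) => {
  // Load .env so VITE_PORT is available here as well as in client code
  const env = loadEnv(mode, process.cwd());
  return {
    server: {
      proxy: {
        // Forward /api requests from the dev server (5173) to Express
        "/api": `http://localhost:${env.VITE_PORT || 3030}`,
      },
    },
  };
});
```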
Run the frontend only:

```bash
npm run dev
```

Run the backend only:

```bash
npm run server
```
## Project Structure

```
offline-chatbot/
├── src/      # Frontend source code
├── server/   # Backend server code
└── public/   # Static assets
```
## Environment Variables

| Variable | Description | Required |
|---|---|---|
| `VITE_PORT` | Backend server port | Yes |
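The backend reads `VITE_PORT` at startup and, given the stack below, presumably relays chat requests to the local Ollama daemon. A minimal sketch of that shape (the route name, file path, and use of `dotenv` are assumptions for illustration, not the project's actual code):

```js
// server/index.js — illustrative only; route and structure are hypothetical
import "dotenv/config"; // assumes the dotenv package is used to load .env
import express from "express";

const app = express();
app.use(express.json());

// Relay a chat turn to the local Ollama daemon (default port 11434)
app.post("/api/chat", async (req, res) => {
  const { model = "llama2", messages } = req.body;
  const upstream = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, stream: false }),
  });
  res.json(await upstream.json());
});

app.listen(process.env.VITE_PORT || 3030);
```

Because everything above runs on localhost, no request ever leaves the machine, which is what makes the fully offline chat possible.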
## Tech Stack

- React + Vite
- Express.js
- Ollama API
- TailwindCSS
- Node.js
## License

MIT