
Octogen

An open-source code interpreter that can be deployed locally

News

  • 2023-10-24 🎉 Octogen v0.5.0 has been released 🎉
Demo video: octogen_demo.mp4

Getting Started

Requirements

  • Python 3.10 or above
  • pip
  • Docker 24.0.0 or above, or podman

To deploy Octogen, the user needs permission to run Docker commands.
To use CodeLlama, your host must have at least 8 CPUs and 16 GB of RAM.
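
A quick way to check these prerequisites before installing (a minimal sketch; substitute podman for docker if you use it):

$ python3 --version             # should report 3.10 or above
$ pip --version
$ docker --version              # should report 24.0.0 or above
$ docker run --rm hello-world   # confirms you have permission to run Docker commands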

Install Octogen on your local computer

  1. Install og_up
pip install og_up
  2. Set up the Octogen service
og_up

You will be asked to select one of the following options:

  • OpenAI, recommended for daily use
  • Azure OpenAI
  • CodeLlama
  • Octogen agent services, powered by GPT-4 and CodeLlama 34B

Docker is used as the container engine by default; to use podman instead, pass the --use_podman flag, as shown below.
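
$ og_up --use_podman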

  3. Execute the command og; you will see the following output
Welcome to use octogen❤️ . To ask a programming question, simply type your question and press esc + enter
You can use /help to look for help

[1]🎧>

Development

Prepare the environment

git clone https://github.com/dbpunk-labs/octogen.git
cd octogen
python3 -m venv octogen_venv
source octogen_venv/bin/activate
pip install -r requirements.txt

Run the sandbox, including the Agent (with a mock model) and the Kernel

$ bash start_sandbox.sh
$ og

Welcome to use octogen❤️ . To ask a programming question, simply type your question and press esc + 
enter
Use /help for help

[1]🎧>hello
╭─ 🐙Octogen ─────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                     │
│  0 🧠 how can I help you today?                                                                     │
│                                                                                                     │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────╯
[2]🎧>

  • To use OpenAI for development, update the config in start_sandbox.sh following the example in openai-env.example (see the sketch after this list).
  • To use Azure OpenAI for development, update the config in start_sandbox.sh following the example in azure-env.example.
  • To use CodeLlama for development, update the config in start_sandbox.sh following the example in codellama-env.example.
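
A minimal sketch of switching the sandbox to OpenAI. The actual variable names are defined in openai-env.example, which is not reproduced here, so OPENAI_API_KEY below is an assumption rather than a confirmed name:

$ cat openai-env.example             # see which variables the example defines
$ # copy those values into the config used by start_sandbox.sh, for example:
$ # export OPENAI_API_KEY="sk-..."   # variable name assumed
$ bash start_sandbox.sh
$ og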

Supported API Services

Name                   | Type             | Status             | Installation
OpenAI GPT 3.5/4       | LLM              | ✅ fully supported | run og_up, then choose OpenAI
Azure OpenAI GPT 3.5/4 | LLM              | ✅ fully supported | run og_up, then choose Azure OpenAI
LLama.cpp Server       | LLM              | ✔️ supported       | run og_up, then choose CodeLlama
Octopus Agent Service  | Code Interpreter | ✅ supported       | apply for an API key from octogen.dev, then run og_up and choose Octogen

The internals of a local deployment

Architecture diagram: octogen-internal

  • Octogen Kernel: The code execution engine, based on notebook kernels.
  • Octogen Agent: Manages client requests, uses ReAct to process complex tasks, and stores user-assembled applications.
  • Octogen Terminal CLI: Accepts user requests, sends them to the Agent, and renders rich results. Currently supports Discord, iTerm2, and Kitty terminals.

Features

  • Automatically execute AI-generated code in a Docker environment.
  • Experimental feature: render images in iTerm2 and Kitty.
  • Upload files with the /up command and reference them in your prompt (see the example session below).
  • Experimental feature: assemble code blocks into an application and run it directly with the /run command.
  • Copy output to the clipboard with the /cc command.
  • Prompt history is stored in the Octopus CLI.
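
A sketch of how these commands might be used in a session. The command names /up, /run, and /cc come from the list above, but the exact argument forms are assumptions:

[1]🎧> /up ./sales.csv
[2]🎧> plot the monthly totals in the uploaded file
[3]🎧> /cc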

If you have a feature suggestion, please create a discussion to talk about it.

Roadmap