A Slack bot that lets you choose your preferred LLM using LiteLLM. Pronounced the same as "Colombo".
Collmbo supports multiple LLMs, but let's begin with OpenAI's gpt-4o model for a quick setup.
Create a Slack app and obtain the required tokens:
- App-level token (`xapp-1-...`)
- Bot token (`xoxb-...`)
Save your credentials in a `.env` file:

```
SLACK_APP_TOKEN=xapp-1-...
SLACK_BOT_TOKEN=xoxb-...
LITELLM_MODEL=gpt-4o
OPENAI_API_KEY=sk-...
```
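Collmbo reads these variables at startup. As an illustration only (this helper is hypothetical and not part of Collmbo), a quick pre-flight check that the required quick-start variables are set might look like:

```python
import os

# Hypothetical pre-flight check (not part of Collmbo): confirm the
# required variables from the quick-start .env file are present.
REQUIRED_VARS = ["SLACK_APP_TOKEN", "SLACK_BOT_TOKEN", "LITELLM_MODEL", "OPENAI_API_KEY"]

def missing_vars(env):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_vars(os.environ)
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
    print("All required variables are set.")
```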
Start the bot using Docker:

```
docker run -it --env-file .env ghcr.io/iwamot/collmbo:latest-slim
```
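If you prefer to keep the bot running in the background, a minimal Docker Compose file could look like the sketch below (an illustration, not a file shipped with this repository):

```yaml
services:
  collmbo:
    image: ghcr.io/iwamot/collmbo:latest-slim
    env_file: .env
    restart: unless-stopped
```

Then start it with `docker compose up -d`.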
Collmbo provides two official Docker image flavors:

| Flavor | Description |
|---|---|
| `slim` | A minimal image with only essential dependencies |
| `full` | A full-featured image with additional libraries (e.g., `boto3` for Amazon Bedrock) |
You must specify a flavor explicitly. To use the latest image, choose `latest-slim` or `latest-full`. You can also pin a versioned tag like `x.x.x-slim`. For more details, please check the list of available tags.
Mention the bot in Slack and start chatting:

```
@Collmbo hello!
```
Collmbo should respond in channels, threads, and DMs.
First, pick your favorite LLM from LiteLLM's supported providers. To use it, update the relevant environment variables in your `.env` file and restart the container.
Here are some examples:
Gemini:

```
SLACK_APP_TOKEN=xapp-1-...
SLACK_BOT_TOKEN=xoxb-...
LITELLM_MODEL=gemini/gemini-2.0-flash-001
GEMINI_API_KEY=...
```
Azure OpenAI:

```
SLACK_APP_TOKEN=xapp-1-...
SLACK_BOT_TOKEN=xoxb-...
LITELLM_MODEL=azure/<your_deployment_name>
# Specify the model type to grab details like max input tokens
LITELLM_MODEL_TYPE=azure/gpt-4o
AZURE_API_KEY=...
AZURE_API_BASE=...
AZURE_API_VERSION=...
```
Amazon Bedrock:

```
SLACK_APP_TOKEN=...
SLACK_BOT_TOKEN=...
LITELLM_MODEL=bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0
# You can specify a Bedrock region if it's different from your default AWS region
AWS_REGION_NAME=us-west-2
# You can use your access key for authentication, but IAM roles are recommended
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
```
When using Amazon Bedrock, as mentioned earlier, you need to use the `full` flavor image:

```
docker run -it --env-file .env ghcr.io/iwamot/collmbo:latest-full
```
Collmbo does not serve any HTTP endpoints, so it can run in any environment with internet access.
- Tools (Function Calling) – Extends functionality with function calling.
- Custom callbacks – Hooks into requests and responses for custom processing.
- Redaction – Masks sensitive information before sending requests.
- Slack-friendly formatting – Formats messages for better readability in Slack.
- Image input – Enables AI models to analyze uploaded images.
- PDF input – Enables AI models to analyze uploaded PDFs.
Collmbo runs with default settings, but you can customize its behavior by setting optional environment variables.
Contributions are welcome! Feel free to open an issue or submit a pull request.
Before opening a PR, please run:
```
./validate.sh
```
This helps maintain code quality.
- seratch/ChatGPT-in-Slack – The original project by @seratch.
The code in this repository is licensed under the MIT License.
The Collmbo icon (`assets/icon.png`) is licensed under CC BY-NC-SA 4.0. For example, you may use it as a Slack profile icon.