private-ai

Private AI running on Hetzner infrastructure

Prerequisites

Setup

Install dependencies

make dependencies

Note: the dependencies.sh script was originally written for macOS; you may need to adjust it for your system.
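For other systems, the check can be done portably; a minimal sketch below (the command list is an illustrative stand-in, not the project's actual one — consult dependencies.sh for the tools really installed, e.g. terraform and terragrunt):

```shell
#!/bin/sh
# Minimal sketch of a dependency check for non-macOS systems. The command
# list below is illustrative only -- consult dependencies.sh for the tools
# the project actually needs (terraform, terragrunt, etc.).
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

for cmd in sh grep awk; do
  require "$cmd" || exit 1
done
echo "all dependencies present"
```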

Create a new project in the Hetzner Cloud Console (https://console.hetzner.cloud/projects) if you don't have one, then generate an API token: inside the project, go to Security -> API Tokens and create a new token.

Create a new .env.yaml file, which is used by Terragrunt:

cp .env.yaml.example .env.yaml

and set the values of the Hetzner token and your public SSH key:

secrets:
  hetzner: "token"
  pub_ssh_key: "your public ssh key"
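If you don't have a key pair yet, one can be generated with ssh-keygen; the contents of the .pub file are what goes into pub_ssh_key (the file name below is just an example):

```shell
# Generate a throwaway ed25519 key pair for illustration; in practice you
# would use ~/.ssh/id_ed25519 or an existing key.
rm -f ./demo_key ./demo_key.pub
ssh-keygen -t ed25519 -C "private-ai" -f ./demo_key -N "" -q
cat ./demo_key.pub   # paste this value into pub_ssh_key
```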

You can also choose which models to pull and run on the server by changing these lines in the .env.yaml file. All available models are listed in the Ollama library: https://ollama.com/library

ai:
  models: "llama2-uncensored, llama2:13b"
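The models value is a single comma-separated string; a hedged sketch of how a provisioning step might expand it into individual `ollama pull` commands (the real provisioning lives in the Terraform/cloud-init configuration):

```shell
# Expand the comma-separated ai.models string into one pull per model.
# Echoed rather than executed, for illustration.
models="llama2-uncensored, llama2:13b"

echo "$models" | tr ',' '\n' | while read -r m; do
  # `read` trims the surrounding whitespace left over from the split
  echo "ollama pull $m"
done
```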
Then run Terragrunt init:

make tg-init

and Terragrunt apply:

make tg-apply
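The tg-* targets are thin wrappers; a hedged sketch of the commands they likely run underneath (the working-directory name is an assumption — check the Makefile for the real recipes):

```shell
# Assumed layout: the terragrunt configuration lives in ./terragrunt.
# Echoed rather than executed, since this is only a sketch.
TG_DIR="terragrunt"
echo "terragrunt init --terragrunt-working-dir $TG_DIR"
echo "terragrunt apply --terragrunt-working-dir $TG_DIR"
```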

This creates a server in Hetzner Cloud and installs all the necessary software on it.

After that, you can SSH into the server and run the llama2-uncensored model, which is already pulled:

make ssh
# on server
ollama run llama2-uncensored
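Besides the interactive CLI, Ollama also serves an HTTP API on port 11434, so the model can be queried non-interactively; the endpoint and payload shape below follow Ollama's /api/generate API (run the curl on the server itself, or through an SSH tunnel):

```shell
# Build the request payload for Ollama's /api/generate endpoint.
# "stream": false returns a single JSON response instead of a stream.
payload='{"model": "llama2-uncensored", "prompt": "Why is the sky blue?", "stream": false}'
echo "$payload"
# On the server (or via an SSH tunnel):
# curl -s http://localhost:11434/api/generate -d "$payload"
```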

BE CAREFUL

This is just a simple example of how to run private AI on Hetzner Cloud. It's not production ready and should be used only for educational purposes.

Also, don't forget to destroy your infrastructure once you finish your work: the server is not free, and you will be charged for it. The CPX51 used here costs about 65 USD per month, so it is best to run it temporarily and destroy it when you are done.

make tg-destroy
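For a sense of scale, the monthly price works out to roughly nine cents per hour (assuming ~730 hours in a month):

```shell
# 65 USD/month spread over ~730 hours (24 * 365 / 12)
awk 'BEGIN { printf "%.2f USD/hour\n", 65 / 730 }'
```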
