Chat Application using Streamlit and local models, but running on K8s


Using Ollama UBI as an internal k8s service

https://github.com/williamcaban/ollama-ubi
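
For reference, a minimal sketch of what the Ollama Deployment and ClusterIP Service manifests could look like; the names, image reference, and storage choice are assumptions, not the repo's actual manifests:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          # Hypothetical image reference; replace with your ollama-ubi build.
          image: quay.io/example/ollama-ubi:latest
          ports:
            - containerPort: 11434
          volumeMounts:
            - name: models
              # The model store path depends on the image; adjust as needed.
              mountPath: /.ollama
      volumes:
        - name: models
          # emptyDir loses pulled models on pod restart; use a PVC to persist them.
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
spec:
  selector:
    app: ollama
  ports:
    - port: 11434
      targetPort: 11434

Because the Service is a plain ClusterIP, Ollama is reachable only from inside the cluster, e.g. at http://ollama:11434 within the same namespace.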

With the Ollama UBI pod running, pull the required model.

This command downloads the Mistral model for Ollama to use:

$ ollama pull mistral
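
The pull has to run inside the Ollama pod; from outside, oc exec can wrap it (the deployment name ollama here is an assumption):

$ oc exec deploy/ollama -- ollama pull mistral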

This command runs Mistral; the Ollama server in the pod exposes an OpenAI-compatible API at http://localhost:11434/v1:

$ ollama run mistral
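
The chat application can then talk to that endpoint with the standard openai client. A minimal Streamlit sketch follows; the service URL, model name, and variable names are assumptions, not the repo's actual code:

import os

import streamlit as st
from openai import OpenAI

# Inside the cluster, use the Service DNS name rather than localhost.
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://ollama:11434/v1")

# Ollama ignores the API key, but the client requires one.
client = OpenAI(base_url=OLLAMA_URL, api_key="ollama")

st.title("Chat")

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    # Send the whole history to the model and show its reply.
    response = client.chat.completions.create(
        model="mistral",
        messages=st.session_state.messages,
    )
    answer = response.choices[0].message.content
    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.write(answer)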

Build the chat image

$ podman build -f Containerfile -t quay.io/rhn_support_arolivei/chat
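
The Containerfile itself is not shown here; a minimal sketch, assuming a UBI Python base image and a single-file Streamlit app (file names and port are illustrative):

FROM registry.access.redhat.com/ubi9/python-311

# Install dependencies first so the layer caches across app changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY chat_app.py .

# Streamlit listens on 8501 by default.
EXPOSE 8501

CMD ["streamlit", "run", "chat_app.py", "--server.address=0.0.0.0", "--server.port=8501"]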

Deploy on k8s/MicroShift/OpenShift

$ oc apply -f manifests/02-chat-serve-deployment.yaml
$ oc apply -f manifests/03-chat-svc.yaml
$ oc apply -f manifests/04-chat-route.yaml
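
As a sketch, the Route simply exposes the chat Service outside the cluster; the Service name and target port are assumptions based on Streamlit's default:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: chat
spec:
  to:
    kind: Service
    name: chat
  port:
    targetPort: 8501
  tls:
    termination: edge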
