`running_pods` is a Gauge metric that shows the total number of running pods in Kubernetes. It is started here in a separate thread, uses a Python library to query the Kubernetes API, and then filters by status to return a count of running pods in your Kubernetes cluster.

`app_hello_world` is a Counter metric that increments each time the http://localhost:5000 endpoint is hit. Its value is incremented here.

You can experiment with other metrics using the same Python library, if you like.
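For illustration, here is a minimal sketch of how two such metrics could be defined with the `prometheus_client` Python library. The metric names match the lab, but the variable names, help strings, and hard-coded values are assumptions for this example and may differ from the actual app:

```python
from prometheus_client import CollectorRegistry, Counter, Gauge, generate_latest

# A private registry keeps this example self-contained
registry = CollectorRegistry()

# Gauge: a value that can go up and down, e.g. the current pod count
running_pods = Gauge("running_pods", "Total number of running pods", registry=registry)

# Counter: a value that only ever increases, e.g. requests served
app_hello_world = Counter("app_hello_world", "Calls to the hello-world endpoint", registry=registry)

# In the real app the Gauge would be set from the Kubernetes API response;
# here we just set an example value and bump the counter once.
running_pods.set(3)
app_hello_world.inc()

# Prometheus scrapes this text-based exposition format;
# note that the client appends "_total" to the Counter's name.
print(generate_latest(registry).decode())
```

Note that the exposed series for the Counter is named `app_hello_world_total`, which is why that is the name you query in Prometheus later in this lab.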
First, we have to make it possible to view the Prometheus server's UI.
Run this command:
# Port forward to the Prometheus UI
kubectl port-forward svc/metrics-prometheus-server 8082:80 &
View both metrics in Prometheus. Follow the instructions below to inspect and test each metric.
Assuming you opened the Prometheus server's UI (above), you should see that both metrics have the same value. `kubelet_running_pods` is a built-in metric that you get out of the box with Prometheus, and it should have the same value as our custom `running_pods` metric.
Scale the Python app up by one replica:

kubectl scale deployment python-with-prometheus --replicas=2

Inspect `running_pods` and `kubelet_running_pods` in Prometheus to see the change.
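To compare the two series side by side, you can enter each metric name as a query in the Prometheus UI's expression box. This is a sketch; the label sets attached to your series may differ:

```promql
# Our custom metric, exported by the Python app
running_pods

# The built-in metric, reported by the kubelet
kubelet_running_pods
```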
First, we have to make it possible to call the Python app.
Run this command:
# Port forward to the Python app
kubectl port-forward svc/slytherin-svc 5000:5000 &
Then, call the Python app's API however many times you like using `curl`:
# This will call our API and return "Hello, World!"
curl http://localhost:5000
Assuming you opened the Prometheus server's UI (above), you should see the count for `app_hello_world_total` go up by however many times you called the API.
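In practice, counters are usually consumed as rates rather than raw totals. As a sketch, a PromQL query like this shows the per-second request rate averaged over the last five minutes:

```promql
rate(app_hello_world_total[5m])
```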
Prometheus reads text-based metrics. The text it actually ingests for this lab can be viewed here, assuming you still have the port-forward set up on port 5000. For background, the app is set up to serve Prometheus metrics here.
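As a rough sketch, the exposition text Prometheus scrapes looks like the following. The HELP strings and values here are invented for illustration; the real output from the app will differ:

```text
# HELP running_pods Total number of running pods
# TYPE running_pods gauge
running_pods 2.0
# HELP app_hello_world_total Calls to the hello-world endpoint
# TYPE app_hello_world_total counter
app_hello_world_total 5.0
```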
Prometheus is installed with a Helm chart in this lab, and the chart supports annotations. Essentially, Prometheus will watch for pods that have these annotations, configure itself to use them, and then ingest their metrics.
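These are the conventional `prometheus.io/*` annotations that the chart's default scrape configuration looks for. As a sketch, they might appear on the app's pod template like this (the port value is based on this lab; the exact manifest may differ):

```yaml
# Pod template metadata (sketch)
metadata:
  annotations:
    prometheus.io/scrape: "true"   # opt this pod in to scraping
    prometheus.io/port: "5000"     # port where metrics are served
    prometheus.io/path: "/metrics" # path to scrape (this is also the default)
```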
First, we have to get access to the loki-stack's Grafana instance:
# make it accessible
kubectl port-forward service/loki-stack-grafana 3000:80 &
# export its admin password
GRAFANA=$(kubectl get secret loki-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo)
# copy this password
echo $GRAFANA
Then, you can view logs for the Python app here, after logging in of course.
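Once logged in, you can also query the Python app's logs directly in Grafana's Explore view with a LogQL stream selector along these lines. The label name and value here are assumptions; check which labels Loki actually attached to the app's streams:

```logql
{app="python-with-prometheus"}
```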
# delete the Kubernetes cluster
k3d cluster delete
# point back at your original kubernetes context
kubectx $CURRENT_CONTEXT
# close the terminal
exit
No local machines were (hopefully) harmed in the making of this lab.