- Setting Up a Kubernetes Cluster
- Build and Containerize a Spring Boot Application
- Quickstart With Kubernetes
- Organizing YAML with Kustomize
- Developer Workflow with Skaffold
- Application Metadata with Kapp
- Building an Image with Pack
- Building in the Cluster with Kpack
- Build and Deploy with Project Riff
- Metrics Server
- Autoscaler
- Basic Observability with Prometheus
There are many choices for how to run Kubernetes. One of the simplest and least resource-intensive is the kind tool from the Kubernetes SIGs, which runs a slim cluster inside a Docker container.
$ kind create cluster
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
$
You can use that KUBECONFIG environment variable per the suggestion, but then you lose the context if you switch terminals. Or you can merge the kind cluster config with your existing config:
$ mkdir -p ~/.kube
$ KUBECONFIG="$(kind get kubeconfig-path)":~/.kube/config kubectl config view --merge --flatten > config.yaml
$ mv config.yaml ~/.kube/config
Then just use the context defined by kind:
$ kubectl config use-context kubernetes-admin@kind
Switched to context "kubernetes-admin@kind".
$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 7m13s
Most of what follows would work just fine if you create your cluster in some other way (e.g. on GCP or AWS). There are also other options for local clusters, like Minikube or K3d.
Note
|
If you have limited resources but have an internet connection, you can get a nice playground for trying out Kubernetes using the Spring Kubernetes Guides or the Kubernetes Bootcamp tutorial. You get a Kubernetes cluster and a bash terminal with kubectl, docker, java and git. All or most of the examples here should work there if you don’t want to use kind.
|
Since Kind runs in Docker, it is convenient also to run a Docker registry as a Docker container locally. That way you don’t have to figure out how to authenticate and push to a remote registry, just to play around and test basic behaviour.
First run the registry:
$ docker run -d --restart=always -p "5000:5000" --name registry registry:2
Then start the cluster. You need some configuration to tell it to use the local registry. Example:
$ reg_ip=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' registry)
$ cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
    endpoint = ["http://${reg_ip}:5000"]
EOF
Then you can use an image tag like localhost:5000/demo on the host to push the image, and also in the Kubernetes manifests to pull and run it.
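For example, once you have built an image (say myorg/demo, as we do below), pushing it into the local registry is just a re-tag and a push:
$ docker tag myorg/demo localhost:5000/demo
$ docker push localhost:5000/demo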
If you are using kind to run your cluster, the Kubernetes API is exposed on localhost on a random port. It is also exposed on the local Docker network on port 6443. On Linux (not sure about Mac and Windows) this means that you can connect to it from another container using that address, and kind will even tell you how.
First make a copy of the kube config for the internal address:
$ kind get kubeconfig --internal > ~/.kube/kind-config-internal
Then if you have a container myorg/mydevcontainer with docker and kubectl installed, you can run it like this:
$ docker run --mount type=bind,source=$HOME/.kube,target=/home/vscode/.kube -e KUBECONFIG=/home/vscode/.kube/kind-config-internal -v /var/run/docker.sock:/var/run/docker.sock -ti myorg/mydevcontainer /bin/bash
$ kubectl cluster-info
Kubernetes master is running at https://172.17.0.3:6443
KubeDNS is running at https://172.17.0.3:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Using this you can launch VSCode on a project using the remote container extension and then build and operate it from within the container.
Note
|
Rancher has another dockerized Kubernetes cluster called K3s which works in a similar way to Kind. It doesn’t have the "internal" flag though, so it’s harder to use from another container. |
Create a new application using https://start.spring.io or re-use an existing one. We will assume that you have an app that listens on port 8080 and has an HTTP endpoint, e.g.
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DemoApplication {

  @GetMapping("/")
  public String home() {
    return "Hello World!";
  }

  public static void main(String[] args) {
    SpringApplication.run(DemoApplication.class, args);
  }
}
Build and push a docker image from your app. For example, using Maven or Gradle, you can quickly create an image using the jib plugin. From Maven:
$ ./mvnw com.google.cloud.tools:jib-maven-plugin:build -Dimage=myorg/demo
This command creates an image and pushes it to Dockerhub at myorg/demo (so your local docker config has to have permission to push to myorg). Any way you can get a docker image into a registry will work, but remember that the kubernetes cluster will need to be able to pull the images, so a public registry is easiest to work with.
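If you are using the local registry from the kind setup above, something like this should work (the insecure-registry flag name here is an assumption - check the jib documentation):
$ ./mvnw com.google.cloud.tools:jib-maven-plugin:build -Dimage=localhost:5000/demo -Djib.allowInsecureRegistries=true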
Sometimes a Dockerfile is easier to work with. Here’s one that builds and runs the application above (use it with BuildKit, and remember to set DOCKER_BUILDKIT=1 in the shell where you call docker):
# syntax=docker/dockerfile:experimental
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
RUN --mount=type=cache,target=/root/.m2 ./mvnw install -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
FROM openjdk:8-jdk-alpine
RUN addgroup -S demo && adduser -S demo -G demo
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
RUN chown -R demo:demo /app
USER demo
ENTRYPOINT ["sh", "-c", "java -noverify -cp /app:/app/lib/ \
com.example.demo.DemoApplication ${0} ${@}"]
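Roughly, building and pushing with that Dockerfile looks something like this (assuming you are logged in to a registry you can push to):
$ DOCKER_BUILDKIT=1 docker build -t myorg/demo .
$ docker push myorg/demo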
A nice quick way to deploy the application to Kubernetes is to generate a YAML descriptor using kubectl --dry-run. We need a deployment and a service:
$ kubectl create deployment demo --image=myorg/demo --dry-run -o=yaml > deployment.yaml
$ echo --- >> deployment.yaml
$ kubectl create service clusterip demo --tcp=80:8080 --dry-run -o=yaml >> deployment.yaml
You can edit the YAML at this point if you need to (e.g. you can remove the redundant status and created date entries). Or you can just apply it, as it is:
$ kubectl apply -f deployment.yaml
You can check that the app is running:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/demo-658b7f4997-qfw9l 1/1 Running 0 146m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2d18h
service/demo ClusterIP 10.43.138.213 <none> 80/TCP 21h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/demo 1/1 1 1 21h
NAME DESIRED CURRENT READY AGE
replicaset.apps/demo-658b7f4997 1 1 1 21h
There is a deployment and a service, per the YAML we created above. The deployment has spawned a replicaset and a pod, which is running. The service is listening on port 80 on an internal cluster IP address - use port 80 so that service discovery via DNS works inside the cluster.
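As a quick sanity check of the DNS-based discovery, you can call the service by name from a throwaway pod inside the cluster (a sketch - the "client" pod name is arbitrary):
$ kubectl run client --rm -ti --restart=Never --image=alpine -- wget -qO- http://demo
Hello World!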
The application will have logged a normal Spring Boot startup to its console on the pod listed above. E.g.
$ kubectl logs demo-658b7f4997-qfw9l
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.0.RELEASE)
2019-10-18 08:52:37.932 WARN 1 --- [ main] pertySourceApplicationContextInitializer : Skipping 'cloud' property source addition because not in a cloud
2019-10-18 08:52:37.935 WARN 1 --- [ main] nfigurationApplicationContextInitializer : Skipping reconfiguration because not in a cloud
2019-10-18 08:52:37.943 INFO 1 --- [ main] com.example.demo.DemoApplication : Starting DemoApplication on 66675bec6ec8 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)
2019-10-18 08:52:37.943 INFO 1 --- [ main] com.example.demo.DemoApplication : No active profile set, falling back to default profiles: default
2019-10-18 08:52:38.917 INFO 1 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 2 endpoint(s) beneath base path '/actuator'
2019-10-18 08:52:39.283 INFO 1 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port(s): 8080
2019-10-18 08:52:39.287 INFO 1 --- [ main] com.example.demo.DemoApplication : Started DemoApplication in 1.638 seconds (JVM running for 2.087)
The service was created with type ClusterIP, so it is only accessible from within the cluster. Once the app is running you can use kubectl to punch through to the service and check that the endpoint is working:
$ kubectl port-forward svc/demo 8080:80
$ curl localhost:8080
Hello World!
Note
|
The Service was set up to listen on port 80. This makes it easy to use DNS for service discovery - you never need to know the port because it is just the default for HTTP. Note also that when the service was created the kubectl command had 80:8080, whereas when the port-forward was created it was transposed to 8080:80 so that port 80 is not used on the host (which can be confusing).
|
One of the benefits of having a YAML descriptor of your application in source control is that you can use it to trigger an upgrade. The workflow would be something like:
- Make a change to the app.
- Build the container: mvn install && docker build -t myorg/myapp .
- Push it to the registry: docker push myorg/myapp
- Apply the kubernetes configuration: kubectl apply -f deployment.yaml
The deployment notices that it has a new image to install, so it creates a new pod; the default imagePullPolicy: Always for an untagged image ensures the new image is pulled. Once the new pod is up and running it shuts down the old one. (Steps 2 and 3 above would be combined into one if you used jib instead of docker.) Note that if the manifest itself is unchanged, kubectl apply is a no-op, so in practice you bump the image tag (or force a restart with kubectl rollout restart) to trigger the rolling upgrade.
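A minimal sketch of that upgrade, bumping to a hypothetical v2 tag and watching the rollout complete:
$ kubectl set image deployment/demo demo=myorg/demo:v2
$ kubectl rollout status deployment/demo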
If you use kubectl port-forward to create a tunnel to the service you can only access it from localhost. If, instead, you want to share the app on the internet or LAN, you can get something up and running really quickly with ngrok. Example:
kubectl run --restart=Never -t -i --rm ngrok --image=gcr.io/kuar-demo/ngrok -- http demo:80
When ngrok starts it announces on the console a public http and https service that connects to your "demo" service. E.g.
ngrok by @inconshreveable (Ctrl+C to quit)
Session Status online
Session Expires 7 hours, 50 minutes
Version 2.1.18
Region United States (us)
Web Interface http://127.0.0.1:4040
Forwarding http://9ef2c03b.ngrok.io -> demo:80
Forwarding https://9ef2c03b.ngrok.io -> demo:80
Connections ttl opn rt1 rt5 p50 p90
1 0 0.00 0.00 0.41 0.41
HTTP Requests
-------------
GET / 404 Not Found
You can connect to the dashboard on port 4040 if you expose it as a service:
$ kubectl expose pod/ngrok --port 4040
$ kubectl port-forward svc/ngrok 4040:4040
Note
|
A global tunnel on ngrok is certainly not recommended for production apps, but is quite handy at development time.
|
This is not really ingress in the Kubernetes sense. It is a bit like a port forward, since it works at the TCP level, but more stable (the "tunnel" survives a restart of the service pods). Define this function in your shell:
function socat() {
  service=$1
  port=$2
  local_port=$3
  node_port=$(kubectl get service $service -o=jsonpath="{.spec.ports[?(@.port == ${port})].nodePort}")
  docker run -d --name kind-proxy-${local_port} \
    --publish 127.0.0.1:${local_port}:${port} \
    --link kind-control-plane:target \
    alpine/socat -dd \
    tcp-listen:${port},fork,reuseaddr tcp-connect:target:${node_port}
}
and then change the service declarations for the services you need to expose to type: NodePort. E.g.
apiVersion: v1
kind: Service
metadata:
  name: ui
spec:
  type: NodePort
  ...
then you will see it in kubectl along with the ephemeral port assigned on the node:
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ui NodePort 10.109.157.132 <none> 80:31207/TCP 3h57m
and you can expose it on localhost using socat ui 80 8080 and then curl localhost:8080 to reach it.
If your Kubernetes cluster is on bare metal (like the default one at katacoda.com) you can run socat on the host. Expose your service as type: NodePort and then run socat on the node:
$ port=80
$ service=demo
$ node_port=$(kubectl get service $service -o=jsonpath="{.spec.ports[?(@.port == ${port})].nodePort}")
$ socat -dd tcp-listen:8080,fork,reuseaddr tcp-connect:127.0.0.1:${node_port}
Then you can connect in another terminal to localhost:8080.
As soon as you need to deploy your application to more than one cluster (e.g. local, test and production environments), it becomes challenging to maintain all the different options in YAML. Ideally you want to be able to create all the options and commit them to source control. There are many tools for maintaining and organizing YAML files, many of which involve templating. Templating means replacing placeholders in files that you create with different values at deployment time. The problem with this is that the template files tend not to be valid on their own, and they are hard to read, test and maintain.
Kustomize is a template-free solution to this problem. It works by merging YAML "patches" into a "base" configuration. A patch is just the bits that change, which can be additions or replacements. Kustomize is actually built into the kubectl CLI (type kubectl kustomize --help for details), but currently pegged to an old version that doesn’t have some interesting features that we want to use (from version 3).
To get started you need a base configuration, for which we can use the deployment.yaml that we already created, and then we add a really basic kustomization.yaml:
$ mkdir -p k8s/base
$ mv deployment.yaml k8s/base
$ cat > k8s/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
With this configuration we can test that it works:
$ kustomize build k8s/base/
apiVersion: v1
kind: Service
metadata:
name: demo
...
The merged YAML is trivial in this case - it is just a copy of the deployment.yaml. It is echoed to standard out, so it can be applied to the cluster with
$ kustomize build k8s/base/ | kubectl apply -f -
The deployment.yaml that we have is fine, but it’s not very portable - you can only use it once in the same namespace because of the hard-coded labels and selectors. Kustomize has a feature that lifts that restriction and simplifies the YAML. We can use this kustomization.yaml (note the addition of the commonLabels):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
commonLabels:
  app: demo
with the labels and selectors removed from deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: app
        image: myorg/myapp
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    name: http
We can actually leave the labels and selectors in there if we want, and then the deployment.yaml is usable as a standalone manifest. Kustomize replaces them if we ask it to, but doesn’t break if we don’t.
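To see what commonLabels does, compare the output of kustomize build with the stripped-down manifest above. Roughly, the label is injected into the metadata, the selectors and the pod template, something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demo
  name: app
spec:
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    ...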
The image can also be overridden in a special way in kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
commonLabels:
  app: demo
images:
- name: myorg/myapp
  newName: myorg/demo
To add a new environment we just create a patch and a new kustomization.yaml:
$ mkdir -p k8s/prod
$ cd $_
$ touch kustomization.yaml
$ kustomize edit add base ../base
$ touch patch.yaml
$ kustomize edit add patch patch.yaml
$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patchesStrategicMerge:
- patch.yaml
$ cd ../..
The patch.yaml is still empty, so if you create a merged deployment using kustomize build k8s/prod it will be identical to the base set. Let’s add some configuration to the deployment for probes, as would be typical for an app using Spring Boot actuators:
$ cat > k8s/prod/patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  template:
    spec:
      containers:
      - name: demo
        livenessProbe:
          httpGet:
            path: /actuator/info
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 3
          timeoutSeconds: 5
        readinessProbe:
          initialDelaySeconds: 20
          periodSeconds: 10
          httpGet:
            path: /actuator/health
            port: 8080
Note
|
Sometimes network issues bounce the liveness probe for no reason on startup so we extended the timeout to 5 seconds. A startup probe might be a good idea in some cases. |
When we create the merged configuration:
$ kustomize build k8s/prod
kustomize matches the kind and metadata.name in the patch with the deployment in the base, adding the probes. You could also change the container image, port mapping, volume mounts, etc. (anything that might change between environments).
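As with the base, the merged production configuration can be piped straight into the cluster:
$ kustomize build k8s/prod | kubectl apply -f -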
Empirically, when all pods are unhealthy you get "Failed to connect" for requests inside the cluster. For requests through a port-forward you seem to get 200 responses, which is not helpful: a port-forward routes to a single pod when it is established, bypassing the service, so all traffic on that port "ignores" the readiness probe, which is managed by the service. Fortunately nobody would use a port-forward in production, and an app exposed to the outside through a load balancer or ingress would fail to connect if all pods were unhealthy.
A useful customization is to add a config map with a file called application.properties so that Spring Boot can consume it easily. The config map isn’t in the base deployment, so we add it as a resource:
$ kubectl create configmap demo-config --dry-run -o yaml > k8s/local/config.yaml
$ (cd k8s/local; kustomize edit add resource config.yaml)
Then we add the properties file
$ touch k8s/local/application.properties
$ (cd k8s/local; kustomize edit add configmap demo-config --from-file application.properties)
$ cat >> k8s/local/config.yaml
behavior: merge
You can edit the properties file to add Spring Boot configuration, e.g.
info.name=demo
Then we mount the config map in the pod:
$ touch k8s/local/mount.yaml
$ (cd k8s/local; kustomize edit add patch mount.yaml)
$ cat > k8s/local/mount.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  template:
    spec:
      containers:
      - image: dsyer/demo
        name: demo
        volumeMounts:
        - name: demo-config
          mountPath: /workspace/config/
      volumes:
      - name: demo-config
        configMap:
          name: demo-config
The file application.properties will be present inside the mounted volume /workspace/config/. Since jib created the application with a working directory of /workspace, this means that Spring Boot will automatically load the properties file for us on startup.
To update the application deployment and test the change (assuming Spring Boot actuators are on the classpath):
$ kustomize build k8s/local | kubectl apply -f -
$ kubectl port-forward svc/demo 8080:80
$ curl localhost:8080/actuator/info
{"name":"demo"}
Development and deployment can be a series of awkward, unconnected steps. Skaffold provides a way to stitch them together and take out some of the toil. A basic configuration file for the demo project could look like this:
apiVersion: skaffold/v2beta5
kind: Config
build:
  artifacts:
  - image: dsyer/demo
    context: ./demo
  local:
    useBuildkit: true
deploy:
  kustomize:
    paths:
    - ./layers/samples/simple
It will build the ./demo app using docker (but other options are available) and deploy it using the "simple" kustomization. You can add a command line option to also forward a port and report it on the command line:
$ skaffold dev --port-forward
...
Starting deploy...
- service/demo unchanged
- deployment.apps/demo unchanged
Port forwarding service/app in namespace default, remote port 80 -> address 127.0.0.1 port 4503
Watching for changes...
...
If you make a change to one of the inputs to the docker build, it will kick off again and re-deploy, bumping the image label automatically, forcing Kubernetes to do a rolling upgrade. You can also do a skaffold delete to tear down the app in one line, but if skaffold dev exits normally it will tear down the app automatically.
Skaffold supports the notion of "profiles", so you can build and deploy slightly differently in different environments. This makes it a useful building block for continuous delivery. You can also use profiles to deploy multiple services and applications from the same codebase.
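A sketch of what a profile might look like in skaffold.yaml (the profile name and the k8s/prod path are just assumptions for illustration):
profiles:
- name: prod
  deploy:
    kustomize:
      paths:
      - k8s/prod
You would then activate it with skaffold run -p prod.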
Spring Boot devtools monitors the compiled application code and restarts Spring if it sees changes in "significant" places (like .class files and .properties files). Skaffold has a neat "hot sync" feature where it can be configured to skip the build step when source files change, and just copy them into the running container in Kubernetes.
As an example, consider working with Spring Boot 2.3 and the buildpack support for building images. First we need devtools as a dependency:
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-devtools</artifactId>
  <scope>runtime</scope>
</dependency>
The application build needs to know about the devtools, so they don’t get excluded from the image. And we need to parameterize the image name:
<properties>
  <docker.image>dsyer/demo</docker.image>
</properties>
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <configuration>
        <excludeDevtools>false</excludeDevtools>
        <image>
          <name>${docker.image}</name>
        </image>
      </configuration>
    </plugin>
  </plugins>
</build>
and then you can use Spring Boot to build the image via a custom builder in skaffold.yaml:
apiVersion: skaffold/v2beta5
kind: Config
build:
  artifacts:
  - image: dsyer/demo
    context: ./demo
    custom:
      buildCommand: ./mvnw spring-boot:build-image -D docker.image=$IMAGE && docker push $IMAGE
      dependencies:
        paths:
        - pom.xml
        - src/main/resources
        - target/classes
    sync:
      manual:
      - src: "src/main/resources/**/*"
        dest: /workspace/BOOT-INF/classes
        strip: src/main/resources/
      - src: "target/classes/**/*"
        dest: /workspace/BOOT-INF/classes
        strip: target/classes/
deploy:
  kustomize:
    paths:
    - "layers/samples/simple"
The "sync" resource paths have to match something in the "dependencies" otherwise a change will trigger a build instead of a sync.
There are quite a few tools available that manage a set of Kubernetes resources, applying a label to them and allowing users to adjust the resources as a group. The lightest weight of these tools is probably kapp (from k14s). It works without admin privileges and does not use custom resources (CRDs), so you can use it as a regular user in any namespace you have access to.
You can deploy a directory (containing multiple YAML files) and dub it an application called "demo", e.g.
$ kapp deploy -a demo -f k8s/demo/
Changes
Namespace Name Kind Conds. Age Op Wait to Rs Ri
default demo-app Deployment - - create reconcile - -
^ demo-app Service - - create reconcile - -
Op: 7 create, 0 delete, 0 update, 0 noop
Wait to: 7 reconcile, 0 delete, 0 noop
Continue? [yN]: y
10:10:36AM: ---- applying 2 changes [0/2 done] ----
10:10:36AM: create service/demo-app (v1) namespace: default
10:10:36AM: create deployment/demo-app (apps/v1) namespace: default
10:10:37AM: ---- waiting on 2 changes [0/2 done] ----
10:10:37AM: ok: reconcile service/demo-app (v1) namespace: default
10:10:37AM: ongoing: reconcile deployment/demo-app (apps/v1) namespace: default
10:10:37AM: ^ Waiting for 1 unavailable replicas
10:10:37AM: L ok: waiting on replicaset/demo-app-66ddc7584c (apps/v1) namespace: default
10:10:37AM: L ongoing: waiting on pod/demo-app-66ddc7584c-8rwgv (v1) namespace: default
10:10:37AM: ^ Pending: ContainerCreating
10:10:41AM: ok: reconcile deployment/demo-app (apps/v1) namespace: default
10:10:41AM: ---- applying complete [2/2 done] ----
10:10:41AM: ---- waiting complete [2/2 done] ----
Succeeded
If you apply the same manifest twice it’s a no-op:
$ kapp deploy -a demo -f k8s/demo/
Changes
Namespace Name Kind Conds. Age Op Wait to Rs Ri
Op: 0 create, 0 delete, 0 update, 0 noop
Wait to: 0 reconcile, 0 delete, 0 noop
Succeeded
Using kapp deploy is like kubectl apply but with more features. It looks at what you want to apply and summarizes, then asks you (by default) if you want to proceed. Then it waits until all the changes are applied and reconciled, so at the end all your application pods are running and connected to each other. It adds metadata to the application objects, and stores its own state in a config map called <appname>-change-<hash>.
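A few other kapp commands are handy for working with the application as a group (a quick sketch - see kapp --help for the full set):
$ kapp ls
$ kapp inspect -a demo
$ kapp delete -a demo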
You can tail the logs from all of an application’s pods:
$ kapp logs -f -a demo
...
demo-app-66ddc7584c-8rwgv > app | 2019-11-06 10:11:09.655 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
demo-app-66ddc7584c-8rwgv > app | 2019-11-06 10:11:09.657 INFO 1 --- [ main] DemoApplication : Started DemoApplication in 4.895 seconds (JVM running for 5.277)
You can use kapp with kustomize in a one-liner like this:
$ kapp deploy -a demo -f <(kustomize build k8s/demo)
...
(A pipe doesn’t work because of the [yN] prompt.)
The Pack CLI can be used to build a container image with Cloud Native Buildpacks (as an alternative to jib, or docker). There are many advantages to using Cloud Native Buildpacks, most of which are related to the ability, in principle, to patch images without rebuilding the app or even changing the application code.
Download the CLI and set it up:
$ pack set-default-builder cloudfoundry/cnb:bionic
Then you can build your app (from the top-level source directory) and create an image in one line:
$ pack build myorg/demo -p .
Pulling image index.docker.io/cloudfoundry/cnb:bionic
bionic: Pulling from cloudfoundry/cnb
...
===> DETECTING
[detector] ======== Results ========
[detector] skip: org.cloudfoundry.archiveexpanding@v1.0.68
[detector] pass: org.cloudfoundry.openjdk@v1.0.36
[detector] pass: org.cloudfoundry.buildsystem@v1.0.86
[detector] pass: org.cloudfoundry.jvmapplication@v1.0.52
[detector] pass: org.cloudfoundry.tomcat@v1.0.86
[detector] pass: org.cloudfoundry.springboot@v1.0.70
[detector] pass: org.cloudfoundry.distzip@v1.0.69
[detector] skip: org.cloudfoundry.procfile@v1.0.28
[detector] skip: org.cloudfoundry.azureapplicationinsights@v1.0.73
[detector] skip: org.cloudfoundry.debug@v1.0.73
[detector] skip: org.cloudfoundry.googlestackdriver@v1.0.22
[detector] skip: org.cloudfoundry.jdbc@v1.0.72
[detector] skip: org.cloudfoundry.jmx@v1.0.70
[detector] skip: org.cloudfoundry.springautoreconfiguration@v1.0.79
[detector] Resolving plan... (try #1)
[detector] Success! (6)
...
===> BUILDING
[builder]
[builder] Cloud Foundry OpenJDK Buildpack v1.0.36
[builder] OpenJDK JDK 11.0.4: Reusing cached layer
[builder] OpenJDK JRE 11.0.4: Reusing cached layer
...
[builder] [INFO] BUILD SUCCESS
[builder] [INFO] ------------------------------------------------------------------------
[builder] [INFO] Total time: 01:23 min
[builder] [INFO] Finished at: 2019-10-18T12:16:46Z
[builder] [INFO] ------------------------------------------------------------------------
...
[cacher] Caching layer 'org.cloudfoundry.springboot:spring-boot' with SHA sha256:6a1b3476da1c56f889f48d9f69dbe7e35369d4db880ac0f8226a2d9bc5fa65f8
Successfully built image myorg/demo
Just like the jib example, this pushes the image to Dockerhub. To push to a different registry you just need a prefix on the image tag, e.g. for Google Container Registry (assuming you have a project called "myorg"):
$ pack build gcr.io/myorg/demo -p .
Instead of building from source, you can also build an image from a JAR file. E.g.
$ pack build myorg/demo -p target/*.jar
The resulting image can be run locally with docker, or deployed to kubernetes using the YAML we created already.
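For example, to smoke test it locally before deploying:
$ docker run -p 8080:8080 myorg/demo
$ curl localhost:8080
Hello World!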
To automate the build, and benefit from some neat tooling for managing base images and things like JDK patches, you can build in the cluster with Kpack. Kpack is a bunch of kubernetes resources that allow you to automatically build and maintain application images from within a cluster. Install it according to the instructions in the README (it’s just a YAML file you can apply to the cluster). E.g.
$ kubectl apply -f https://github.com/pivotal/kpack/releases/download/v0.0.5/release-0.0.5.yaml
You need to define a "builder" for the cluster, similarly to the way we set up the default builder for pack:
$ kubectl apply -f -
apiVersion: build.pivotal.io/v1alpha1
kind: ClusterBuilder
metadata:
  name: default-builder
spec:
  image: cloudfoundry/cnb:bionic
You will also need a service account and a secret that allows the service account to push to a Docker registry. There is an example in the online tutorial (steps 1 and 2). Create a service account called "service-account" in the default namespace, to keep it consistent with the sample YAML in the next paragraph. For example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account
secrets:
- name: registry-credentials
---
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials
  annotations:
    build.pivotal.io/docker: index.docker.io
type: kubernetes.io/basic-auth
stringData:
  username: <dockerhub-username>
  password: <dockerhub-password>
To start with you declare an "image" resource.
$ kubectl apply -f -
apiVersion: build.pivotal.io/v1alpha1
kind: Image
metadata:
  name: demo
spec:
  tag: myorg/demo
  serviceAccount: service-account
  builder:
    name: default-builder
    kind: ClusterBuilder
  source:
    git:
      url: https://github.com/myorg/demo
      revision: master
Note that the tag specified above has no prefix, so it defaults to index.docker.io. A successful build will result in a push to Dockerhub.
An image resource creates a source resolver that monitors your source code (e.g. looking for git commits). When the source changes, a build resource is created, which spawns a new pod to build your application. You can see these resources in kubernetes:
$ kubectl get pods,images,sourceresolvers,build
NAME READY STATUS RESTARTS AGE
pod/demo-build-1-52rws-build-pod 0/1 Completed 0 3h43m
NAME LATESTIMAGE READY
image.build.pivotal.io/demo index.docker.io/myorg/demo@sha256:8af46... True
NAME AGE
sourceresolver.build.pivotal.io/demo-source 25h
NAME IMAGE SUCCEEDED
build.build.pivotal.io/demo-build-1-52rws index.docker.io/myorg/demo@sha256:8af46... True
The pod showing there is the one that ran the first (index "1") build for the "demo" image. The build was successful, as we can tell from the image and the build resources. If it had failed the status would be Error (probably), and we could investigate the failure by asking kubernetes to describe the pod. It has a number of init containers:
$ kubectl get pod demo-build-1-52rws-build-pod -o jsonpath='{.spec.initContainers[*].name}'
creds-init source-init prepare detect restore analyze build export cache
One of the init containers would have failed, and hopefully emitted logs. E.g.
$ kubectl logs demo-build-1-52rws-build-pod -c build
Cloud Foundry OpenJDK Buildpack v1.0.36
OpenJDK JRE 11.0.4: Reusing cached layer
Cloud Foundry JVM Application Buildpack v1.0.52
Executable JAR: Contributing to layer
Writing CLASSPATH to shared
Process types:
executable-jar: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
task: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
web: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
...
You can also get a summary of the init container logs using the logs utility, downloadable from the Kpack releases page. E.g.
$ logs -image demo
{"level":"info","ts":1571388662.353281,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
...
Note that logs never exits - it’s like tail -f. A successful build shows the image being created:
$ logs -image demo
...
Reusing layer 'org.cloudfoundry.jvmapplication:executable-jar' with SHA sha256:4504416...
Exporting layer 'org.cloudfoundry.springboot:spring-boot' with SHA sha256:fa22107...
Exporting layer 'org.cloudfoundry.springautoreconfiguration:auto-reconfiguration' with SHA sha256:55c92a2c...
*** Images:
myorg/demo - succeeded
index.docker.io/myorg/demo:b2.20191018.091148 - succeeded
*** Digest: sha256:8af467...
...
The image can then be pulled from myorg/demo:latest, or from the explicit, generated build label (b2.20191018.091148 in this case), or from the sha256 digest (as per the output from kubectl). E.g.
$ docker run -p 8080:8080 myorg/demo@sha256:8af467...
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.0.RELEASE)
...
2019-10-18 08:52:39.283 INFO 1 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port(s): 8080
2019-10-18 08:52:39.287 INFO 1 --- [ main] com.example.demo.DemoApplication : Started DemoApplication in 0.948 seconds (JVM running for 1.087)
Instead of building from a github source, you can build from an HTTP(S) URL that points to an archive. The archive contains the source code of your application, or it can be a Spring Boot executable JAR. You could use that to build from an artifactory repository, for instance. We can try it out using a simple HTTP server that accepts data on POST and serves it back on a GET. Such a server could be written easily in any language, but an example is available in dockerhub as dsyer/server, listening on port 3001. So we deploy this container as a service in the cluster:
$ kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: server-app
spec:
  ports:
  - port: 3001
    protocol: TCP
    targetPort: 3001
  selector:
    app: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app
  name: server-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - image: dsyer/server
        name: app
        ports:
        - containerPort: 3001
          name: http
then expose the service on the host using kubectl port-forward svc/server-app 3001:3001. At this point we can push a JAR file up into the server:
$ curl -v localhost:3001/app.jar --data-binary @target/docker-demo-0.0.1-SNAPSHOT.jar
at which point the JAR is available from the server at /app.jar. So we can create the image resource like this:
$ kubectl apply -f -
apiVersion: build.pivotal.io/v1alpha1
kind: Image
metadata:
  name: demo
spec:
  tag: dsyer/demo
  serviceAccount: service-account
  builder:
    name: default-builder
    kind: ClusterBuilder
  source:
    blob:
      url: http://server-app:3001/app.jar
Once that image resource is noticed by kpack it triggers a build and the container is pushed to the registry configured in the builder. To create a new image you need to change the URL and re-apply the YAML (there is currently no way to monitor a blob source for changes). It’s fine to re-use URLs though, so you can always build the "latest" version using a blue-green naming convention, alternating between the two.
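A sketch of that workflow, assuming a "blue" name as your convention (the patch path mirrors the Image spec above):
$ curl -v localhost:3001/app-blue.jar --data-binary @target/docker-demo-0.0.1-SNAPSHOT.jar
$ kubectl patch image demo --type merge -p '{"spec":{"source":{"blob":{"url":"http://server-app:3001/app-blue.jar"}}}}'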
Riff is a container runtime with strong links to pack and kpack for building images. It can build and deploy "functions", and also "applications" (HTTP endpoints), and you can also bring your own container. There is a CLI to download, and a Getting Started Guide (the Minikube version works with kind if you start from the section entitled Install Helm). Install the riff system in the cluster:
$ helm repo add projectriff https://projectriff.storage.googleapis.com/charts/releases
$ helm repo update
$ helm install projectriff/riff --name riff --version 0.4.x
Now you can use the CLI to build an image and deploy it. From the simple Spring Boot application we used above, we first build an image and install it as an "application" in the cluster:
$ ./mvnw install
$ riff application create demo --image myorg/myapp --local-path ./target/*.jar
...
2019/11/07 11:32:16.070533 DEBUG: ===> CACHING
[cacher] Reusing layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4' with SHA sha256:636cde73aeca34a1e8730cdb74c4566fbf6ac7646fbbb2370b137ace1b4facf2
[cacher] Reusing layer 'org.cloudfoundry.jvmapplication:executable-jar' with SHA sha256:3d9310c8403c8710b6adcd40999547d6dc790513c64bba6abc7a338b429c35d2
[cacher] Reusing layer 'org.cloudfoundry.springboot:spring-boot' with SHA sha256:72b57201988836b0e1b1a6ab1f319be47aee332031850c1f4cd29b010f6a0f22
[cacher] Reusing layer 'org.cloudfoundry.springautoreconfiguration:0d524877db7344ec34620f7e46254053568292f5ce514f74e3a0e9b2dbfc338b' with SHA sha256:8768e331517cabc14ab245a654e48e01a0a46922955704ad80b1385d3f033c28
Created application "demo"
Note
|
Like with pack you can either build from source or from the executable jar file. In fact, riff is using exactly the same mechanism to build the container, embedding the same libraries and using the same builders. Riff has a custom builder for functions, but applications use the off-the-shelf Cloud Foundry builder.
|
Note
|
Riff can also build in the cluster, replacing --local-path with a --git-repo. We are focusing here on the "local" developer experience - no remote git repo is needed and everything can be built on the desktop.
|
At this point it is not running, but the image has been pushed to dockerhub, and there is a resource in the cluster that knows how to locate it:
$ kubectl get applications
NAME READY REASON
demo True
To create a deployment we need to bind the application to a deployer:
$ riff core deployer create demo --application-ref demo --tail
...
default/demo-deployer-6b4886c95c-jwbz8[handler]: 2019-11-07 11:56:34.897 INFO 1 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port(s): 8080
default/demo-deployer-6b4886c95c-jwbz8[handler]: 2019-11-07 11:56:34.900 INFO 1 --- [ main] com.example.demo.DemoApplication : Started DemoApplication in 1.403 seconds (JVM running for 1.819)
At this point there is a regular deployment and service (listening on port 80):
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/demo-deployer-6b4886c95c-jwbz8 1/1 Running 0 2m46s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/demo-deployer ClusterIP 10.101.180.61 <none> 80/TCP 2m46s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d20h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/demo-deployer 1/1 1 1 2m46s
NAME DESIRED CURRENT READY AGE
replicaset.apps/demo-deployer-6b4886c95c 1 1 1 2m46s
...
So we can connect to it using a port forward (for instance):
$ kubectl port-forward svc/demo-deployer 8080:80
$ curl localhost:8080
Hello World!
To update the application we delete it and re-create it. After making a change to the jar file:
$ riff application delete demo
$ riff application create demo --image myorg/myapp --local-path ./target/*.jar
Once the image is updated, the cluster will launch a new pod and switch traffic over to it when it comes up.
The Riff CLI is just a convenience wrapper around a container build, plus a few lines of YAML. If you already built the container a different way, like with a Dockerfile, you can create the YAML manually and simply apply it with kubectl. The two riff invocations result in 2 API objects:
$ kubectl apply -f -
apiVersion: build.projectriff.io/v1alpha1
kind: Application
metadata:
  name: demo
spec:
  image: myorg/myapp
---
apiVersion: core.projectriff.io/v1alpha1
kind: Deployer
metadata:
  name: demo
  labels:
    created: manual
spec:
  build:
    applicationRef: demo
  template:
    containers:
    - name: handler
Since we built our own container, instead of Application and applicationRef we could specify that directly in the Deployer, i.e.
apiVersion: core.projectriff.io/v1alpha1
kind: Deployer
metadata:
  name: demo
  labels:
    created: manual
spec:
  template:
    containers:
    - name: handler
      image: myorg/myapp
If there is a change in the container, we need to change the tag and update the Deployer resource (e.g. you can use myorg/myapp:red and myorg/myapp:black). If a new container is detected it will cause the deployer to do a rolling update on the application replicaset.
There is also a Container resource that you could use to specify the container image and attach that to the Deployer via a containerRef:
$ kubectl apply -f -
apiVersion: build.projectriff.io/v1alpha1
kind: Container
metadata:
  name: demo
spec:
  image: myorg/myapp
---
apiVersion: core.projectriff.io/v1alpha1
kind: Deployer
metadata:
  name: demo
  labels:
    created: manual
spec:
  build:
    containerRef: demo
  template:
    containers:
    - name: handler
Then you can delete the Container resource and re-create it when the image changes:
$ kubectl delete container demo
$ kubectl apply -f -
apiVersion: build.projectriff.io/v1alpha1
kind: Container
metadata:
  name: demo
spec:
  image: myorg/myapp
If we had been using an Application and building using the riff builder in the cluster, there would be no need to delete and re-create. But if the container is built outside the cluster then we need to make a change so the feedback loop can kick off.
If you are running a MySQL service already on the cluster you can bind to it using the --env and --envFrom options on the riff core deployer create command. Or you can create some YAML and bind to the configuration in the deployer spec. Example:
apiVersion: core.projectriff.io/v1alpha1
kind: Deployer
metadata:
  name: petclinic
  labels:
    created: manual
spec:
  template:
    containers:
    - name: handler
      image: myorg/petclinic
      env:
      - name: MYSQL_HOST
        valueFrom:
          configMapKeyRef:
            key: MYSQL_HOST
            name: env-config
Combine that with a config map called "env-config" that was created by your MySQL service, and you have a functional Pet Clinic.
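If you don’t have such a config map yet, a minimal hand-rolled one could be created like this (assuming a MySQL service named "mysql" in the same namespace):
$ kubectl create configmap env-config --from-literal=MYSQL_HOST=mysql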
The deployer spec is just a pod spec, so you can add other things as well, like volume mounts. If you have an application.properties file in a config map called "mysql-config", then this might be a good way to read it into the Spring Boot application. Here’s a kustomize patch for the deployer:
apiVersion: core.projectriff.io/v1alpha1
kind: Deployer
metadata:
  name: petclinic
spec:
  template:
    containers:
    - name: handler
      env:
      - name: SPRING_CONFIG_LOCATION
        valueFrom:
          configMapKeyRef:
            key: SPRING_CONFIG_LOCATION
            name: env-config
      imagePullPolicy: Always
      volumeMounts:
      - name: mysql-config
        mountPath: /config/mysql
    volumes:
    - name: mysql-config
      configMap:
        name: mysql-config
where SPRING_CONFIG_LOCATION=classpath:/,file:///config/mysql/ is set separately in the "env-config" map.
A full kustomization.yaml looks like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- config.yaml
- deployer.yaml
patchesStrategicMerge:
- binding.yaml
configMapGenerator:
- name: env-config
  behavior: merge
  literals:
  - SPRING_CONFIG_LOCATION=classpath:/,file:///config/mysql/
where config.yaml just has the empty env-config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
deployer.yaml is the container and deployer declarations, and binding.yaml is the patch with the volume mount.
First make sure you have a CPU request in your app container, e.g.:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  template:
    spec:
      containers:
      - ...
        resources:
          requests:
            cpu: 200m
          limits:
            cpu: 500m
You need a [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) to benefit from kubectl top and the [Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). Kind doesn’t support the metrics server [out of the box](kubernetes-sigs/kind#398):
$ kubectl top pod
W0323 08:01:25.173488 18448 top_pod.go:266] Metrics not available for pod default/app-5f969c594d-79s79, age: 65h4m54.173475197s
error: Metrics not available for pod default/app-5f969c594d-79s79, age: 65h4m54.173475197s
But you can install it using the manifests in the [Metrics Server source code](https://github.com/kubernetes-sigs/metrics-server/blob/master/deploy/kubernetes/). It is available here as well with some tweaks to do with service ports and secrets:
$ kubectl apply -f metrics/manifest.yaml
$ kubectl top pod
NAME CPU(cores) MEMORY(bytes)
app-79fdc46f88-mjm5c 217m 143Mi
Note
|
You might need to recycle the application Pods to make them wake up to the metrics server. |
First, as above, make sure you have a CPU request in your app container.
And recycle the deployment (Skaffold will do it for you). Then add an autoscaler:
$ kubectl autoscale deployment app --min=1 --max=3
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
app Deployment/app 5%/80% 1 3 1 9s
Hit the endpoints hard with (e.g.) Apache Bench:
$ ab -c 100 -n 10000 http://localhost:4503/actuator/
and you should see it scale up:
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
app Deployment/app 112%/80% 1 3 2 7m25s
and then back down:
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
app Deployment/app 5%/80% 1 3 1 20m
Note
|
If you update the app and it restarts or redeploys, the CPU activity on startup can trigger an autoscale up. Kind of nuts. It’s potentially a thundering herd. |
The kubectl autoscale command generates a manifest for the "hpa" something like this:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app
spec:
  maxReplicas: 3
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
First download and install the Helm CLI. Then initialize it (assuming you have RBAC enabled in your cluster):
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
$ helm init --upgrade --service-account default
$ helm list
The result is empty, but if there are no errors then you are ready to start. More docs online.
A minimal, ephemeral (not for production use) prometheus:
$ helm install stable/prometheus --name prometheus --set=server.persistentVolume.enabled=false,alertmanager.enabled=false,kubeStateMetrics.enabled=false,pushgateway.enabled=false,nodeExporter.enabled=false
$ kubectl port-forward svc/prometheus-server 8000:80
With prometheus running, your Spring Boot application needs to expose metrics in the right format. To do that we just need a couple of dependencies:
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-core</artifactId>
</dependency>
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
And we need some configuration in the application to expose the endpoint:
management.endpoints.web.exposure.include=prometheus,info,health
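You can check the scrape endpoint by hand (through the port-forward used earlier); the output is the plain-text Prometheus exposition format, something like:
$ curl localhost:8080/actuator/prometheus | head
# HELP jvm_memory_used_bytes The amount of used memory
# TYPE jvm_memory_used_bytes gauge
...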
Then, finally, we need to tell prometheus where the endpoint is (it looks at /metrics on port 80 by default). So in the kubernetes deployment we add some annotations:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  template:
    metadata:
      annotations:
        prometheus.io/path: /actuator/prometheus
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
    ...
The annotations are picked up by "scraping rules" that were defined for us in the helm chart.
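To verify the scraping, port-forward to the prometheus server (as above) and check its targets page, or try a query - http_server_requests_seconds_count is the standard metric Spring Boot publishes for HTTP endpoints:
$ kubectl port-forward svc/prometheus-server 8000:80
Then browse http://localhost:8000/targets, or run a query like rate(http_server_requests_seconds_count[1m]) in the graph view.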
TODO:
- Security for the actuator endpoint
- Kubernetes native actuators (like in PCF)
- Describe MySQL set up: hand-rolled and CNB bindings