
Demo for k6-operator

Demo files for the "Running distributed load tests with k6" video, originally presented to the Kubernetes & Cloud Native STL meetup group.

Prerequisites

  • git - For accessing the source code repositories.
  • Docker - For building our custom k6 image and running the examples.
  • kubectl - Client for talking to Kubernetes clusters.
  • go - To build and install the k6-operator.
  • yq - Used by the helper scripts to parse YAML files.

There may be other prerequisites I've failed to recall, having installed them long ago. My apologies for any issues!

Gather sources

For the demo, we'll be pulling the code sources for the operator as well as any desired extensions. For housekeeping purposes, we'll locate each repository in the dependencies directory.

# Pull down the operator which we'll install into Kubernetes.
git clone https://github.com/grafana/k6-operator.git dependencies/k6-operator

# At minimum, we're adding the ability to output test metrics to InfluxDB as well as Kafka.
git clone https://github.com/grafana/xk6-output-influxdb dependencies/xk6-output-influxdb
git clone https://github.com/grafana/xk6-output-kafka dependencies/xk6-output-kafka

Note

If you'd like additional extensions to try out, take a look at the Explore section of the k6 documentation for a listing of known extensions.

Build our customized k6 image

To create our k6 image with the desired extensions, we need to build it using xk6. Our Dockerfile sets up the Go environment and handles the build.

# Build the image to be published.
# NOTE: You'll want to change the image name from `javaducky/...` to use your Docker Hub user id!
docker build -t javaducky/demo-k6-operator:latest .

# Publish your image to Docker Hub or whichever container registry your Kubernetes cluster can access.
docker push javaducky/demo-k6-operator:latest
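
As a quick sanity check before relying on the image, recent k6 builds list any compiled-in extensions in their version output, so you can confirm the outputs made it in:

# The output should list the xk6-output-influxdb and xk6-output-kafka extensions.
docker run --rm javaducky/demo-k6-operator:latest version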

Note

If you've browsed the list of known extensions and wish to include more custom functionality, update the Dockerfile to include your desired extensions using the --with option. More details about building custom binaries with xk6 can be found in the documentation.
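
For reference, the heart of such a Dockerfile is a single xk6 invocation. Here's a minimal sketch; the module list and output location are illustrative, so check each extension's README for specifics:

# Rough sketch of the build command a Dockerfile like ours wraps.
xk6 build \
 --with github.com/grafana/xk6-output-influxdb \
 --with github.com/grafana/xk6-output-kafka \
 --output /tmp/k6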

Running a local test

Before entering the world of distributed tests in Kubernetes, let's exercise our new image in the typical way: a single instance, though we'll still use Docker to execute the test.

docker run -v $PWD:/scripts -it --rm javaducky/demo-k6-operator run /scripts/test-scripts/simple.js

👍 OR use the provided convenience script...

# Helper script to run k6 as a Docker container.
./run-local.sh test-scripts/simple.js

👉 The above runs my publicly available image; you can override the image by specifying the IMAGE_NAME environment variable, as in the following.

# To run another image, override the `IMAGE_NAME` variable.
IMAGE_NAME=my-custom-image ./run-local.sh test-scripts/simple.js

Again, this closely resembles typical usage when you have a k6 binary installed on your system: log output appears directly on the console, followed by the result summary at the end of the test.

Create a local Kubernetes cluster (optional)

I'm using k3d locally to run a Kubernetes cluster within Docker. Once installed, I use the following command to create a cluster named k6-demo-cluster.

k3d cluster create k6-demo-cluster \
 --api-port 6550 \
 -p "8081:80@loadbalancer" \
 --agents 3

Note

If you've previously created the cluster and it isn't already running, start it with k3d cluster start k6-demo-cluster.

Once this is complete, I now have a running Kubernetes cluster on which I can use kubectl as well as other tooling like k9s.
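
k3d merges the new cluster into your kubeconfig and switches your context on create, so a quick check that everything is wired up might look like:

# All server and agent nodes should report Ready.
kubectl get nodes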

Build and install the k6-operator

Important

Always ensure your kubectl is set to the appropriate context targeting the correct cluster!

We'll be building and installing the operator directly from the k6-operator source code cloned during the first step.

Tip

Helm chart available! If you'd rather install via Helm, you can install the k6-operator chart as below:

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install k6-operator grafana/k6-operator

# Change into the k6-operator source directory. (You downloaded this in the first step!)
cd dependencies/k6-operator
make deploy
cd ../..

Note

If you'd like to perform k6 browser tests, I've published custom images for the operator that can be installed instead:

# Change into the k6-operator source directory. (You downloaded this in the first step!)
cd dependencies/k6-operator
IMG_NAME="javaducky/k6-operator" IMG_TAG="with-browser" make deploy
cd ../..

At this point, the operator and the applicable custom resource definitions have been installed into your Kubernetes cluster.
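
If you'd like to verify the rollout before moving on, a default make deploy places the controller into the k6-operator-system namespace (the name may differ if you've customized the deployment):

# The controller-manager pod should reach Running status.
kubectl get pods -n k6-operator-system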

Resource setup

Now that we have a working Kubernetes cluster, let's create an isolated Namespace and add our example test scripts as a ConfigMap.

# Let's create an isolated namespace for our testing
kubectl create namespace k6-demo

# Create a ConfigMap containing each of your test scripts
kubectl create configmap test-scripts -n k6-demo \
 --from-file=./test-scripts/simple.js \
 --from-file=./test-scripts/simple-checks.js \
 --from-file=./test-scripts/simple-browser.js \
 --from-file=./test-scripts/multi-scenario.js \
 --from-file=./test-scripts/door-buster-sale.js
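
To double-check that all five scripts landed in the ConfigMap:

# Each script should appear as a data key.
kubectl describe configmap test-scripts -n k6-demo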

Grafana Cloud (Prometheus)

For my demonstration, I'm using the Free Forever tier of Grafana Cloud, which will receive Prometheus metrics during test executions.

Once signed up, update the Prometheus endpoint, user, and password (API key) placeholders in the following commands with your account values.

Tip

Take a look at the k6 docs for more information on setting up your account and API key.

# Create a ConfigMap with our non-secret configuration for our cloud account
kubectl create configmap -n k6-demo prometheus-config \
 --from-literal=K6_PROMETHEUS_RW_SERVER_URL=[YOUR REMOTE WRITE ENDPOINT] \
 --from-literal=K6_PROMETHEUS_RW_STALE_MARKERS=true

# Create a secret with our authentication data for our cloud account
kubectl create secret -n k6-demo generic prometheus-secrets \
 --from-literal=K6_PROMETHEUS_RW_USERNAME=[YOUR USERNAME] \
 --from-literal=K6_PROMETHEUS_RW_PASSWORD=[YOUR PASSWORD] 

Grafana k6 Cloud

Not only can we use the Grafana Free Forever Cloud, but we can also sign up to use the Grafana k6 Cloud offering for the same low price (FREE)!

After signing up, update the project ID and API key placeholders below, then create your Kubernetes secret.

Tip

Take a look at the k6 docs for more information on creating your API key.

kubectl create secret -n k6-demo generic k6-cloud-secrets \
 --from-literal=K6_CLOUD_PROJECT_ID=[YOUR PROJECT ID] \
 --from-literal=K6_CLOUD_TOKEN=[YOUR K6 API KEY]

Now that we have our necessary resources available in Kubernetes, we can trigger a test execution.

Running a distributed test

To perform a distributed test, you simply apply a k6 custom resource to your Kubernetes cluster using the standard kubectl tool.

# Applies the k6 custom resource to trigger a test execution
kubectl apply -n k6-demo -f resources/k6-output-grafana-cloud.yaml
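
If you haven't peeked at the resource file yet, a k6 custom resource generally looks something like the following sketch. The field values, output flag, and metadata here are illustrative assumptions based on the configuration above, not the contents of the repository's actual file:

# Illustrative sketch only -- not the actual resources/k6-output-grafana-cloud.yaml.
kubectl apply -n k6-demo -f - <<'EOF'
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: demo-test
spec:
  parallelism: 4
  arguments: -o experimental-prometheus-rw
  script:
    configMap:
      name: test-scripts
      file: simple.js
  runner:
    image: javaducky/demo-k6-operator:latest
    envFrom:
      - configMapRef:
          name: prometheus-config
      - secretRef:
          name: prometheus-secrets
EOF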

Once you've finished up, you can clear previous executions from your Kubernetes cluster in order to run the same script again.

# Post-test cleanup paves way for next test execution
kubectl delete -n k6-demo -f resources/k6-output-grafana-cloud.yaml

👍 My dashboard example makes use of a custom testid tag to keep track of each test run. For this, I use the convenience script run-kube.sh, which adds a unique timestamp to keep each test execution separate. The script will also replace a previous execution of the test resource if one already exists.

# Removes previous execution (if any), then run test with a unique `testid`
./run-kube.sh resources/k6-output-grafana-cloud.yaml
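
The actual helper ships with this repository, but conceptually it does something like the sketch below; the yq expression and the spec.arguments field are my assumptions about how the tag gets injected:

# Conceptual sketch of a run-kube.sh-style helper; not the repository's actual script.
RESOURCE="$1"
TESTID="demo-$(date +%Y%m%d-%H%M%S)"

# Remove any previous execution of this resource.
kubectl delete -n k6-demo -f "$RESOURCE" --ignore-not-found

# Append a unique testid tag to the k6 arguments, then apply.
yq ".spec.arguments += \" --tag testid=$TESTID\"" "$RESOURCE" | kubectl apply -n k6-demo -f -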

👉 The same process can be used to publish metrics to the Grafana k6 Cloud by applying k6-output-k6-cloud.yaml instead.

Warning

By default, the free-tier subscription for k6 Cloud does not allow parallel execution or multiple scenarios.

Looking for more?!

Additional scenarios are available:

  • InfluxDB - Output metrics to an InfluxDB 2.0 instance running in Docker.
