Based on microservices-demo
A containerized online boutique app with 11 microservices, built, pushed, and deployed through a CI/CD pipeline using Docker, Jenkins, Nexus, and Argocd, on an on-premises Kubernetes cluster set up with kubeadm using Vagrant.
- Docker
- A Kubernetes cluster
- Jenkins
- Nexus
- Argocd
| Service | Language | Description |
|---|---|---|
| frontend | Go | Serves the website |
| cartservice | C# | Stores selected items in the shopping cart |
| productcatalogservice | Go | Lists, searches, and selects products |
| currencyservice | Node.js | Converts currency |
| paymentservice | Node.js | Charges the given credit card (mock) |
| shippingservice | Go | Estimates shipping cost (mock) |
| emailservice | Python | Sends emails about the transactions (mock) |
| checkoutservice | Go | Manages the cart, order, shipping, payment, and email notifications |
| recommendationservice | Python | Recommends other products |
| adservice | Java | Provides text ads |
- A Kubernetes cluster set up with kubeadm using Vagrant
- here is a guide on how to set up a k8s cluster with Vagrant and kubeadm: vagrant-k8s (a minimal bootstrap sketch follows this list)
- A Jenkins container, set up on a VM (4 CPUs, 6 GB RAM), running on port 8080
- Nexus repository on the same VM, running on port 8081
- Argocd installed on the k8s cluster and exposed on port 32000
- I recommend using the same network CIDR if you're using this infrastructure locally
- follow this guide to set up Jenkins as a container: Docker in Jenkins
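For reference, a minimal sketch of the cluster bootstrap the linked guide walks through; the advertise address and the pod CIDR (a typical Flannel value) are assumptions, not values from this repo:

```bash
# On the master VM (advertise address assumed to match the Vagrant private network):
sudo kubeadm init --apiserver-advertise-address=192.168.1.10 --pod-network-cidr=10.244.0.0/16

# On each worker VM, run the join command printed by kubeadm init:
# sudo kubeadm join 192.168.1.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```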
For the Docker registry you can use Docker Hub or another registry; in this case I used Sonatype Nexus.
install & start Nexus :
```bash
sudo apt install openjdk-17-jdk openjdk-17-jre
wget https://sonatype-download.global.ssl.fastly.net/repository/downloads-prod-group/3/nexus-3.74.0-05-unix.tar.gz
tar -zxvf nexus-3.74.0-05-unix.tar.gz
# replace nexus-version with the extracted directory, e.g. nexus-3.74.0-05
nexus-version/bin/nexus start
```
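Once Nexus is up on port 8081, the initial admin password is generated on first start and can be read from the data directory (path assumed relative to where the archive was extracted):

```bash
cat sonatype-work/nexus3/admin.password
```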
set up a docker hosted registry (Repository => Create repository => docker (hosted)) :
create a role that grants access to the created registry (Admin settings => Roles => Create role) :
create jenkins & kubernetes users in Nexus (you can create a single user for both) :
- make sure to assign the created docker hosted role to these users
allow jenkins and the k8s cluster to pull & push images (a quick sanity check follows these steps) :
- for docker runtime :
  - edit the daemon.json file :
    ```bash
    sudo nano /etc/docker/daemon.json
    ```
  - configuration example :
    ```json
    {
      "insecure-registries": ["192.168.1.16:8082"]
    }
    ```
  - restart docker :
    ```bash
    sudo systemctl restart docker
    ```
- for containerd runtime :
  - create & edit the hosts.toml file :
    ```bash
    sudo mkdir -p /etc/containerd/certs.d/192.168.1.16:8082/
    sudo nano /etc/containerd/certs.d/192.168.1.16:8082/hosts.toml
    ```
  - configuration example :
    ```toml
    server = "http://192.168.1.16:8082"

    [host."http://192.168.1.16:8082"]
      capabilities = ["pull", "resolve"]
    ```
  - modify the config file :
    ```bash
    sudo nano /etc/containerd/config.toml
    ```
  - add this :
    ```toml
    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = "/etc/containerd/certs.d"
    ```
  - restart containerd :
    ```bash
    sudo systemctl restart containerd
    ```
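A quick sanity check that the registry is reachable after the restart; the image name and tag here are just examples, and the credentials are the Nexus users created above:

```bash
# Log in with one of the Nexus users created earlier:
docker login 192.168.1.16:8082 -u jenkins

# Push/pull round-trip against the hosted registry:
docker pull alpine
docker tag alpine 192.168.1.16:8082/alpine:test
docker push 192.168.1.16:8082/alpine:test
```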
- set up a Deployment with its corresponding Service for each microservice (a Service sketch follows below)
- utilized a script to manually deploy and test the app on the k8s cluster
- make sure to add a secret in the k8s cluster to allow access to the Nexus repo using the user created previously :
  ```bash
  kubectl create secret docker-registry nexus-registry-secret \
    --docker-server=192.168.1.16:8082 \
    --docker-username=kubernetes \
    --docker-password=kubernetes
  ```
- also make sure to add imagePullSecrets and specify the secret name in each YAML file :
  ```yaml
  template:
    metadata:
      labels:
        app: frontend
    spec:
      imagePullSecrets:
        - name: nexus-registry-secret
      containers:
        - name: frontend
          image: 192.168.1.16:8082/frontend:1.0.4
  ```
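A minimal sketch of the matching Service for the frontend Deployment above; the port numbers are assumptions based on the upstream microservices-demo, not values from this repo:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort          # exposed externally; internal-only services would use ClusterIP
  selector:
    app: frontend
  ports:
    - port: 80            # service port
      targetPort: 8080    # container port the frontend listens on (assumed)
      nodePort: 30080     # assumed NodePort
```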
- Utilized webhook triggers to launch the pipeline automatically
- I recommend using Ngrok if you're planning on using webhooks on a local setup (see the sketch after this list)
- I also recommend using a multibranch pipeline for its useful plugins like the `Ignore Committer Strategy` and `Multibranch Scan Webhook Trigger`
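A minimal sketch of the Ngrok setup, assuming Jenkins listens on port 8080 and the webhook targets the Multibranch Scan Webhook Trigger endpoint:

```bash
# Tunnel the local Jenkins so GitHub can reach it:
ngrok http 8080

# Point the GitHub webhook at the generated public URL, e.g.:
# https://<random-id>.ngrok-free.app/multibranch-webhook-trigger/invoke?token=<your-token>
```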
- utilized a function to build multiple images by passing a list of (image name, Dockerfile location) pairs
- view the function here
- example :
  ```groovy
  def ImageName_DockerFileLocation = [['frontend', 'services/frontend'], ['adservice', 'services/adservice']]
  gs.build(
      '192.168.1.16:8082',          // Nexus repo URL
      env.VERSION,                  // Version
      'nexus-jenkins',              // Credential ID for the docker repo in Jenkins
      ImageName_DockerFileLocation  // List of (image name, Dockerfile location) pairs
  )
  ```
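The linked shared-library code is authoritative; as a rough sketch, such a build function could look like this (names and structure assumed):

```groovy
// vars/gs.groovy (hypothetical shared-library sketch)
def build(String registryUrl, String version, String credentialsId, List imagePairs) {
    imagePairs.each { pair ->
        def (imageName, dockerfileDir) = pair
        // Tag with the registry prefix up front so the push stage can reuse the name
        sh "docker build -t ${registryUrl}/${imageName}:${version} ${dockerfileDir}"
    }
}
```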
- another function pushes the images to the docker registry by passing in a list of the built image names
- view the function here
- example :
  ```groovy
  ImageNames = ['frontend', 'adservice']
  gs.push(
      '192.168.1.16:8082',  // Nexus URL
      env.VERSION,          // Version
      'nexus-jenkins',      // Credential ID for the docker repo in Jenkins
      ImageNames            // List of image names
  )
  ```
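And a matching sketch of the push counterpart, using the Docker Pipeline plugin's `withRegistry` step for authentication; again, the linked function is the real implementation:

```groovy
def push(String registryUrl, String version, String credentialsId, List imageNames) {
    // withRegistry handles docker login/logout using the Jenkins credential
    docker.withRegistry("http://${registryUrl}", credentialsId) {
        imageNames.each { name ->
            sh "docker push ${registryUrl}/${name}:${version}"
        }
    }
}
```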
- utilized the `$BUILD_NUMBER` built-in Jenkins pipeline variable in the versioning of the docker images
- implemented a script that changes the YAML file image version after each commit
- view the script here
  ```bash
  version=1.2.3
  file="./deployment.yml"
  sed -i "s|image: 192.168.1.16/\([^:]*\):.*|image: 192.168.1.16/\1:$version|" $file
  ```
  - searches for the exact pattern `image: 192.168.1.16/<imageName>:<version>`
    - `\([^:]*\)` : captures the `<imageName>`
    - `.*` : matches anything after the ":"
    - `\1` : keeps the same captured `<imageName>`
    - `$version` : the new version written in its place
    - `-i` : writes directly to the file
- set up jenkins to update the git repository with the new YAML file versions
- set up a git user.name and user.email for jenkins :
  ```bash
  docker exec -it jenkins bash
  git config --global user.name "jenkins"
  git config --global user.email "jenkins@jenkins.com"
  ```
- utilized a function that adds and pushes changes to a git repository (a sketch follows below)
- view the function here
- the function uses a git access token which must be configured and added to Jenkins as a `Username and Password` credential
- set up an access token in GitHub :
- make sure to add the `Ignore Committer Strategy` plugin and ignore jenkins pushes to avoid infinite build loops :
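A rough sketch of what such a push function could look like, assuming the GitHub token is stored as a `Username and Password` credential; the function name, repository URL, and branch are placeholders, and the linked function is the real implementation:

```groovy
// Hypothetical sketch; repoUrl like "github.com/<user>/<repo>.git"
def gitPush(String credentialsId, String repoUrl, String branch) {
    withCredentials([usernamePassword(credentialsId: credentialsId,
                                      usernameVariable: 'GIT_USER',
                                      passwordVariable: 'GIT_TOKEN')]) {
        // \$ leaves credential expansion to the shell instead of Groovy interpolation
        sh """
            git add .
            git commit -m 'update image versions [jenkins]'
            git push https://\$GIT_USER:\$GIT_TOKEN@${repoUrl} HEAD:${branch}
        """
    }
}
```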
- install Argocd :
  ```bash
  kubectl create namespace argocd
  kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
  kubectl delete pod -n argocd --all   # hard reset of the pods
  kubectl get pods -n argocd
  ```
- expose the UI on port 32000 :
  ```bash
  kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "targetPort": 8080, "nodePort": 32000}]}}'
  ```
- retrieve the admin password to log in to the UI :
  ```bash
  kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d
  ```
- create a new app and add the following (an equivalent Application manifest sketch follows this list) :
  - git repository url
  - branch
  - YAML manifests location in the git repository
  - destination cluster (`https://kubernetes.default.svc` for the current local cluster)
  - namespace
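The same app can also be declared as an Argocd `Application` manifest instead of using the UI; a sketch with placeholder values for the fields listed above (the manifest path is an assumption):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: market-app                                  # app name as it appears in the Argocd UI
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/<repo>.git   # git repository url
    targetRevision: main                            # branch
    path: k8s/                                      # YAML manifests location (assumed)
  destination:
    server: https://kubernetes.default.svc          # current local cluster
    namespace: default                              # target namespace
```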
- make sure the app is healthy and synced :
- Jenkins pipeline :
- Nexus repo :
- Argocd's deployment :
- kubectl get pods,svc,deploy :
- market-app :