WIP: Feature/storage #10

Open · wants to merge 4 commits into master

2 changes: 2 additions & 0 deletions .gitignore
@@ -1 +1,3 @@
deployment
rs-key
google-credentials
94 changes: 72 additions & 22 deletions README.md
@@ -1,6 +1,6 @@
# kubernetes-mongodb-cluster

A scalable kubernetes cluster for SSL secured mongodb.
A scalable kubernetes cluster for SSL secured mongodb on GKE with backups.

![issues](https://img.shields.io/github/issues/AlexsJones/kubernetes-mongodb-cluster.svg)
![forks](https://img.shields.io/github/forks/AlexsJones/kubernetes-mongodb-cluster.svg)
@@ -9,50 +9,99 @@ A scalable kubernetes cluster for SSL secured mongodb.
![twitter](https://img.shields.io/twitter/url/https/github.com/AlexsJones/kubernetes-mongodb-cluster.svg?style=social)


Built on the great work of others, brought together in k8s manifests.

- GKE local disks
- Backups with FUSE to Google storage
- Statefulset
- Node/Pod affinity keys
- ConfigMap for mongo.conf, boot options and per-environment tuning
- Service discovery with sidecars
- Supports autoscaling
- Example built with a generated SSL cert

Influenced and inspired by:
- https://github.com/MichaelScript/kubernetes-mongodb
- https://github.com/cvallance/mongo-k8s-sidecar
- My own experience trying to implement this: https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets/

## Dependencies

```
- golang
- go get github.com/AlexsJones/vortex
- google cloud platform (for a few annotations e.g. load balancer and pvc)
- google cloud platform (for a few annotations, e.g. load balancer and PVC) and a GKE cluster
```
## Get me started

If you want to start from absolute zero, here are the commands to build the cluster on GKE:

1.

```
gcloud container clusters create mongodbcluster --num-nodes 1 --node-locations=europe-west2-a,europe-west2-b,europe-west2-c --local-ssd-count 3 --region=europe-west2 --labels=type=mongodb --node-labels=node-type=mongodb --machine-type=n1-standard-8

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value core/account)
```
kubectl create ns mongodb
./build_environment.sh dev
./generate_pem.sh <SomePassword>
kubectl apply -f deployment/mongo
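
A quick check that the node pool came up with the expected node label (set by the `--node-labels` flag above):

```
kubectl get nodes -l node-type=mongodb
```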

2.

- Provide a service account with access to the `storage_bucket` as defined in `environments/<yourenv>`, e.g. `storage_bucket: mybucketintheus`. *It must have storage object get/list/create access*

- Download a key for this service account locally, e.g. `gcloud iam service-accounts keys create google-credentials --iam-account <EMAIL_OF_SVC_ACCOUNT>`

- `kubectl create secret generic google-credentials --from-file=google-credentials -n mongodb`
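
If the bucket or service account doesn't exist yet, a minimal sketch (the names are illustrative; `roles/storage.objectAdmin` grants object get/list/create):

```
gcloud iam service-accounts create mongodb-backup
gsutil mb -l europe-west2 gs://mybucketintheus
gsutil iam ch serviceAccount:mongodb-backup@<PROJECT_ID>.iam.gserviceaccount.com:roles/storage.objectAdmin gs://mybucketintheus
gcloud iam service-accounts keys create google-credentials --iam-account mongodb-backup@<PROJECT_ID>.iam.gserviceaccount.com
```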

3.

Follow with the deployment (production-gke):

```
./build_environment.sh production-gke
./generate_pem.sh <SomePassword>
kubectl apply -f deployment/gke-storage -n mongodb
kubectl apply -f deployment/mongo -n mongodb
```

_To confirm the local disks are attached, run the following_

```
❯ kubectl get pvc -n mongodb
NAME            STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-mongod-0   Bound    local-pv-46a6870e   368Gi      RWO            local-scsi     1m
data-mongod-1   Bound    local-pv-93823dd3   368Gi      RWO            local-scsi     40s
data-mongod-2   Bound    local-pv-69642ae6   368Gi      RWO            local-scsi     17s

```
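
Optionally, watch the pods come up (the StatefulSet is named `mongod`, per the PVC names above):

```
kubectl rollout status statefulset/mongod -n mongodb
```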

4.




## But I don't like GKE and/or I'm on another provider

If you do not wish to use GKE or local-scsi, check that your `environments/<env>` does not have the local-scsi storage class set, then run the following deployment:


```
./build_environment.sh <env>
./generate_pem.sh <SomePassword>
kubectl apply -f deployment/mongo -n mongodb
```

## Test it works

The mongo-job runs the following command

```
kubectl exec -it mongod-0 -c mongod -- mongo --host 127.0.0.1:27017 --authenticationDatabase admin --username root --password root --eval "rs.status()"
kubectl exec -it mongod-0 -n mongodb -c mongod -- mongo --host 127.0.0.1:27017 --authenticationDatabase admin --username root --password root --eval "rs.status()"
```

Execute the job with

```
kubectl apply -f deployment/utils/job.yaml
kubectl apply -f deployment/utils/job.yaml -n mongodb
```
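
Then inspect the result (Kubernetes adds a `job-name` label automatically; substitute the job's name from `deployment/utils/job.yaml`):

```
kubectl get jobs -n mongodb
kubectl logs -n mongodb -l job-name=<job-name>
```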

## Configuration

- Primary mongodb configuration is within `templates/mongo/configmap.yaml`, covering WiredTiger settings and log paths

- Within `templates/mongo/statefulset.yaml` the mongod boot options can be changed to suit requirements.
@@ -70,14 +119,15 @@ mongodb:

These can be changed in the per-environment file under `environments/`
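
For example, the `mongodb` block shipped in `environments/production-gke.yaml`:

```
mongodb:
  rootusername: "root"
  rootpassword: "root"
  replsetname: "MainRepSet"
  sslmode: "preferSSL"
```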


### Restoring a database backup

Since 1.10 you can upload mongodumps into ConfigMaps, or, using `utils/pod-mongorestore.yaml`, just `kubectl cp` the dump into the pod
and then execute the restore from the backup file
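
A rough sketch of the `kubectl cp` route (the pod name, dump path and host below are assumptions, not taken from the manifest):

```
kubectl cp ./dump mongodb/mongorestore:/tmp/dump
kubectl exec -it mongorestore -n mongodb -- mongorestore --host mongod-0.mongodb-service \
  --username root --password root --authenticationDatabase admin /tmp/dump
```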


## Using UI tools

Tools such as Studio 3T (formerly MongoChef) or Robo 3T can be used with their direct connection mode on localhost:27017 after running
`kubectl port-forward mongod-0 27017:27017 -n mongodb`
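
The same port-forward also works for the stock mongo shell (credentials per the environment file):

```
kubectl port-forward mongod-0 27017:27017 -n mongodb &
mongo --host 127.0.0.1:27017 --authenticationDatabase admin --username root --password root
```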


### Credits

Influenced and inspired by:
- https://github.com/MichaelScript/kubernetes-mongodb
- https://github.com/cvallance/mongo-k8s-sidecar
- My own experience trying to implement this: https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets/
9 changes: 6 additions & 3 deletions environments/default.yaml
@@ -20,10 +20,13 @@ dev: "true"
image: "{{.image}}"
namespace: "{{.namespace}}"
replica: "3"
affinity_key: "{{.affinitykey}}"
affinity_selector: "{{.affinityselector}}"
with_affinity: "true"
pod_affinity_key: "app"
pod_affinity_selector: "mongodb"
node_affinity_key: "{{.affinitykey}}"
node_affinity_selector: "{{.affinityselector}}"
storage_size: "{{.storagesize}}"
storage_class: "{{.storageclass}}"
storage_class: "{{.storageclass}}" #local-scsi can be selected on GKE with the ./templates/gke-storage applied
resources:
requests:
cpu: "{{.cpurequest}}"
14 changes: 9 additions & 5 deletions environments/dev.yaml
@@ -1,11 +1,15 @@
dev: "true"
image: "mongo"
image: "mongo:3.6"
namespace: "mongodb"
replica: "3"
affinity_key: "app"
affinity_selector: "mongodb"
with_affinity: "false"
# These will not apply (with_affinity is "false") -----------------
pod_affinity_key: "app"
pod_affinity_selector: "mongodb"
node_affinity_key: "node-type"
node_affinity_selector: "mongodb"
# --------------------------------------
storage_size: "1Gi"
storage_class: "fast-retain"
storage_class: "fast-retain" #local-scsi can be selected on GKE with the ./templates/gke-storage applied
resources:
requests:
cpu: "0.2m"
27 changes: 27 additions & 0 deletions environments/production-gke.yaml
@@ -0,0 +1,27 @@
image: "mongo:3.6"
namespace: "mongodb"
replica: "3"
with_affinity: "true"
pod_affinity_key: "app"
pod_affinity_selector: "mongodb"
node_affinity_key: "node-type"
node_affinity_selector: "mongodb"
with_gcs_backups: "true"
gcsfuse:
storage_bucket: mongodb-gcs-backups
storage_size: "250Gi"
storage_class: "local-scsi" #local-scsi can be selected on GKE with the ./templates/gke-storage applied
resources:
requests:
cpu: "2"
memory: "1000Mi"
limits:
cpu: "3"
memory: "30000Mi"
mongodb:
rootusername: "root"
rootpassword: "root"
replsetname: "MainRepSet"
sslmode: "preferSSL"
mongosidecar:
sslenabled: "true"
6 changes: 6 additions & 0 deletions gcsfuse/Dockerfile
@@ -0,0 +1,6 @@
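# Minimal Ubuntu image with gcsfuse installed, used to mount the GCS backup bucket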
FROM ubuntu:xenial

RUN apt-get update && apt-get install curl -y

RUN echo "deb http://packages.cloud.google.com/apt gcsfuse-xenial main" | tee /etc/apt/sources.list.d/gcsfuse.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && apt-get update && apt-get install gcsfuse -y
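
For reference, a container built from this image mounts a bucket roughly like so (the key path and mount point are assumptions matching the secret created in step 2 of the README):

```
gcsfuse --key-file=/etc/google-credentials/google-credentials mybucketintheus /backups
```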
4 changes: 4 additions & 0 deletions gcsfuse/build_and_push.sh
@@ -0,0 +1,4 @@
#!/bin/bash

docker build -t tibbar/gcsfuse:latest .
docker push tibbar/gcsfuse:latest
2 changes: 1 addition & 1 deletion generate_pem.sh
@@ -1,3 +1,4 @@
kubectl create ns mongodb || true
echo 'Generating self signed certificate'
KEY=$1
openssl genrsa -des3 -passout pass:$KEY -out server.pass.key 2048
@@ -15,4 +16,3 @@ openssl rand -base64 741 > rs-key
chmod 0400 rs-key
kubectl --namespace=mongodb delete secret mongodb-rs-key || true
kubectl --namespace=mongodb create secret generic mongodb-rs-key --from-file=rs-key
rm rs-key
1 change: 0 additions & 1 deletion mongo-k8s-sidecar
Submodule mongo-k8s-sidecar deleted from 770b1c
128 changes: 128 additions & 0 deletions templates/gke-storage/scsi-storage.yaml
@@ -0,0 +1,128 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: local-scsi
spec:
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: node-type
operator: In
values:
- {{.node_affinity_selector}}
capacity:
storage: 375Gi
accessModes:
- "ReadWriteOnce"
persistentVolumeReclaimPolicy: "Retain"
storageClassName: "local-storage"
local:
path: "/mnt/disks/ssd0"

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: "local-scsi"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
---
# Source: provisioner/templates/provisioner.yaml

apiVersion: v1
kind: ConfigMap

metadata:
name: local-provisioner-config
data:
useNodeNameOnly: "true"
storageClassMap: |
local-scsi:
hostDir: /mnt/disks
mountDir: /mnt/disks
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: local-volume-provisioner
namespace: {{.namespace}}
labels:
app: local-volume-provisioner
spec:
selector:
matchLabels:
app: local-volume-provisioner
template:
metadata:
labels:
app: local-volume-provisioner
spec:
serviceAccountName: local-storage-admin
containers:
- image: "quay.io/external_storage/local-volume-provisioner:v2.2.0"
imagePullPolicy: "Always"
name: provisioner
securityContext:
privileged: true
env:
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- mountPath: /etc/provisioner/config
name: provisioner-config
readOnly: true
- mountPath: /mnt/disks
name: local-scsi
volumes:
- name: provisioner-config
configMap:
name: local-provisioner-config
- name: local-scsi
hostPath:
path: /mnt/disks
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: local-storage-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: local-storage-provisioner-pv-binding
subjects:
- kind: ServiceAccount
name: local-storage-admin
namespace: {{.namespace}}
roleRef:
kind: ClusterRole
name: system:persistent-volume-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: local-storage-provisioner-node-clusterrole
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: local-storage-provisioner-node-binding
subjects:
- kind: ServiceAccount
name: local-storage-admin
namespace: {{.namespace}}
roleRef:
kind: ClusterRole
name: local-storage-provisioner-node-clusterrole
apiGroup: rbac.authorization.k8s.io
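
Once applied (step 3 of the README), the provisioner should create one PV per local SSD. A quick check, using the DaemonSet label above:

```
kubectl get pods -n mongodb -l app=local-volume-provisioner
kubectl get pv
```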