This repository was archived by the owner on Sep 21, 2021. It is now read-only.

standardize all k8s vanilla deployment #772

Merged
merged 2 commits into zalando:master from arnaud-deprez:feature/k8s-vanilla-standardization on Dec 6, 2018

Conversation

arnaud-deprez
Contributor

@arnaud-deprez arnaud-deprez commented Nov 18, 2018

Description

The purpose of this PR is to fix #750 by using a Helm template for all vanilla Kubernetes deployments.

Helm has become the standard package manager for Kubernetes, so it seems reasonable to support only that type of deployment.

Helm 2.x typically uses a server-side component called Tiller, which is responsible for creating your Kubernetes objects; as a consequence, it does not use the RBAC configuration set for the user but the one set for Tiller (more information on the Jenkins X website).
Until Helm 3 is GA, it is still possible to use Helm 2 without Tiller by simply applying the result of helm template, as shown below.
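Concretely, rendering the chart client-side and applying the output looks like this (the namespaced variant mirrors the test commands further down):

```sh
# Render the chart locally (no Tiller involved) and apply the result
helm template --name zalenium charts/zalenium | kubectl apply -f -

# The same, targeting a specific namespace
helm template --name zalenium charts/zalenium | kubectl -n zalenium apply -f -
```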

Motivation and Context

There are currently many documented ways to deploy Zalenium on vanilla Kubernetes, resulting in a maintenance overhead.
Moreover, some discrepancies between the deployment methods have been identified in the issue.

How Has This Been Tested?

  • OpenShift (Minishift) deployment with RBAC and PVs: helm template --name zalenium --set ingress.enabled=true --set ingress.hostname=hub-zalenium.$(minishift ip).nip.io --set rbac.create=true --set serviceAccount.create=true --set persistence.enabled=true charts/zalenium | kubectl -n myproject apply -f -
  • Kubernetes (Minikube) deployment with RBAC and PVs: helm template --name zalenium --set ingress.enabled=true --set ingress.hostname=hub-zalenium.$(minikube ip).nip.io --set rbac.create=true --set serviceAccount.create=true --set persistence.enabled=true charts/zalenium | kubectl -n zalenium apply -f -
  • Kubernetes (Minikube) deployment with RBAC, persistence and a security context: helm template --name zalenium --set ingress.enabled=true --set ingress.hostname=hub-zalenium.$(minikube ip).nip.io --set rbac.create=true --set serviceAccount.create=true --set persistence.enabled=true --set hub.securityContext.enabled=true charts/zalenium | kubectl -n zalenium apply -f -

Types of changes

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have read the CONTRIBUTING document.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

@codecov-io

codecov-io commented Nov 18, 2018

Codecov Report

Merging #772 into master will decrease coverage by 0.05%.
The diff coverage is n/a.

@@             Coverage Diff             @@
##             master    #772      +/-   ##
===========================================
- Coverage     62.55%   62.5%   -0.06%     
  Complexity      551     551              
===========================================
  Files            47      47              
  Lines          3456    3456              
  Branches        297     297              
===========================================
- Hits           2162    2160       -2     
- Misses         1101    1103       +2     
  Partials        193     193

@arnaud-deprez arnaud-deprez changed the title from "[WIP] standardize all k8s vanilla deployment" to "standardize all k8s vanilla deployment" on Nov 18, 2018
@arnaud-deprez
Contributor Author

arnaud-deprez commented Nov 19, 2018

Hi @diemol @pearj,

The PR is finally ready to be merged.
FYI, I initially tried to upgrade to the latest Minikube and Kubernetes, but the tests were failing, so I left that for a possible future PR.

IMO, it works just fine like that, but if you have a way to reproduce #631 I can try; in both Minikube and Minishift, I can see the dashboard and the videos without setting any security context.
However, using fsGroup=0 might be necessary on some cloud providers.
But then I'm wondering whether the Pods instantiated by the Hub shouldn't inherit this security context too.
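For reference, the setting under discussion is a pod-level securityContext; a minimal sketch (not the chart's actual values) would be:

```yaml
# Minimal sketch: fsGroup makes mounted volumes group-owned by GID 0,
# so a non-root container user can still write to them
securityContext:
  fsGroup: 0
```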

@diemol
Contributor

diemol commented Nov 22, 2018

Hi @arnaud-deprez,

Thanks for taking the time to put this together. In general, this looks good to me as it simplifies the docs and maintenance. I just wonder if it makes sense to include the command examples you posted above for starting it in OpenShift and k8s. Also, if everyone is using Helm now... I don't have a k8s cluster to try it on, by the way.

What do you think @pearj?

@arnaud-deprez
Contributor Author

arnaud-deprez commented Nov 22, 2018

Hi @diemol,

This is indeed the same command for Kubernetes and OpenShift, so I don't plan separate documentation for this.
But as I said, setting the securityContext with helm template --name zalenium --set hub.securityContext.enabled=true might be required in some vanilla k8s setups to overcome #631 (sorry if this wasn't clear).

As I also said, it works fine without that securityContext set on both Minikube and Minishift, and I can see the dashboard and videos, so I don't know how I should validate against #631 to be 100% sure that the initContainer can be removed.

I hope it is now clearer for you guys :-)

@diemol
Contributor

diemol commented Nov 23, 2018

Ah ok, thanks, got it. Let's wait for @pearj's opinion.

@rodor87

rodor87 commented Nov 23, 2018

Would it make sense to add this to the official Helm charts repository? I see there's a chart for Selenium Grid, but none for Zalenium:

https://github.com/helm/charts/tree/master/stable/selenium

@arnaud-deprez
Contributor Author

Hi @rodor87,

Indeed, that was part of my initial issue #750.
My idea was to first propose the change here and then eventually decide to migrate to the Helm charts repo.

IMO, we have to decide based on the following pros/cons:

pros:

  • direct integration with Helm and so with [Helm Hub](https://hub.kubeapps.com/)
  • better visibility for the project

cons:

  • if we need to modify both the code and the Helm template, it's harder to review the whole thing

@rodor87

rodor87 commented Nov 25, 2018

@arnaud-deprez Yes, that's what I meant as well. It makes sense to make the change here first, cut a new release, and as part of a new PR migrate to the official Helm charts.

@pearj
Collaborator

pearj commented Nov 26, 2018

@arnaud-deprez what are your thoughts on supporting the OpenShift DeploymentConfig? When using OpenShift I prefer DeploymentConfig over Deployment, especially for development of Zalenium.

I think the template section of deployment.yaml can be common between a DeploymentConfig and a Deployment. Supporting this would also allow us to add a BuildConfig, which would make it possible to have one command to set up a build environment for Zalenium inside OpenShift.

Another thing that I think would be worthwhile is to move all of the args to environment variables instead, like what I did with the OpenShift template. For OpenShift at least, it is much easier to edit environment variables than it is to edit arguments.

env:
- name: ZALENIUM_KUBERNETES_CPU_REQUEST
  value: ${SELENIUM_KUBERNETES_CPU_REQUEST}
- name: ZALENIUM_KUBERNETES_CPU_LIMIT
  value: ${SELENIUM_KUBERNETES_CPU_LIMIT}
- name: ZALENIUM_KUBERNETES_MEMORY_REQUEST
  value: ${SELENIUM_KUBERNETES_MEMORY_REQUEST}
- name: ZALENIUM_KUBERNETES_MEMORY_LIMIT
  value: ${SELENIUM_KUBERNETES_MEMORY_LIMIT}
- name: GRID_USER
  valueFrom:
    secretKeyRef:
      key: zalenium-user
      name: zalenium
- name: GRID_PASSWORD
  valueFrom:
    secretKeyRef:
      key: zalenium-password
      name: zalenium
- name: DESIRED_CONTAINERS
  value: ${DESIRED_CONTAINERS}
- name: MAX_DOCKER_SELENIUM_CONTAINERS
  value: ${MAX_DOCKER_SELENIUM_CONTAINERS}
- name: MAX_TEST_SESSIONS
  value: ${MAX_TEST_SESSIONS}
- name: SEND_ANONYMOUS_USAGE_INFO
  value: ${SEND_ANONYMOUS_USAGE_INFO}
- name: SELENIUM_IMAGE_NAME
  value: ${SELENIUM_IMAGE_NAME}
- name: VIDEO_RECORDING_ENABLED
  value: ${VIDEO_RECORDING_ENABLED}

In zalenium.sh you can see that the environment variables are honoured:

CONTAINER_NAME="zalenium"
SELENIUM_IMAGE_NAME=${SELENIUM_IMAGE_NAME:-"elgalu/selenium"}
MAX_TEST_SESSIONS=${MAX_TEST_SESSIONS:-1}
CHROME_CONTAINERS=1
FIREFOX_CONTAINERS=1
DESIRED_CONTAINERS=${DESIRED_CONTAINERS:-2}
MAX_DOCKER_SELENIUM_CONTAINERS=${MAX_DOCKER_SELENIUM_CONTAINERS:-10}
ZALENIUM_ARTIFACT="$(pwd)/${project.build.finalName}.jar"
DEPRECATED_PARAMETERS=false
SAUCE_LABS_ENABLED=${SAUCE_LABS_ENABLED:-false}
BROWSER_STACK_ENABLED=${BROWSER_STACK_ENABLED:-false}
TESTINGBOT_ENABLED=${TESTINGBOT_ENABLED:-false}
VIDEO_RECORDING_ENABLED=${VIDEO_RECORDING_ENABLED:-true}
SCREEN_WIDTH=${SCREEN_WIDTH:-1920}
SCREEN_HEIGHT=${SCREEN_HEIGHT:-1080}
TZ=${TZ:-"Europe/Berlin"}
SEND_ANONYMOUS_USAGE_INFO=${SEND_ANONYMOUS_USAGE_INFO:-true}
START_TUNNEL=${START_TUNNEL:-false}
DEBUG_ENABLED=${DEBUG_ENABLED:-false}
KEEP_ONLY_FAILED_TESTS=${KEEP_ONLY_FAILED_TESTS:-false}
RETENTION_PERIOD=${RETENTION_PERIOD:-3}
LOG_JSON=${LOG_JSON:-false}
LOGBACK_PATH=${LOGBACK_PATH:-logback.xml}
NEW_SESSION_WAIT_TIMEOUT=${NEW_SESSION_WAIT_TIMEOUT:-600000}

Another thing that I think is worth keeping from the OpenShift template is using secrets for the grid username and password, then injecting the secret into the environment variables:

- name: GRID_USER
  valueFrom:
    secretKeyRef:
      key: zalenium-user
      name: zalenium
- name: GRID_PASSWORD
  valueFrom:
    secretKeyRef:
      key: zalenium-password
      name: zalenium

See: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
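For completeness, such a secret can be created up front with kubectl; a sketch using the key names from the snippet above (the literal values are placeholders):

```sh
# Create the 'zalenium' secret holding the grid credentials
kubectl create secret generic zalenium \
  --from-literal=zalenium-user=<user> \
  --from-literal=zalenium-password=<password>
```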

@arnaud-deprez
Contributor Author

Hi @pearj, thank you for your feedback.

I fully agree with you on the environment variables vs args and on the secret.

Regarding the OpenShift vs Kubernetes deployment point, my aim was to avoid vendor lock-in and so support the Kubernetes deployment first.
But it should indeed be possible to use DeploymentConfig and BuildConfig without too much pain.
The main difference I see is the triggers options (see the sketch after this list):

  • optional use of an ImageStreamTag trigger.
  • optional ConfigChange trigger (if enabled, the behaviour is more or less equivalent to a Kubernetes Deployment).
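For context, those triggers live in the DeploymentConfig spec; a sketch of the standard OpenShift fields (the ImageStreamTag name is illustrative, not from this PR):

```yaml
triggers:
- type: ConfigChange        # redeploy on spec changes, roughly like a k8s Deployment
- type: ImageChange         # redeploy when the tracked image tag is updated
  imageChangeParams:
    automatic: true
    containerNames:
    - zalenium
    from:
      kind: ImageStreamTag
      name: zalenium:latest
```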

I'll see what I can do and come back with a new proposal, but I'm also quite busy right now.

If in the meantime you see other adaptations, please shout.

This also improves:
- add an optional securityContext for non-OpenShift clusters
- use fine-grained roles
@arnaud-deprez
Contributor Author

Hi,

Please find the requested changes.

Remarks

  • I put the common PodTemplateSpec section in the _.pod-template.yaml file. Because its name starts with _, Helm does not generate output for it (see the sketch after this list).
  • I have updated the documentation accordingly.
  • I have also tried to refactor the old documentation to use Helm everywhere it previously referred to docs/k8s/**. I didn't find a way to preview this new doc, so if you can please double-check this part with me, it would be appreciated :-).
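To illustrate the underscore-file mechanism: Helm treats files whose names start with _ as partials, rendering them only where they are included. A minimal sketch (the define name and contents are illustrative, not the PR's exact template):

```yaml
# charts/zalenium/templates/_.pod-template.yaml -- produces no output by itself
{{- define "zalenium.podTemplate" -}}
metadata:
  labels:
    app: zalenium
spec:
  containers:
  - name: zalenium
    image: dosel/zalenium
{{- end -}}
```

```yaml
# charts/zalenium/templates/deployment.yaml -- a Deployment (or a
# DeploymentConfig) can reuse the shared pod template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zalenium
spec:
  template:
{{ include "zalenium.podTemplate" . | indent 4 }}
```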

Cheers,

@pearj
Collaborator

@pearj pearj left a comment


Thanks for making these changes. I'll get it merged to a branch where we can run the full Travis suite.

@pearj
Collaborator

pearj commented Dec 6, 2018

Actually, I noticed it already used Helm in Travis, so I'll just merge it.

@pearj pearj merged commit ab74519 into zalando:master Dec 6, 2018
@arnaud-deprez
Contributor Author

Thank you :-)

@diemol
Contributor

diemol commented Dec 6, 2018

Actually, thank you @arnaud-deprez for your time and putting this PR together!

@arnaud-deprez arnaud-deprez deleted the feature/k8s-vanilla-standardization branch January 11, 2019 11:04