
does not work on clusters with Pod Security Policies #32

Closed
alban opened this issue Jan 14, 2019 · 6 comments

alban (Contributor) commented Jan 14, 2019

Symptoms:

  • the job is created successfully
  • its status stays empty (status: {})
  • no pods get created
  • I can see the following event with kubectl get events -n default:

3s Warning FailedCreate Job Error creating: pods "kubectl-trace-603733c8-17e6-11e9-aec8-c85b763781a4-" is forbidden: unable to validate against any pod security policy: [spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]

kubectl-trace should use a specific ServiceAccount, and its documentation should explain how to install that ServiceAccount and its associated PSP.
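For illustration, a minimal sketch of what such a PSP could look like (the name kubectl-trace and the exact allowances are just guesses on my part, not something the project ships; the policy only needs to allow hostPID, hostPath volumes and privileged containers, which are what the errors above complain about):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: kubectl-trace            # hypothetical name, adjust to your cluster's conventions
spec:
  privileged: true               # the trace container runs privileged
  hostPID: true                  # needed to see host processes
  volumes:
  - hostPath                     # the job mounts host paths
  - configMap
  - emptyDir
  - secret
  allowedHostPaths:
  - pathPrefix: "/"
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny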

alban (Contributor, Author) commented Jan 14, 2019

As a quick workaround for testing, I applied the following:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubectltrace
roleRef:
  kind: ClusterRole
  name: privileged-psp
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: default
  namespace: default

and then the pod got started correctly on my cluster.
But for a correct fix, we should not use the default service account.
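For completeness, a dedicated account could be bound the same way; a minimal sketch, assuming the account is called kubectl-trace (any name works) and the privileged-psp ClusterRole from above exists in the cluster:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubectl-trace       # dedicated account instead of default
  namespace: default
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubectltrace
roleRef:
  kind: ClusterRole
  name: privileged-psp
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: kubectl-trace       # bind the dedicated account, not default
  namespace: default

The trace job would then have to set serviceAccountName: kubectl-trace in its pod template.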

fntlnz (Member) commented Jan 14, 2019

Good catch @alban !
What do you think about adding an option that lets users choose which service account to use?

Another option could be to create the ServiceAccount, Role and RoleBinding while creating the trace, as we already do for the Job and ConfigMap, and then just use the created ServiceAccount.

Or we could have three modes:

  • default: like it is now
  • custom service account: the user specifies a service account with the right privileges
  • auto: a ServiceAccount, Role and RoleBinding are created for the user, namespace scoped (a rough sketch is below)
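For the auto mode, the objects created alongside the Job and ConfigMap could look roughly like this, namespace scoped (the names are placeholders, a per-trace ServiceAccount would be created the same way, and privileged-psp stands in for whatever PSP the cluster defines):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubectl-trace-<trace-id>        # placeholder, one per trace like the Job
  namespace: default
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
  resourceNames: ["privileged-psp"]     # cluster-specific PSP name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubectl-trace-<trace-id>
  namespace: default
roleRef:
  kind: Role
  name: kubectl-trace-<trace-id>
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: kubectl-trace-<trace-id>        # the ServiceAccount created for this trace
  namespace: default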

alban (Contributor, Author) commented Jan 14, 2019

@fntlnz I think I prefer the option with a custom service account, plus documentation on how to create the ClusterRole and ClusterRoleBinding, because those could differ from one cluster to another. In my snippet I use privileged-psp because that's the name of the PSP defined in my cluster, but it could have a different name in another cluster, or not exist at all.
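For reference, the privileged-psp ClusterRole referenced in my snippet is roughly something like this (assuming the PSP in the cluster is also called privileged-psp; both names would have to be adjusted per cluster):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged-psp
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
  resourceNames: ["privileged-psp"]   # replace with the PSP name defined in your cluster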

fntlnz (Member) commented Jan 16, 2019

Ok so the two possible scenarios would be:

  • No service account specified: use default, because there's no specific security policy to follow
  • Service account name passed as a parameter: the service account is set up by the user and then used by kubectl-trace

@alban

alban (Contributor, Author) commented Jan 16, 2019

@fntlnz yes, that looks good! The next thing to implement could be a useful error message that hints at the service account parameter when pod creation fails. Not sure how to detect that...

fntlnz (Member) commented Jan 16, 2019

Cool, would you mind opening an issue on that to explore it a bit, @alban?
