does not work on clusters with Pod Security Policies #32
As a quick workaround for testing, I applied the following:
and then the pod got started correctly on my cluster.
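The original snippet is not shown above, but a workaround along these lines would grant the pod's ServiceAccount permission to `use` a permissive PodSecurityPolicy (all names here are hypothetical, and this assumes a PSP called `privileged` already exists in the cluster):

```yaml
# Allow the default ServiceAccount to use an existing permissive PSP.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubectl-trace-psp
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["privileged"]   # assumed pre-existing PSP
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubectl-trace-psp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubectl-trace-psp
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
```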
Good catch @alban! Another option could be to create the ServiceAccount, Role and RoleBinding while creating the trace, as we already do for the Job and ConfigMap, and then just use the created ServiceAccount. Or we could have three modes:
@fntlnz I think I prefer the option with a custom service account, plus documenting how to create the ClusterRole and ClusterRoleBinding, because those could potentially differ from one cluster to another. In my snippet, I use
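A dedicated ServiceAccount for the trace pods could be sketched as follows (names are hypothetical, and the referenced ClusterRole is assumed to grant `use` on the required PSP):

```yaml
# Dedicated ServiceAccount for kubectl-trace pods.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubectl-trace
  namespace: default
---
# Bind it to a cluster-specific ClusterRole that permits "use"
# of whatever PSP the cluster administrator has prepared.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubectl-trace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubectl-trace-psp   # hypothetical, created by the cluster admin
subjects:
- kind: ServiceAccount
  name: kubectl-trace
  namespace: default
```

Keeping the ClusterRole out of the tool itself matches the point above: which PSP exists, and what it allows, varies from one cluster to another.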
Ok so the two possible scenarios would be:
@fntlnz yes, that looks good! The next thing to implement could be something to get a useful error message that hints at the service account parameter when it's not working. Not sure how to detect that...
Cool, would you mind opening an issue on that to explore it a bit, @alban?
Symptoms:

- the trace pod never starts and its status stays empty (`status: {}`)
- `kubectl get events -n default` shows the error
kubectl-trace should use a specific ServiceAccount, and its documentation should explain how to install that ServiceAccount and its associated PSP.
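The PSP shipped alongside that ServiceAccount would need to allow what the trace pods actually require (privileged mode, host PID, and hostPath mounts for kernel sources). A sketch, with hypothetical names:

```yaml
# Minimal PSP sketch for kubectl-trace pods; review before using in production.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: kubectl-trace
spec:
  privileged: true        # bpftrace needs a privileged container
  hostPID: true           # traces target processes on the node
  volumes: ["*"]          # hostPath mounts for /sys and kernel headers
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```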