[AWS Fargate] Fetching in-cluster config fails from Fargate container. #19
Comments
@ssudake21 will respond as soon as I have a clear path forward here!
Sure @d-nishi. Thanks for the acknowledgement.
@ssudake21 @d-nishi This is likely due to a current limitation of ECS and Fargate. I believe we either need to implement sidecar support in the Fargate provider, wait until Fargate/ECS implements something like "pre-populated volume mounts", or wait for EFS volume mounts to be supported on Fargate.

The in-cluster configuration, AFAIK, works by mounting the service account token at a specific path, where client-go discovers it. Neither Fargate nor ECS has a notion of pre-populated volumes, nor of the relevant Kubernetes features like ConfigMaps and Secrets and the ability to mount them as container volumes. Therefore the Fargate provider has no straightforward way to mount service account tokens into containers.

@d-nishi Do you think a feature along the lines of "pre-populated volumes", or ConfigMaps/Secrets with volume-mount support, is on the ECS/Fargate roadmap? If so, we should wait for that to land. Otherwise, my best bet is to add sidecar support like @lizrice recently attempted in virtual-kubelet/virtual-kubelet#484. I believe Fargate supports local volume sharing across containers within a task. With sidecar support, we could implement a configmap/secret-writer sidecar that watches Kubernetes ConfigMaps/Secrets and writes them to files on the shared local volume. Applying this to the service account token secrets would solve this specific issue, along with bringing all the benefits of ConfigMap/Secret support to the Fargate provider.

One thing to consider is how to allow the sidecar itself to authenticate against the K8s API, given that it can't rely on the service account token (chicken-and-egg!). For that, I guess we can just add a support for
I did get virtual-kubelet/virtual-kubelet#484 working in my own fork, at least for what I was trying to achieve, but that PR was inadvertent, as it seems pretty hacky to me and I didn't really think anyone else would want it! But let me know if it would be useful for me to push any of it (or part of it) here.
Wouldn't it be possible to make this push-based instead of pull-based? E.g. have the VK itself write the secret/configmap content to shared storage (say S3) and then have a Fargate sidecar poll it into a local volume. This would at least avoid a per-task API client and might be more resilient against K8s master hiccups.
That sounds like a great idea! The only challenge would be how we can safely open up S3 bucket access to the sidecar alone. Altering the task role won't be an option. Perhaps we can teach the Fargate provider to automatically update the bucket policy to accept access from the pod on start, and to decline access on pod stop?
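The sidecar half of this push-based design is essentially a poll loop. A minimal sketch, with the storage and volume I/O injected as functions so it stays self-contained (`pollIntoVolume` and both callbacks are hypothetical names; in practice `fetch` would be an S3 `GetObject` call and `write` would target the shared task volume):

```go
package main

import (
	"fmt"
	"time"
)

// pollIntoVolume repeatedly fetches the pushed secret/configmap content
// and writes it to the task-local shared volume. Sketch only: rounds is
// bounded here for demonstration; a real sidecar would loop forever.
func pollIntoVolume(fetch func() ([]byte, error), write func([]byte) error, interval time.Duration, rounds int) error {
	for i := 0; i < rounds; i++ {
		data, err := fetch()
		if err == nil {
			// Tolerate transient fetch errors; only writing failures
			// are fatal, since they leave the volume inconsistent.
			if err := write(data); err != nil {
				return err
			}
		}
		time.Sleep(interval)
	}
	return nil
}

func main() {
	var got []byte
	_ = pollIntoVolume(
		func() ([]byte, error) { return []byte("token-from-s3"), nil }, // stand-in for S3 GetObject
		func(b []byte) error { got = b; return nil },                   // stand-in for volume write
		time.Millisecond, 3,
	)
	fmt.Println(string(got)) // prints "token-from-s3"
}
```

Tolerating fetch errors but not write errors matches the resiliency argument above: a K8s master or S3 hiccup just delays the next refresh instead of killing the sidecar.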
Environment summary
Provider - Fargate
Version - Latest 18adde2aca4ebe72aee1c0320e9affad218e1933
K8s Master Info - Cluster created with Kops on AWS
Install Method (e.g. Helm Chart) - Manual. Followed the steps in https://aws.amazon.com/blogs/opensource/aws-fargate-virtual-kubelet/
Issue Details
I am trying to run a pod which uses the in-cluster configuration to create a Kubernetes Go client. All the details are present in the pod spec, but it looks like the AWS Fargate provider completely ignores the service account specified in the pod. It only parses the pod's
spec.containers
I get the following error in the pod log,
The error comes from https://github.com/kubernetes-client/go/blob/78199cc914eead8a64d1eb11061bf4a031b63a1e/kubernetes/config/incluster_config.go#L45
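For context, a stdlib-only sketch of roughly what the in-cluster config code checks before it can build a client (the function name `inClusterCheck` is mine; the env var names and token path are the standard ones client-go uses). If either the API server env vars or the mounted token are missing, as in the Fargate container here, configuration fails:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// inClusterCheck approximates the preconditions of in-cluster config:
// the kubelet normally injects these env vars and mounts the token.
func inClusterCheck() error {
	host := os.Getenv("KUBERNETES_SERVICE_HOST")
	port := os.Getenv("KUBERNETES_SERVICE_PORT")
	if host == "" || port == "" {
		return errors.New("unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined")
	}
	// The service account token client-go reads for authentication.
	if _, err := os.Stat("/var/run/secrets/kubernetes.io/serviceaccount/token"); err != nil {
		return fmt.Errorf("service account token not mounted: %w", err)
	}
	return nil
}

func main() {
	if err := inClusterCheck(); err != nil {
		fmt.Println("error:", err)
	}
}
```

Run outside a cluster (or in a Fargate task without the token mount), this reports the same class of failure as the pod log above.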
Repro Steps
Try to run the Kubernetes client-go in-cluster example:
https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration