Getting the following error from the EKS console:

Your worker nodes do not have access to the cluster. Verify if the node instance role is present and correctly configured in the aws-auth ConfigMap.

The node IAM role ARN is:

arn:aws:iam::311022431024:role/StreamNative/sn-rxrevu-prod-us-east-120211109000743895600000009

But the aws-auth ConfigMap contains a different role ARN; the two ARNs don't match.
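To see what the cluster actually has, you can dump the aws-auth ConfigMap and compare it against the role ARN the EKS API registered for the node group. A minimal sketch, assuming kubectl is pointed at this cluster; the cluster and node group names are placeholders:

```sh
# What the cluster itself trusts: the rolearn entries under data.mapRoles
kubectl -n kube-system get configmap aws-auth -o yaml

# What the EKS API thinks the node group uses (substitute your names)
aws eks describe-nodegroup \
  --cluster-name <cluster> \
  --nodegroup-name <nodegroup> \
  --query 'nodegroup.nodeRole' --output text
```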
The problem is that the EKS API reports on the external configuration state of the cluster, i.e. the parameters contained in our API request to create an EKS node group. (As a refresher: you create an EKS cluster first, then create the node group(s) second; they are separate API actions.)
The issue is that the IAM role we told the EKS API our node group would be using contains a path, but due to a problem with the IAM integration in EKS, IAM roles are only recognized in the aws-auth ConfigMap if they don't contain a path. See kubernetes-sigs/aws-iam-authenticator#153.
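To illustrate the mismatch (a sketch; the instance ID in the sample output is hypothetical): when a node assumes its role, STS reports an assumed-role ARN that drops the IAM path entirely, and aws-iam-authenticator maps that back to a path-less role ARN. So the aws-auth entry has to use the path-less form, while the EKS console keeps comparing against the path-qualified ARN from the node group:

```sh
# Run from a worker node (or any session using the node role):
aws sts get-caller-identity --query Arn --output text
# => arn:aws:sts::311022431024:assumed-role/sn-rxrevu-prod-us-east-120211109000743895600000009/i-0123456789abcdef0
#
# aws-iam-authenticator canonicalizes that to a role ARN *without* the path:
#   arn:aws:iam::311022431024:role/sn-rxrevu-prod-us-east-120211109000743895600000009
# which is what aws-auth must contain, while the EKS health check compares
# against the path-qualified ARN registered for the node group:
#   arn:aws:iam::311022431024:role/StreamNative/sn-rxrevu-prod-us-east-120211109000743895600000009
```

So the path-less entry in aws-auth is the one that actually authenticates the nodes, and the console warning is a false positive from that comparison.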
And for a bit of background: we recently added paths to our IAM resources to improve how we're scoping our vendor access in customers' managed accounts.
So TL;DR: Don’t trust the EKS API for the cluster status! Check the cluster itself…
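For example (assuming kubectl access), if the nodes show up Ready, they are authenticating through aws-auth just fine, regardless of what the console health check says:

```sh
kubectl get nodes -o wide
```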