Recently started setting up an AWS EKS cluster. Everything was working fine, but I decided to look at the kubelet logs on one of the nodes and saw a bunch of errors like these:


Dec 01 09:42:52 ip-172-26-0-213.ec2.internal kubelet[4226]: E1009 09:42:52.335445 4226 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-26-0-213.ec2.internal": Unauthorized
Dec 01 10:03:54 ip-172-26-0-213.ec2.internal kubelet[4226]: E1009 10:03:54.831820 4226 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-26-0-213.ec2.internal": Unauthorized

Was worried there might be some kind of setup issue, so I started digging around. It took a couple of weeks to get an answer, but it turns out this isn't anything to worry about on EKS. Figured I'd post here to maybe save someone some troubleshooting time. Here's the official response from AWS regarding these errors:

The kubelet regularly reports node status to the Kubernetes API. When it does so, it needs an authentication token generated by aws-iam-authenticator. The kubelet invokes aws-iam-authenticator and stores the token in its global cache. On EKS this authentication token expires after 21 minutes.
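For context, the kubelet picks up aws-iam-authenticator through an exec credential plugin configured in its kubeconfig. A rough sketch of what that stanza looks like on an EKS node (the file path, server endpoint, cluster name, and exec API version here are illustrative and can differ between AMI versions):

```yaml
# Sketch of /var/lib/kubelet/kubeconfig on an EKS node (details illustrative)
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: kubelet
  user:
    exec:
      # kubelet runs this command to obtain a bearer token, then caches it
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: /usr/bin/aws-iam-authenticator
      args:
        - token
        - -i
        - my-cluster-name
contexts:
- name: kubelet
  context:
    cluster: kubernetes
    user: kubelet
current-context: kubelet
```

Every node-status report reuses the cached token from this plugin until the API rejects it.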

The kubelet doesn't understand token expiry times, so it attempts to reach the API using the token in its cache. When the API returns an Unauthorized response, a retry mechanism fetches a fresh token from aws-iam-authenticator and retries the request.
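So the Unauthorized lines are just the visible half of a cache-then-retry loop. A minimal sketch of that pattern (names and structure are illustrative, not the kubelet's actual code):

```python
class TokenCache:
    """Caches a token with no knowledge of its expiry time."""

    def __init__(self, fetch_token):
        self.fetch_token = fetch_token  # e.g. invokes aws-iam-authenticator
        self.token = None

    def get(self):
        # Reuse the cached token even if it has silently expired
        if self.token is None:
            self.token = self.fetch_token()
        return self.token

    def invalidate(self):
        self.token = None


def call_api(cache, request):
    """Try the cached token first; on 401, fetch a fresh token and retry."""
    status = request(cache.get())
    if status == 401:                   # Unauthorized: token likely expired
        cache.invalidate()              # drop the stale token
        status = request(cache.get())   # retry with a freshly fetched token
    return status
```

The first call with a stale token produces exactly the kind of logged error shown above; the retry then succeeds, which is why the errors are harmless.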