
kubelet doesn't seem to be using correct user when registering node

When kubelet tries to start on my Kubernetes worker nodes, I'm getting messages like this in the system log:

May 25 19:43:57 ip-10-240-0-223 kubelet[4882]: I0525 19:43:57.627389    4882 kubelet_node_status.go:82] Attempting to register node worker-1
May 25 19:43:57 ip-10-240-0-223 kubelet[4882]: E0525 19:43:57.628967    4882 kubelet_node_status.go:106] Unable to register node "worker-1" with API server: nodes is forbidden: User "system:node:" cannot create nodes at the cluster scope: unknown node for user "system:node:"
May 25 19:43:58 ip-10-240-0-223 kubelet[4882]: E0525 19:43:58.256557    4882 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: services is forbidden: User "system:node:" cannot list services at the cluster scope: unknown node for user "system:node:"
May 25 19:43:58 ip-10-240-0-223 kubelet[4882]: E0525 19:43:58.257381    4882 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:node:" cannot list pods at the cluster scope: unknown node for user "system:node:"

If I'm reading these correctly, the problem is that the node is using the username system:node: when connecting to the API server rather than system:node:worker-1. But as far as I can tell, it should be using a worker-specific one. Here's my kubeconfig (with private stuff elided):

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [elided]
    server: https://[elided]:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: system:node:worker-1
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: system:node:worker-1
  user:
    client-certificate-data:  [elided]
    client-key-data:  [elided]

I was under the impression that the users specified there were the ones used when contacting the API, but clearly I'm wrong. Is there somewhere else I've missed out a reference to worker-1?

I'm following the Kubernetes the Hard Way tutorial, but adjusting it for AWS as I go, so this problem is almost certainly a mistake I made when adjusting the config files. If there are any other config files that I should provide to make this easier/possible to debug, please do let me know.

The server determines the user from the CN of the certificate. Check the script that generated the certificate: it likely had an unset variable when it built the CN in the form CN=system:node:$NODE.
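A minimal shell sketch of how that failure mode arises (the variable name NODE here is illustrative, not necessarily what your script used):

```shell
# If the variable holding the node name is unset or empty, shell expansion
# silently produces a truncated CN -- exactly the "system:node:" username
# showing up in the kubelet logs above.
unset NODE
CN="system:node:${NODE}"
echo "cn=${CN}"          # prints: cn=system:node:
```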

The current "Kubernetes-The-Hard-Way" uses Node authorization, so ensure your kubelet x509 certificate contains

Subject: CN=system:node:worker-1, O=system:nodes 
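A hedged sketch of regenerating the kubelet key and CSR with that subject using plain openssl (the tutorial itself uses cfssl, and the file names here are illustrative):

```shell
NODE=worker-1

# Generate a fresh private key for the kubelet (illustrative file name).
openssl genrsa -out "${NODE}-key.pem" 2048

# Create a CSR whose subject carries the node identity that Node
# authorization expects: CN names the node, O places it in system:nodes.
openssl req -new -key "${NODE}-key.pem" -out "${NODE}.csr" \
  -subj "/O=system:nodes/CN=system:node:${NODE}"

# Sanity-check the subject before having your CA sign it.
openssl req -in "${NODE}.csr" -noout -subject
```

After your CA signs the CSR, regenerate the kubeconfig so its user entry and the certificate's CN agree, then restart the kubelet.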

Also double-check that your API server has these options:

--authorization-mode=Node,RBAC
--enable-admission-plugins=...,NodeRestriction,...

otherwise the node won't be able to auto-register with the API server.

You can check your x509 certificate with

openssl x509 -in /var/lib/kubelet/${HOSTNAME}.pem -noout -text
