Kubernetes Dashboard CrashLoopBackOff: timeout error on Raspberry Pi cluster

This should be a simple task: I just want to run the Kubernetes Dashboard on a clean install of Kubernetes on a Raspberry Pi cluster.

What I've done:

  • Set up the initial cluster (hostname, static IP, cgroups, swap space, installed and configured Docker, installed Kubernetes, set up the Kubernetes network, and joined the nodes)
  • Installed flannel
  • Applied the dashboard manifest
  • A bunch of ad-hoc testing trying to figure this out
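For reference, the setup steps above roughly correspond to the following commands. This is a sketch, not the exact commands used in the original post; the manifest URLs and pod-network CIDR are assumptions based on the versions visible in the output below (flannel, dashboard v2.4.0):

```shell
# Initialize the control plane; 10.244.0.0/16 is flannel's default pod CIDR
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the flannel CNI plugin
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Deploy the dashboard (v2.4.0 matches the image in the pod spec below)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

# On each worker node, join using the token printed by kubeadm init:
# sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```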

Obviously, as seen below, the container in the dashboard pod is crashing because it cannot reach the kubernetes-dashboard-csrf secret. I have no idea why it cannot be accessed; my only thought is that I missed a step when setting up the cluster. I've followed about six different guides without success, prioritizing the official one. I've also seen quite a few people with the same or similar issues, but most never posted a resolution. Thanks!

Nodes: kubectl get nodes

NAME      STATUS   ROLES                  AGE    VERSION
gus3      Ready    <none>                 346d   v1.23.1
juliet3   Ready    <none>                 346d   v1.23.1
shawn4    Ready    <none>                 346d   v1.23.1
vick4     Ready    control-plane,master   346d   v1.23.1

All Pods: kubectl get pods --all-namespaces

NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
kube-system            coredns-74ff55c5b-7j2xg                      1/1     Running            27         346d
kube-system            coredns-74ff55c5b-cb2x8                      1/1     Running            27         346d
kube-system            etcd-vick4                                   1/1     Running            2          169m
kube-system            kube-apiserver-vick4                         1/1     Running            2          169m
kube-system            kube-controller-manager-vick4                1/1     Running            2          169m
kube-system            kube-flannel-ds-gclmp                        1/1     Running            0          11m
kube-system            kube-flannel-ds-hshjv                        1/1     Running            0          12m
kube-system            kube-flannel-ds-kdd4w                        1/1     Running            0          11m
kube-system            kube-flannel-ds-wzhkt                        1/1     Running            0          10m
kube-system            kube-proxy-4t25v                             1/1     Running            26         346d
kube-system            kube-proxy-b6vbx                             1/1     Running            26         346d
kube-system            kube-proxy-jgj4s                             1/1     Running            27         346d
kube-system            kube-proxy-n65sl                             1/1     Running            26         346d
kube-system            kube-scheduler-vick4                         1/1     Running            2          169m
kubernetes-dashboard   dashboard-metrics-scraper-5b8896d7fc-99wfk   1/1     Running            0          77m
kubernetes-dashboard   kubernetes-dashboard-897c7599f-qss5p         0/1     CrashLoopBackOff   18         77m

Resources: kubectl get all -n kubernetes-dashboard

NAME                                             READY   STATUS             RESTARTS   AGE
pod/dashboard-metrics-scraper-5b8896d7fc-99wfk   1/1     Running            0          79m
pod/kubernetes-dashboard-897c7599f-qss5p         0/1     CrashLoopBackOff   19         79m

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   172.20.0.191   <none>        8000/TCP   79m
service/kubernetes-dashboard        ClusterIP   172.20.0.15    <none>        443/TCP    79m

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           79m
deployment.apps/kubernetes-dashboard        0/1     1            0           79m

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-5b8896d7fc   1         1         1       79m
replicaset.apps/kubernetes-dashboard-897c7599f         1         1         0       79m

Notice the CrashLoopBackOff

Pod Details: kubectl describe pods kubernetes-dashboard-897c7599f-qss5p -n kubernetes-dashboard

Name:         kubernetes-dashboard-897c7599f-qss5p
Namespace:    kubernetes-dashboard
Priority:     0
Node:         shawn4/192.168.10.71
Start Time:   Fri, 17 Dec 2021 18:52:15 +0000
Labels:       k8s-app=kubernetes-dashboard
              pod-template-hash=897c7599f
Annotations:  <none>
Status:       Running
IP:           172.19.1.75
IPs:
  IP:           172.19.1.75
Controlled By:  ReplicaSet/kubernetes-dashboard-897c7599f
Containers:
  kubernetes-dashboard:
    Container ID:  docker://894a354e40ca1a95885e149dcd75415e0f186ead3f2e05ec0787f4b1c7a29622
    Image:         kubernetesui/dashboard:v2.4.0
    Image ID:      docker-pullable://kubernetesui/dashboard@sha256:526850ae4ea9aba360e72b6df69fd3126b129d446efe83ac5250282b85f95b7f
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Fri, 17 Dec 2021 20:10:19 +0000
      Finished:     Fri, 17 Dec 2021 20:10:49 +0000
    Ready:          False
    Restart Count:  19
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-wq9m8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-wq9m8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-wq9m8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                  From     Message
  ----     ------   ----                 ----     -------
  Warning  BackOff  21s (x327 over 79m)  kubelet  Back-off restarting failed container

Logs: kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-897c7599f-qss5p

2021/12/17 20:10:19 Starting overwatch
2021/12/17 20:10:19 Using namespace: kubernetes-dashboard
2021/12/17 20:10:19 Using in-cluster config to connect to apiserver
2021/12/17 20:10:19 Using secret token for csrf signing
2021/12/17 20:10:19 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://172.20.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 172.20.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0x400055fae8)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x350
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0x40001fc080)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:502 +0x8c
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x40001fc080)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:470 +0x40
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:551
main.main()
        /home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x1dc
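The panic shows the dashboard container timing out while dialing the apiserver's service ClusterIP (172.20.0.1:443), which usually points to a pod-network or kube-proxy problem rather than a dashboard bug. One way to confirm this is to test the same connection from a throwaway pod. This is a sketch of my own; the nettest pod and curl image are not from the original post:

```shell
# From a temporary pod, try to reach the apiserver service IP seen in the panic;
# a timeout here confirms the pod network cannot reach the service CIDR
kubectl run nettest --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k -m 5 https://172.20.0.1/version

# On each node, check that kube-proxy actually programmed the service rules
sudo iptables-save | grep 172.20.0.1
```

If the curl times out from inside a pod but the apiserver is reachable from the nodes themselves, the CNI plugin or kube-proxy configuration is the place to look.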

If you need any more information, please ask!

UPDATE 12/29/21: Fixed this issue by reinstalling the cluster with the newest versions of Kubernetes and Ubuntu.

It turned out there were several issues:

  • I was running on Debian Buster, which is deprecated
  • My client and server Kubernetes versions were out of sync by a few minor releases
  • I was following outdated instructions
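The version skew mentioned above is easy to check; kubectl is only supported within one minor version of the apiserver. A sketch of the check and fix (the `1.23.1-00` version string is illustrative, not taken from the original post):

```shell
# Compare client and server versions; they should be within one minor version
kubectl version --short

# On Debian/Ubuntu, install matching versions and pin them so apt upgrades
# cannot reintroduce the skew
sudo apt-get install -y kubelet=1.23.1-00 kubeadm=1.23.1-00 kubectl=1.23.1-00
sudo apt-mark hold kubelet kubeadm kubectl
```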

I reinstalled the whole cluster following the official Kubernetes guide and, with a few snags along the way, it works!
