
K8s pod unable to connect to scheduler

I am following https://kubernetes.dask.org/en/latest/ to run a Dask array on a Kubernetes cluster.

Steps:

  1. Installed Kubernetes on 3 nodes (1 master and 2 workers).
  2. Installed miniconda3.
  3. pip install dask-kubernetes
  4. Created dask_example.py with the code to run a Dask array (same as the example given at the link).
  5. Created worker-spec.yml with the pod configuration (same as the example given at the link; a rough sketch follows this list).
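For reference, worker-spec.yml looks roughly like the sketch below, following the dask-kubernetes docs example. The image, resource values and EXTRA_PIP_PACKAGES are assumptions on my part; the dask-worker arguments match the command echoed in the worker log further down.

# Sketch of worker-spec.yml (values other than the dask-worker args are assumed)
kind: Pod
metadata:
  labels:
    foo: bar
spec:
  restartPolicy: Never
  containers:
  - name: dask
    image: daskdev/dask:latest
    imagePullPolicy: IfNotPresent
    args: [dask-worker, --nthreads, '2', --no-bokeh, --memory-limit, 6GB, --death-timeout, '60']
    env:
      - name: EXTRA_PIP_PACKAGES
        value: fastparquet git+https://github.com/dask/distributed
    resources:
      limits:
        cpu: "2"
        memory: 6G
      requests:
        cpu: "2"
        memory: 6G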

While running the example code, the worker pod is unable to connect to the scheduler. Worker pod logs below:

(base) [root@k8s-master example]# kubectl logs workerpod

...
Successfully installed distributed-2.8.1+4.g1d9aaac6 fastparquet-0.3.2 llvmlite-0.30.0 numba-0.46.0 thrift-0.13.0
+ exec dask-worker --nthreads 2 --no-bokeh --memory-limit 6GB --death-timeout 60
/opt/conda/lib/python3.7/site-packages/distributed/cli/dask_worker.py:252: UserWarning: The --bokeh/--no-bokeh flag has been renamed to --dashboard/--no-dashboard.
  "The --bokeh/--no-bokeh flag has been renamed to --dashboard/--no-dashboard. "
distributed.nanny - INFO -         Start Nanny at: 'tcp://10.32.0.2:43161'
distributed.worker - INFO -       Start worker at:      tcp://10.32.0.2:45099
distributed.worker - INFO -          Listening to:      tcp://10.32.0.2:45099
distributed.worker - INFO - Waiting to connect to:    tcp://172.16.0.76:40641
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          2
distributed.worker - INFO -                Memory:                    6.00 GB
distributed.worker - INFO -       Local Directory:           /worker-0mlqwccq
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Waiting to connect to:    tcp://172.16.0.76:40641
distributed.worker - INFO - Waiting to connect to:    tcp://172.16.0.76:40641
distributed.worker - INFO - Waiting to connect to:    tcp://172.16.0.76:40641
distributed.worker - INFO - Waiting to connect to:    tcp://172.16.0.76:40641
distributed.worker - INFO - Waiting to connect to:    tcp://172.16.0.76:40641
distributed.nanny - INFO - Closing Nanny at 'tcp://10.32.0.2:43161'
distributed.worker - INFO - Stopping worker at tcp://10.32.0.2:45099
distributed.worker - INFO - Closed worker has not yet started: None
distributed.dask_worker - INFO - Timed out starting worker
distributed.dask_worker - INFO - End worker
(base) [root@k8s-master example]#

It seems it is unable to connect to the scheduler. The kube-scheduler logs are below:

(base) [root@k8s-master example]# kubectl -n kube-system logs kube-scheduler-k8s-master
I1126 15:34:16.048901       1 serving.go:319] Generated self-signed cert in-memory
W1126 15:34:18.709418       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1126 15:34:18.709438       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1126 15:34:18.709447       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
W1126 15:34:18.709453       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1126 15:34:18.714711       1 server.go:148] Version: v1.16.3
I1126 15:34:18.714796       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W1126 15:34:18.724908       1 authorization.go:47] Authorization is disabled
W1126 15:34:18.724921       1 authentication.go:79] Authentication is disabled
I1126 15:34:18.724930       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1126 15:34:18.725582       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E1126 15:34:18.726754       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1126 15:34:18.727678       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1126 15:34:18.727685       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1126 15:34:18.727682       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1126 15:34:18.727695       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1126 15:34:18.727743       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1126 15:34:18.727819       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1126 15:34:18.727828       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1126 15:34:18.727875       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1126 15:34:18.727907       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1126 15:34:18.728054       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1126 15:34:19.729111       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1126 15:34:19.729119       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1126 15:34:19.729697       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1126 15:34:19.730823       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1126 15:34:19.731811       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1126 15:34:19.732952       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1126 15:34:19.733921       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1126 15:34:19.735081       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1126 15:34:19.736108       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1126 15:34:19.737238       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1126 15:34:19.738284       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I1126 15:34:20.825768       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-scheduler...
I1126 15:34:20.832408       1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
E1126 15:34:28.839414       1 factory.go:585] pod is already present in the activeQ
(base) [root@k8s-master example]#

The list of default ClusterRoles includes the ClusterRoles that start with the system: prefix. These are meant to be used by the various Kubernetes components: the system:kube-scheduler role is used by the scheduler, and system:node is used by the kubelets. Somehow your system:kube-scheduler ClusterRole does not contain all the rules it needs. Check it with:

kubectl get clusterrole system:kube-scheduler -o yaml

You should add to the ClusterRole all the rules it needs:

kubectl edit clusterrole system:kube-scheduler 

See https://kubernetes.io/docs/reference/access-authn-authz/rbac/ for the rule syntax.

You can find which API group each resource belongs to with:

kubectl api-resources 

statefulsets                      sts          apps                           true         StatefulSet

StatefulSets belong to the apps API group; Pods belong to the "" (core) group. For example:

rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
