
Kubernetes: no nodes available to schedule pods in CoreOS

I created a Kubernetes cluster for testing, but I cannot get an RC running: every pod fails to schedule with reason: 'failedScheduling' no nodes available to schedule pods:

I1112 04:24:34.626614       6 factory.go:214] About to try and schedule pod my-nginx-63t4p
I1112 04:24:34.626635       6 scheduler.go:127] Failed to schedule: &{{ } {my-nginx-63t4p my-nginx- default /api/v1/namespaces/default/pods/my-nginx-63t4p c4198c29-88ef-11e5-af0e-002590fdff2c 1054 0 2015-11-12 03:45:07 +0000 UTC <nil> map[app:nginx] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"my-nginx","uid":"c414bbd3-88ef-11e5-8682-002590fdf940","apiVersion":"v1","resourceVersion":"1050"}}]} {[{default-token-879cw {<nil> <nil> <nil> <nil> <nil> 0xc20834c030 <nil> <nil> <nil> <nil> <nil>}}] [{nginx nginx [] []  [{ 0 80 TCP }] [] {map[] map[]} [{default-token-879cw true /var/run/secrets/kubernetes.io/serviceaccount}] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil>}] Always 0xc20834c028 <nil> ClusterFirst map[] default  false []} {Pending []     <nil> []}}
I1112 04:24:34.626720       6 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"my-nginx-63t4p", UID:"c4198c29-88ef-11e5-af0e-002590fdff2c", APIVersion:"v1", ResourceVersion:"1054", FieldPath:""}): reason: 'failedScheduling' no nodes available to schedule pods
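To see why the scheduler is rejecting a pod, the same event is also visible from the pod itself. A minimal check, using one of the pod names from the scheduler log above (plain kubectl commands, nothing cluster-specific assumed):

kubectl describe pod my-nginx-63t4p
kubectl get events --namespace=default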

The pods are all stuck in Pending:

core@core-1-86 ~ $ kubectl get po -o wide
NAME             READY     STATUS    RESTARTS   AGE       NODE
my-nginx-3w98h   0/1       Pending   0          56m
my-nginx-4fau8   0/1       Pending   0          56m
my-nginx-9zc4f   0/1       Pending   0          56m
my-nginx-fzz5i   0/1       Pending   0          56m
my-nginx-hqqpt   0/1       Pending   0          56m
my-nginx-pm2bo   0/1       Pending   0          56m
my-nginx-rf3tk   0/1       Pending   0          56m
my-nginx-v1dj3   0/1       Pending   0          56m
my-nginx-viiop   0/1       Pending   0          56m
my-nginx-yy23r   0/1       Pending   0          56m

The example RC:

core@core-1-85 ~ $ cat wk/rc-nginx.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 10
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

And the node status in the cluster:

core@core-1-85 ~ $ kubectl get node
NAME         LABELS                              STATUS    AGE
10.12.1.90   kubernetes.io/hostname=10.12.1.90   Ready     37m
10.12.1.92   kubernetes.io/hostname=10.12.1.92   Ready     37m
10.12.1.93   kubernetes.io/hostname=10.12.1.93   Ready     37m
10.12.1.94   kubernetes.io/hostname=10.12.1.94   Ready     38m
10.12.1.95   kubernetes.io/hostname=10.12.1.95   Ready     38m
10.12.1.96   kubernetes.io/hostname=10.12.1.96   Ready     38m
10.12.1.97   kubernetes.io/hostname=10.12.1.97   Ready     38m
10.12.1.98   kubernetes.io/hostname=10.12.1.98   Ready     41m
core-1-89    kubernetes.io/hostname=core-1-89    Ready     22m
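All nodes report Ready, so this does not look like a capacity problem. As a further check, each node's reported kubelet version can be read from the node status (the node name below is just the first one from the list; kubeletVersion is part of the standard nodeInfo status):

kubectl describe node 10.12.1.90
kubectl get node 10.12.1.90 -o yaml | grep kubeletVersion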

I found the solution: the versions of kube-apiserver, kube-controller-manager, and kube-scheduler did not match the kubelet version.

Details: https://github.com/kubernetes/kubernetes/issues/17154
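For anyone hitting the same issue, a quick way to compare the versions is to ask each binary directly (run on the master and worker hosts respectively; binary names assume a standard install, adjust if you run hyperkube):

kubectl version
kube-apiserver --version
kube-scheduler --version
kubelet --version

If the apiserver/scheduler version differs from the kubelet version, upgrade the components so they match.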
