
How to add a rule to migrate pods on node failure in k8s

I have a k8s cluster running on 2 nodes and 1 master in AWS.

When I increased the replica count, all of the replica pods were spawned on the same node. Is there a way to distribute them across nodes?

sh-3.2# kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP           NODE
backend-6b647b59d4-hbfrp            1/1       Running   0          3h        100.96.3.3   node1
api-server-77765b4548-9xdql         1/1       Running   0          3h        100.96.3.1   node2
api-server-77765b4548-b6h5q         1/1       Running   0          3h        100.96.3.2   node2
api-server-77765b4548-cnhjk         1/1       Running   0          3h        100.96.3.5   node2
api-server-77765b4548-vrqdh         1/1       Running   0          3h        100.96.3.7   node2
api-db-85cdd9498c-tpqpw             1/1       Running   0          3h        100.96.3.8   node2
ui-server-84874d8cc-f26z2           1/1       Running   0          3h        100.96.3.4   node1

And when I stopped/terminated the AWS instance (node2), the pods stayed in Pending state instead of migrating to the available node. Can we specify that?

sh-3.2# kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP           NODE
backend-6b647b59d4-hbfrp            1/1       Running   0          3h        100.96.3.3   node1
api-server-77765b4548-9xdql         0/1       Pending   0          32s       <none>       <none>
api-server-77765b4548-b6h5q         0/1       Pending   0          32s       <none>       <none>
api-server-77765b4548-cnhjk         0/1       Pending   0          32s       <none>       <none>
api-server-77765b4548-vrqdh         0/1       Pending   0          32s       <none>       <none>
api-db-85cdd9498c-tpqpw             0/1       Pending   0          32s       <none>       <none>
ui-server-84874d8cc-f26z2           1/1       Running   0          3h        100.96.3.4   node1

Normally the scheduler takes this into account and tries to spread your pods, but there are many reasons why the other node might have been unschedulable at the time the pods were started. If you don't need multiple pods of the same workload on one node, you can enforce that with Pod Anti-Affinity rules, which let you declare that pods matching the same set of labels (e.g. name and version) can never run on the same node.
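
As a rough sketch only (the Deployment name api-server and the app: api-server label are assumptions based on the pod names in your output), a hard anti-affinity rule in the Deployment's pod template might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 4
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never schedule two pods carrying the label
          # app=api-server onto the same node (hostname topology).
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - api-server
            topologyKey: kubernetes.io/hostname
      containers:
      - name: api-server
        image: api-server:latest   # placeholder image

Note that with only 2 worker nodes a required rule caps the number of schedulable replicas at 2; preferredDuringSchedulingIgnoredDuringExecution is the softer variant that spreads pods where possible but still schedules them when it cannot.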
