
Pod affinity to different nodes in a StatefulSet

I am creating StatefulSets, and I want the pods within one StatefulSet to be distributed across different nodes of the k8s cluster. In my case, one StatefulSet is one database replica set.

sts.Spec.Template.Labels["mydb.io/replicaset-uuid"] = replicasetUUID.String()
// Initialize Affinity as a whole before setting PodAntiAffinity;
// assigning through a nil *corev1.Affinity pointer would panic at runtime.
sts.Spec.Template.Spec.Affinity = &corev1.Affinity{
    PodAntiAffinity: &corev1.PodAntiAffinity{
        RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{
            {
                LabelSelector: &metav1.LabelSelector{
                    MatchExpressions: []metav1.LabelSelectorRequirement{
                        {
                            Key:      "mydb.io/replicaset-uuid",
                            Operator: metav1.LabelSelectorOpIn,
                            Values:   []string{replicasetUUID.String()},
                        },
                    },
                },
                TopologyKey: "kubernetes.io/hostname",
            },
        },
    },
}

However, with these settings I get the opposite result: storage-0-0 and storage-0-1 belong to the same replica set and are running on the same node...

Moreover, they carry exactly the same label mydb.io/replicaset-uuid:

$ kubectl -n mydb get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP        NODE                           NOMINATED NODE   READINESS GATES
storage-0-0                      1/1     Running   0          40m   x.x.x.x   kubernetes-cluster-x-main-0    <none>           <none>
storage-0-1                      1/1     Running   0          39m   x.x.x.x   kubernetes-cluster-x-main-0    <none>           <none>
storage-1-0                      1/1     Running   0          40m   x.x.x.x   kubernetes-cluster-x-slave-0   <none>           <none>
storage-1-1                      1/1     Running   0          40m   x.x.x.x   kubernetes-cluster-x-slave-0   <none>           <none>
mydb-operator-58c9bfbb9b-7djml   1/1     Running   0          46m   x.x.x.x   kubernetes-cluster-x-slave-0   <none>           <none>

I suggest using a podAntiAffinity rule in the StatefulSet definition to deploy your application so that no two instances are located on the same host.

Reference: An example of a pod that uses pod affinity
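For illustration, here is a minimal sketch of that rule expressed directly in a StatefulSet manifest rather than via client-go. The names `storage-0`, `mydb`, the image, and the `example-uuid` label value are placeholders, not taken from the question:

```yaml
# Hypothetical StatefulSet: the required podAntiAffinity term tells the
# scheduler not to place two pods with the same replicaset-uuid label on
# one node (topologyKey kubernetes.io/hostname).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: storage-0
spec:
  serviceName: storage-0
  replicas: 2
  selector:
    matchLabels:
      mydb.io/replicaset-uuid: "example-uuid"   # placeholder value
  template:
    metadata:
      labels:
        mydb.io/replicaset-uuid: "example-uuid"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: mydb.io/replicaset-uuid
                operator: In
                values: ["example-uuid"]
            topologyKey: kubernetes.io/hostname
      containers:
      - name: mydb
        image: mydb:latest   # placeholder image
```

With `requiredDuringSchedulingIgnoredDuringExecution`, a pod that cannot be placed on a distinct node stays Pending, which makes scheduling conflicts visible immediately.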

It turns out the rule works correctly, as @jesmart wrote in the comment:

The setup described in the question works correctly; I had simply specified the wrong image for the application.
