
Kubernetes services for all pods and another for only the leader

In Kubernetes, is it possible to have 2 services for a single deployment: one which is "standard" and proxies in front of all ready pods, and a second service which sends traffic only to the elected leader? If so, how? I am using client-go for leader election. These are layer 4 services.

I know that a service can use labels for a selector, but client-go uses an annotation to mark the leader. Using a service without selectors and creating/removing an endpoint in the leader callbacks seems hacky/buggy. Thank you.

In Kubernetes, is it possible to have 2 services for a single deployment: one which is "standard" and proxies in front of all ready pods, and a second service that sends traffic only to the elected leader?

Yes, but it seems a bit hacky. The way services work is like this:

Service -> the Service's selector matches pod labels -> Endpoints object (same name as the Service) -> Pods (PodIP)

  1. So you could have your regular "Service" that points to all the pods in your Deployment or StatefulSet, which automatically provisions all the Endpoints.

  2. You could also have a second, manually created pair: a "Headless Service" with no selector, plus an "Endpoints" object with the same name as that Service, whose addresses you point manually at the pod of your choice (see the sketch after this list).
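For example, here is a minimal client-go sketch of 2): a selector-less headless Service plus a manually managed Endpoints object of the same name. The name `myapp-leader`, the namespace argument, and port 8080 are all made-up for illustration:

```go
package leader

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// CreateLeaderService creates a selector-less headless Service plus a
// manually managed Endpoints object of the same name. Because the Service
// has no selector, the endpoints controller will not overwrite our Endpoints.
func CreateLeaderService(ctx context.Context, cs kubernetes.Interface, ns, leaderIP string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "myapp-leader", Namespace: ns}, // name is an assumption
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless: DNS resolves straight to the endpoint IPs
			Ports:     []corev1.ServicePort{{Port: 8080}},
			// No Selector here, on purpose.
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return err
	}

	// The Endpoints object is associated with the Service by sharing its name.
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "myapp-leader", Namespace: ns},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: leaderIP}}, // the current leader pod's IP
			Ports:     []corev1.EndpointPort{{Port: 8080}},
		}},
	}
	_, err := cs.CoreV1().Endpoints(ns).Create(ctx, ep, metav1.CreateOptions{})
	return err
}
```

Because the Service has no selector, the endpoints controller leaves this Endpoints object alone, so whatever IP you write stays there until you change it.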

Now with respect to client-go/leaderelection: it works by acquiring a lock on an Endpoints or ConfigMap resource (the example shows a ConfigMap lock), and it sounds like you want the Endpoints lock. This package doesn't work with Services or labels; it operates directly on the lock resource. So essentially, if you have 3 candidates and want to find the leader, you inspect that lock resource: the one that is the leader is always recorded in its annotation (see the sketch below).
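To illustrate that last point, a client can read the current leader's identity straight off the lock resource's annotation (`control-plane.alpha.kubernetes.io/leader`, exposed by client-go as `resourcelock.LeaderElectionRecordAnnotationKey`). The lock name `myapp-election` below is hypothetical:

```go
package leader

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// WhoIsLeader reads the leader-election record that client-go stores as an
// annotation on the lock resource and returns the holder's identity.
func WhoIsLeader(ctx context.Context, cs kubernetes.Interface, ns string) (string, error) {
	ep, err := cs.CoreV1().Endpoints(ns).Get(ctx, "myapp-election", metav1.GetOptions{}) // hypothetical lock name
	if err != nil {
		return "", err
	}
	raw, ok := ep.Annotations[resourcelock.LeaderElectionRecordAnnotationKey]
	if !ok {
		return "", fmt.Errorf("no leader election record on lock")
	}
	var rec resourcelock.LeaderElectionRecord
	if err := json.Unmarshal([]byte(raw), &rec); err != nil {
		return "", err
	}
	return rec.HolderIdentity, nil // e.g. the leader pod's name
}
```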

Now how do you tie it to 2) above? When your client wins the election, it also has to update the manually created Endpoints object so that it points at the leader pod, keeping it in sync with your manually created headless Service. (This can be done in your own code, in the leader-election callbacks, as sketched below.)
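Roughly like this, assuming the `myapp-leader` Endpoints from the earlier sketch, a hypothetical `myapp-election` lock, and `POD_NAME`/`POD_IP` injected via the downward API. Note that newer client-go versions deprecate the Endpoints lock in favor of a Lease lock:

```go
package leader

import (
	"context"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// updateLeaderEndpoints repoints the manual "myapp-leader" Endpoints
// (paired by name with the headless Service from the earlier sketch)
// at the given pod IP.
func updateLeaderEndpoints(ctx context.Context, cs kubernetes.Interface, ns, ip string) error {
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "myapp-leader", Namespace: ns},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: ip}},
			Ports:     []corev1.EndpointPort{{Port: 8080}},
		}},
	}
	_, err := cs.CoreV1().Endpoints(ns).Update(ctx, ep, metav1.UpdateOptions{})
	return err
}

// RunElection blocks, campaigning for leadership; whenever this process
// wins, it rewrites the leader Endpoints to point at itself.
func RunElection(ctx context.Context, cs kubernetes.Interface, ns string) {
	id := os.Getenv("POD_NAME")  // assumed: injected via the downward API
	podIP := os.Getenv("POD_IP") // assumed: injected via the downward API

	lock := &resourcelock.EndpointsLock{ // newer client-go prefers resourcelock.LeaseLock
		EndpointsMeta: metav1.ObjectMeta{Name: "myapp-election", Namespace: ns},
		Client:        cs.CoreV1(),
		LockConfig:    resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// We just became leader: route the leader Service to us.
				_ = updateLeaderEndpoints(ctx, cs, ns, podIP)
			},
			OnStoppedLeading: func() {
				// Leadership lost; the next leader will repoint the Endpoints.
			},
		},
	})
}
```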

You could also elect to just use the Endpoints instead of 2) above (no headless Service) and have client-go/leaderelection talk directly to the Endpoints.

Another option is to take advantage of StatefulSets and their required headless Service. That Service will resolve to the IP addresses of all the replicas in your quorum-based cluster. The leader election would be up to the client package (client-go doesn't seem to support this), which is pretty much the case for most quorum-based applications (K8s, ZooKeeper, Kafka, etcd, etc.); the client is the one that finds out who the leader is.
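For illustration, an in-cluster client could enumerate the replicas by resolving the headless Service's DNS name (the `myapp.default` name below is hypothetical); figuring out which replica is the leader is then up to the application's own protocol:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// A headless Service publishes one DNS A record per ready pod.
	// "myapp.default.svc.cluster.local" is a hypothetical name: the "myapp"
	// Service in the "default" namespace, resolved from inside the cluster.
	ips, err := net.LookupHost("myapp.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println("replica:", ip) // ask each replica who the leader is
	}
}
```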

✌️
