
How to make a multi-regional Kafka/Zookeeper cluster using multiple Google Kubernetes Engine (GKE) clusters?

I have 3 GKE clusters sitting in 3 different regions on Google Cloud Platform. I would like to create a Kafka cluster which has one Zookeeper and one Kafka node (broker) in every region (each GKE cluster).

This set-up is intended to survive regional failure (I know a whole GCP region going down is rare and highly unlikely).

I am trying this set-up using this Helm chart provided by the Kubernetes incubator.

I tried this setup manually on 3 GCP VMs following this guide and was able to do it without any issues.

However, setting up a Kafka cluster on Kubernetes seems complicated.

As we know, we have to provide the IPs (or resolvable hostnames) of all the Zookeeper servers in each Zookeeper configuration file, like below:

...
# list of servers
server.1=0.0.0.0:2888:3888
server.2=<Ip of second server>:2888:3888
server.3=<ip of third server>:2888:3888
...
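As a side note, each Zookeeper server also needs a `myid` file in its data directory whose number matches that server's own `server.N` entry; this is a minimal sketch (the data directory path here is made up):

```shell
# Each Zookeeper node needs a myid file whose number matches its own
# server.N entry in the config file (the data dir path is illustrative).
ZK_DATA_DIR=/tmp/zookeeper-data
mkdir -p "$ZK_DATA_DIR"
echo 2 > "$ZK_DATA_DIR/myid"   # this node would be server.2 in the list above
cat "$ZK_DATA_DIR/myid"
```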

As I can see in the Helm chart, the config-script.yaml file has a script which creates the Zookeeper configuration file for every deployment.

The part of the script which echoes the Zookeeper servers looks something like below:

...
for (( i=1; i<=$ZK_REPLICAS; i++ ))
do
   echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" >> $ZK_CONFIG_FILE
done
...
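To see what this loop emits, you can run it standalone with made-up values for the chart's environment variables (the variable names match the snippet above; the values are only illustrative). Note that the Pod ordinals start at 0 while the server IDs start at 1, hence the `$((i-1))`:

```shell
# Stand-alone simulation of the chart's loop; all values are hypothetical.
NAME=release-name-zookeeper
DOMAIN=release-name-zookeeper-headless.default.svc.cluster.local
ZK_REPLICAS=3
ZK_SERVER_PORT=2888
ZK_ELECTION_PORT=3888
ZK_CONFIG_FILE=/tmp/zoo.cfg

: > "$ZK_CONFIG_FILE"   # truncate before appending
for (( i=1; i<=ZK_REPLICAS; i++ ))
do
   echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" >> "$ZK_CONFIG_FILE"
done
cat "$ZK_CONFIG_FILE"
```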

As of now, with one replica (replica here means Kubernetes Pod replicas), the configuration that this Helm chart creates contains only the Zookeeper server below:

...
# "release-name" is the name of the Helm release
server.1=release-name-zookeeper-0.release-name-zookeeper-headless.default.svc.cluster.local:2888:3888
...

At this point I am clueless and do not know what to do so that all the Zookeeper servers get included in the configuration file.

How shall I modify the script?

I see you are trying to create a 3-node Zookeeper cluster on top of 3 different GKE clusters.

This is not an easy task, and I am sure there are multiple ways to achieve it, but I will show you one way in which it can be done, and I believe it should solve your problem.

The first thing you need to do is create a LoadBalancer service for every Zookeeper instance. After the LoadBalancers are created, note down the IP addresses that got assigned (remember that by default these IP addresses are ephemeral, so you might want to change them later to static).
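A minimal sketch of one such per-instance Service follows. The pod-name selector and labels are assumptions based on how the incubator chart typically labels its StatefulSet pods, and the internal-LB annotation is a GKE-specific assumption; check both against your chart and GKE version. Keeping the LB internal matters here because a private DNS zone can only resolve to addresses reachable inside the VPC:

```yaml
# Hypothetical Service for ONE Zookeeper pod; repeat per instance.
apiVersion: v1
kind: Service
metadata:
  name: release-name-zookeeper-0-lb
  annotations:
    # keep the load balancer internal to the VPC (GKE annotation)
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: zookeeper
    # StatefulSet pods carry this per-pod label, which lets one Service
    # target exactly one Zookeeper instance
    statefulset.kubernetes.io/pod-name: release-name-zookeeper-0
  ports:
    - name: client
      port: 2181
    - name: server
      port: 2888
    - name: leader-election
      port: 3888
```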

The next thing to do is to create a private DNS zone on GCP and create an A record for every Zookeeper LoadBalancer endpoint, e.g.:

release-name-zookeeper-1.zookeeper.internal.
release-name-zookeeper-2.zookeeper.internal.
release-name-zookeeper-3.zookeeper.internal.
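Assuming you use the `gcloud` CLI, the zone and records can be created roughly like this. The zone name, VPC network name, and IP addresses below are placeholders you must replace with your own; also note that a private zone only resolves from the VPC networks you authorize, so all three clusters' networks need to be listed (or the clusters must share one VPC):

```shell
# Create a private Cloud DNS zone visible to the VPC(s) the GKE clusters use.
# All names and IPs below are placeholders.
gcloud dns managed-zones create zookeeper-internal \
  --dns-name="zookeeper.internal." \
  --description="Zookeeper cross-cluster discovery" \
  --visibility=private \
  --networks="my-vpc-network"

# One A record per Zookeeper LoadBalancer IP.
gcloud dns record-sets create "release-name-zookeeper-1.zookeeper.internal." \
  --zone=zookeeper-internal --type=A --ttl=300 --rrdatas="10.0.1.10"
gcloud dns record-sets create "release-name-zookeeper-2.zookeeper.internal." \
  --zone=zookeeper-internal --type=A --ttl=300 --rrdatas="10.0.2.10"
gcloud dns record-sets create "release-name-zookeeper-3.zookeeper.internal." \
  --zone=zookeeper-internal --type=A --ttl=300 --rrdatas="10.0.3.10"
```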

In GCP, these appear as A records in the Cloud DNS private zone.

After it's done, just modify this line:

...
DOMAIN=`hostname -d`
...

to something like this:

...
DOMAIN={{ .Values.domain }}
...

and remember to set the domain variable in the values file to zookeeper.internal,
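For example, with a hypothetical `domain` key in the chart's values.yaml (the key name must match whatever you reference as `.Values.domain` in the template):

```yaml
# values.yaml (hypothetical key added for this customization)
domain: zookeeper.internal
```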

so in the end it should look like this:

DOMAIN=zookeeper.internal

and it should generate the following config:

...
server.1=release-name-zookeeper-1.zookeeper.internal:2888:3888
server.2=release-name-zookeeper-2.zookeeper.internal:2888:3888
server.3=release-name-zookeeper-3.zookeeper.internal:2888:3888
...

Let me know if it is helpful.
