Not able to connect to kafka brokers

I've deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on my on-prem k8s cluster. I'm trying to expose it using a TCP controller with nginx.

My TCP nginx configmap looks like:

data:
  "<zookeper-tcp-port>": <namespace>/cp-zookeeper:2181
  "<kafka-tcp-port>": <namespace>/cp-kafka:9092

And I've made the corresponding entries in my nginx ingress controller:

  - name: <zookeeper-tcp-port>-tcp
    port: <zookeeper-tcp-port>
    protocol: TCP
    targetPort: <zookeeper-tcp-port>-tcp
  - name: <kafka-tcp-port>-tcp
    port: <kafka-tcp-port>
    protocol: TCP
    targetPort: <kafka-tcp-port>-tcp

Now I'm trying to connect to my kafka instance. When I just try to connect to the IP and port using kafka tools, I get the error message:

Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please proved bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]

When I enter what I assume are the correct broker addresses (I've tried them all...), I get a timeout. There are no logs coming from the nginx controller except:

[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:14 +0000] TCP 200 0 0 0.001

From the pod kafka-zookeeper-0 I'm getting loads of:

[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port>  (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)

Though I'm not sure these have anything to do with it?

Any ideas on what I'm doing wrong? Thanks in advance.

TL;DR:

  • Change the value nodeport.enabled to true inside cp-kafka/values.yaml before deploying.
  • Change the service name and ports in your TCP NGINX ConfigMap and Ingress object.
  • Set bootstrap-server on your kafka tools to <Cluster_External_IP>:31090

Explanation:

The Headless Service was created alongside the StatefulSet. The created service will not be given a clusterIP, but will instead simply include a list of Endpoints. These Endpoints are then used to generate instance-specific DNS records in the form of: <StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local

It creates a DNS name for each pod, e.g.:

[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
  • This is what makes these services connect to each other inside the cluster.

I went through a lot of trial and error until I realized how it was supposed to be working. Based on your TCP Nginx ConfigMap, I believe you faced the same issue.

  • The Nginx ConfigMap asks for: <PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>".
  • I realized that you don't need to expose the Zookeeper, since it's an internal service handled by the kafka brokers.
  • I also realized that you are trying to expose cp-kafka:9092, which is the headless service, also only used internally, as I explained above.
  • In order to get outside access you have to set the parameter nodeport.enabled to true, as stated here: External Access Parameters.
  • It adds one service for each kafka-N pod during chart deployment.
  • Then you change your configmap to map to one of them:
data:
"31090": default/demo-cp-kafka-0-nodeport:31090

Note that the created service has the selector statefulset.kubernetes.io/pod-name: demo-cp-kafka-0; this is how the service identifies the pod it is intended to connect to.
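For reference, the per-pod NodePort service that the chart generates should look roughly like the sketch below (field values are assumptions based on the nodeport settings and the kubectl get svc output shown further down):

apiVersion: v1
kind: Service
metadata:
  name: demo-cp-kafka-0-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
  - name: external-broker
    nodePort: 31090        # firstListenerPort + pod ordinal (assumed)
    port: 19092            # servicePort from values.yaml
    protocol: TCP
    targetPort: 31090
  selector:
    app: cp-kafka
    statefulset.kubernetes.io/pod-name: demo-cp-kafka-0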

  • Edit the nginx-ingress-controller:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
  • Set your kafka tools to <Cluster_External_IP>:31090

Reproduction:

  • Snippet edited in cp-kafka/values.yaml:

nodeport:
  enabled: true
  servicePort: 19092
  firstListenerPort: 31090
  • Deploy the chart:
$ helm install demo cp-helm-charts
$ kubectl get pods
NAME                                       READY   STATUS    RESTARTS   AGE
demo-cp-control-center-6d79ddd776-ktggw    1/1     Running   3          113s
demo-cp-kafka-0                            2/2     Running   1          113s
demo-cp-kafka-1                            2/2     Running   0          94s
demo-cp-kafka-2                            2/2     Running   0          84s
demo-cp-kafka-connect-79689c5c6c-947c4     2/2     Running   2          113s
demo-cp-kafka-rest-56dfdd8d94-79kpx        2/2     Running   1          113s
demo-cp-ksql-server-c498c9755-jc6bt        2/2     Running   2          113s
demo-cp-schema-registry-5f45c498c4-dh965   2/2     Running   3          113s
demo-cp-zookeeper-0                        2/2     Running   0          112s
demo-cp-zookeeper-1                        2/2     Running   0          93s
demo-cp-zookeeper-2                        2/2     Running   0          74s

$ kubectl get svc
NAME                         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
demo-cp-control-center       ClusterIP   10.0.13.134   <none>        9021/TCP            50m
demo-cp-kafka                ClusterIP   10.0.15.71    <none>        9092/TCP            50m
demo-cp-kafka-0-nodeport     NodePort    10.0.7.101    <none>        19092:31090/TCP     50m
demo-cp-kafka-1-nodeport     NodePort    10.0.4.234    <none>        19092:31091/TCP     50m
demo-cp-kafka-2-nodeport     NodePort    10.0.3.194    <none>        19092:31092/TCP     50m
demo-cp-kafka-connect        ClusterIP   10.0.3.217    <none>        8083/TCP            50m
demo-cp-kafka-headless       ClusterIP   None          <none>        9092/TCP            50m
demo-cp-kafka-rest           ClusterIP   10.0.14.27    <none>        8082/TCP            50m
demo-cp-ksql-server          ClusterIP   10.0.7.150    <none>        8088/TCP            50m
demo-cp-schema-registry      ClusterIP   10.0.7.84     <none>        8081/TCP            50m
demo-cp-zookeeper            ClusterIP   10.0.9.119    <none>        2181/TCP            50m
demo-cp-zookeeper-headless   ClusterIP   None          <none>        2888/TCP,3888/TCP   50m
  • Create the TCP configmap:
$ cat nginx-tcp-configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  31090: "default/demo-cp-kafka-0-nodeport:31090"

$ kubectl apply -f nginx-tcp-configmap.yaml
configmap/tcp-services created
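Depending on how the nginx ingress controller was installed, it may also need to be started with the --tcp-services-configmap flag pointing at this ConfigMap. Many installs already set it, but if yours does not, the container args would look roughly like this (binary path and arg placement are assumptions about your deployment):

        args:
        - /nginx-ingress-controller
        - --tcp-services-configmap=kube-system/tcp-services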
  • Edit the Nginx Ingress Controller:
$ kubectl edit deploy nginx-ingress-controller -n kube-system

$ kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
{{{suppressed output}}}
        ports:
        - containerPort: 31090
          hostPort: 31090
          protocol: TCP
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
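If your controller sits behind a Service of type LoadBalancer rather than using hostPort, the new port may also need to be opened on that Service; a rough sketch (the service name here is an assumption):

$ kubectl edit svc nginx-ingress-controller -n kube-system
  ports:
  - name: kafka-broker-0
    port: 31090
    protocol: TCP
    targetPort: 31090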
  • My ingress is on IP 35.226.189.123, now let's try to connect from outside the cluster. For that, I'll connect to another VM where I have a minikube, so I can use a kafka-client pod to test:
user@minikube:~$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
kafka-client   1/1     Running   0          17h

user@minikube:~$ kubectl exec kafka-client -it -- bin/bash

root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/# 

As you can see, I was able to access the kafka from outside.
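For completeness, the message consumed above (a date string) can be produced through the same external endpoint; a sketch of how to do it, assuming the demo-topic topic already exists or auto-creation is enabled:

root@kafka-client:/# date | kafka-console-producer --broker-list 35.226.189.123:31090 --topic demo-topic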

  • If you need external access to Zookeeper as well, I'll leave a service model for you:

zookeeper-external-0.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cp-zookeeper
    pod: demo-cp-zookeeper-0
  name: demo-cp-zookeeper-0-nodeport
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: external-broker
    nodePort: 31181
    port: 12181
    protocol: TCP
    targetPort: 31181
  selector:
    app: cp-zookeeper
    statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
  sessionAffinity: None
  type: NodePort
  • It will create a service for it:
NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
demo-cp-zookeeper-0-nodeport   NodePort    10.0.5.67     <none>        12181:31181/TCP     2s
  • Patch your configmap:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
  "31181": default/demo-cp-zookeeper-0-nodeport:31181
  • Add the Ingress rule:
        ports:
        - containerPort: 31181
          hostPort: 31181
          protocol: TCP
  • Test it with your external IP:
pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
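From that shell you can run a quick sanity check that the brokers registered themselves in Zookeeper (hypothetical output; the ids depend on your cluster):

ls /brokers/ids
[0, 1, 2]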

If you have any doubts, let me know in the comments!
