Not able to connect to kafka brokers
I've deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on my on-prem k8s cluster. I'm trying to expose it using a TCP controller with nginx.
My TCP nginx ConfigMap looks like:
data:
  "<zookeeper-tcp-port>": <namespace>/cp-zookeeper:2181
  "<kafka-tcp-port>": <namespace>/cp-kafka:9092
And I've made the corresponding entry in my nginx ingress controller:
- name: <zookeeper-tcp-port>-tcp
  port: <zookeeper-tcp-port>
  protocol: TCP
  targetPort: <zookeeper-tcp-port>-tcp
- name: <kafka-tcp-port>-tcp
  port: <kafka-tcp-port>
  protocol: TCP
  targetPort: <kafka-tcp-port>-tcp
Now I'm trying to connect to my kafka instance. When I just try to connect to the IP and port using kafka tools, I get the error message:
Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please provide bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]
When I enter what I assume are the correct broker addresses (I've tried them all...) I get a timeout. There are no logs coming from the nginx controller except:
[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:14 +0000] TCP 200 0 0 0.001
From the pod kafka-zookeeper-0 I'm getting loads of:
[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port> (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
Though I'm not sure these have anything to do with it?
Any ideas on what I'm doing wrong? Thanks in advance.
TL;DR:
- Change the value nodeport.enabled to true inside cp-kafka/values.yaml before deploying.
- Set bootstrap-server on your kafka tools to <Cluster_External_IP>:31090
Explanation:
The Headless Service was created alongside the StatefulSet. The created service will not be given a clusterIP, but will instead simply include a list of Endpoints. These Endpoints are then used to generate instance-specific DNS records in the form of: <StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local
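That record format can be sketched as a tiny helper (pod_dns is a hypothetical function, purely to illustrate the naming pattern):

```python
def pod_dns(statefulset: str, ordinal: int, service: str, namespace: str) -> str:
    """Build the instance-specific DNS record for a StatefulSet pod."""
    # <StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local
    return f"{statefulset}-{ordinal}.{service}.{namespace}.svc.cluster.local"

print(pod_dns("my-confluent-cp-kafka", 0, "my-confluent-cp-kafka-headless", "default"))
# → my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
```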
It creates a DNS name for each pod, e.g.:
[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
I've gone through a lot of trial and error until I realized how it was supposed to be working. Based on your TCP nginx ConfigMap, I believe you faced the same issue.
The nginx ConfigMap expects an entry in the form <PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>". You were exposing cp-kafka:9092, which is the headless service, also only used internally, as I explained above. To expose Kafka outside the cluster, we need to set nodeport.enabled to true, as stated here: External Access Parameters. The chart then creates one NodePort service per broker, and the ConfigMap entry should point at it instead:

data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
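To make the `<Namespace>/<Service>:<InternallyExposedPort>` mapping format concrete, here is a small parser for one tcp-services entry (parse_tcp_service is a hypothetical helper, just to illustrate how the value decomposes):

```python
def parse_tcp_service(exposed_port: str, target: str) -> dict:
    """Split a tcp-services ConfigMap entry into its parts."""
    namespace_service, internal_port = target.rsplit(":", 1)   # "<ns>/<svc>" and port
    namespace, service = namespace_service.split("/", 1)
    return {
        "exposed_port": int(exposed_port),
        "namespace": namespace,
        "service": service,
        "internal_port": int(internal_port),
    }

print(parse_tcp_service("31090", "default/demo-cp-kafka-0-nodeport:31090"))
# → {'exposed_port': 31090, 'namespace': 'default', 'service': 'demo-cp-kafka-0-nodeport', 'internal_port': 31090}
```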
Note that the created service has the selector statefulset.kubernetes.io/pod-name: demo-cp-kafka-0; this is how the service identifies the pod it is intended to connect to.
Next, the port has to be exposed on the nginx ingress controller deployment:

- containerPort: 31090
  hostPort: 31090
  protocol: TCP

Then set your kafka tools bootstrap server to <Cluster_External_IP>:31090.
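Before pointing kafka tools at that address, a plain TCP reachability check can rule out ingress problems (a sketch; is_reachable is a hypothetical helper, and the host/port are whatever your cluster exposes):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. is_reachable("<Cluster_External_IP>", 31090)
```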
Reproduction:
- Snippet edited in cp-kafka/values.yaml:
nodeport:
  enabled: true
  servicePort: 19092
  firstListenerPort: 31090
$ helm install demo cp-helm-charts
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-cp-control-center-6d79ddd776-ktggw 1/1 Running 3 113s
demo-cp-kafka-0 2/2 Running 1 113s
demo-cp-kafka-1 2/2 Running 0 94s
demo-cp-kafka-2 2/2 Running 0 84s
demo-cp-kafka-connect-79689c5c6c-947c4 2/2 Running 2 113s
demo-cp-kafka-rest-56dfdd8d94-79kpx 2/2 Running 1 113s
demo-cp-ksql-server-c498c9755-jc6bt 2/2 Running 2 113s
demo-cp-schema-registry-5f45c498c4-dh965 2/2 Running 3 113s
demo-cp-zookeeper-0 2/2 Running 0 112s
demo-cp-zookeeper-1 2/2 Running 0 93s
demo-cp-zookeeper-2 2/2 Running 0 74s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-control-center ClusterIP 10.0.13.134 <none> 9021/TCP 50m
demo-cp-kafka ClusterIP 10.0.15.71 <none> 9092/TCP 50m
demo-cp-kafka-0-nodeport NodePort 10.0.7.101 <none> 19092:31090/TCP 50m
demo-cp-kafka-1-nodeport NodePort 10.0.4.234 <none> 19092:31091/TCP 50m
demo-cp-kafka-2-nodeport NodePort 10.0.3.194 <none> 19092:31092/TCP 50m
demo-cp-kafka-connect ClusterIP 10.0.3.217 <none> 8083/TCP 50m
demo-cp-kafka-headless ClusterIP None <none> 9092/TCP 50m
demo-cp-kafka-rest ClusterIP 10.0.14.27 <none> 8082/TCP 50m
demo-cp-ksql-server ClusterIP 10.0.7.150 <none> 8088/TCP 50m
demo-cp-schema-registry ClusterIP 10.0.7.84 <none> 8081/TCP 50m
demo-cp-zookeeper ClusterIP 10.0.9.119 <none> 2181/TCP 50m
demo-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 50m
$ cat nginx-tcp-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  "31090": "default/demo-cp-kafka-0-nodeport:31090"
$ kubectl apply -f nginx-tcp-configmap.yaml
configmap/tcp-services created
$ kubectl edit deploy nginx-ingress-controller -n kube-system
$ kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
{{{suppressed output}}}
ports:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
- containerPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  name: https
  protocol: TCP
The ingress controller is exposed on 35.226.189.123; now let's try to connect from outside the cluster. For that I'll connect to another VM where I have a minikube, so I can use a kafka-client pod to test:

user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-client 1/1 Running 0 17h
user@minikube:~$ kubectl exec kafka-client -it -- bin/bash
root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/#
As you can see, I was able to access Kafka from outside the cluster.
If you want to expose zookeeper as well, create a NodePort service for it; here is zookeeper-external-0.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cp-zookeeper
    pod: demo-cp-zookeeper-0
  name: demo-cp-zookeeper-0-nodeport
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: external-broker
    nodePort: 31181
    port: 12181
    protocol: TCP
    targetPort: 31181
  selector:
    app: cp-zookeeper
    statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
  sessionAffinity: None
  type: NodePort
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-zookeeper-0-nodeport NodePort 10.0.5.67 <none> 12181:31181/TCP 2s
Then add the new port to the tcp-services ConfigMap:

data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
  "31181": default/demo-cp-zookeeper-0-nodeport:31181
And expose it on the ingress controller deployment as well:

ports:
- containerPort: 31181
  hostPort: 31181
  protocol: TCP
pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
If you have any doubts, let me know in the comments!