
strimzi - unable to access Kafka broker from external machine

I've set up Kafka (Strimzi) on GKE (GCP), following the link below:

https://snourian.com/kafka-kubernetes-strimzi-part-1-creating-deploying-strimzi-kafka/

Access using a Kafka producer/consumer from within GKE works fine; however, when I try to use a Kafka producer/consumer from an external client, it fails.

Here is the yaml to create the single-node Kafka cluster; it has an external nodeport listener defined on port 9094:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.0.0
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external
        port: 9094
        type: nodeport
        tls: false  
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      log.message.format.version: "3.0"
      inter.broker.protocol.version: "3.0"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 2Gi
        deleteClaim: false
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: "INFO"
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 2Gi
      deleteClaim: false
    resources:
      requests:
        memory: 1Gi
        cpu: "1"
      limits:
        memory: 2Gi
        cpu: "1.5"
    logging:
      type: inline
      loggers:
        zookeeper.root.logger: "INFO"
  entityOperator:
    topicOperator: {}
    userOperator: {}
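
As an aside, with the listener above Kubernetes picks random node ports; Strimzi's listener configuration can pin the ports (and the advertised host) if fixed values are preferred. A sketch of the external listener only, with hypothetical port numbers (the address is the node IP from the cluster status):

```yaml
- name: external
  port: 9094
  type: nodeport
  tls: false
  configuration:
    bootstrap:
      nodePort: 32100              # hypothetical fixed node port for the bootstrap service
    brokers:
      - broker: 0
        nodePort: 32000            # hypothetical fixed node port for broker 0
        advertisedHost: 34.136.145.53   # address external clients should be told to use
```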

Output of the Kafka custom resource, showing the bootstrap server and port:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  creationTimestamp: "2021-11-17T18:10:44Z"
  generation: 2
  name: my-cluster
  namespace: kafka
  resourceVersion: "108620"
  uid: e14c3351-5433-44d9-bc10-32e2087abd0f
spec:
  entityOperator:
    topicOperator: {}
    userOperator: {}
  kafka:
    config:
      inter.broker.protocol.version: "3.0"
      log.message.format.version: "3.0"
      offsets.topic.replication.factor: 1
      transaction.state.log.min.isr: 1
      transaction.state.log.replication.factor: 1
    listeners:
    - name: plain
      port: 9092
      tls: false
      type: internal
    - name: tls
      port: 9093
      tls: true
      type: internal
    - name: external
      port: 9094
      tls: false
      type: nodeport
    logging:
      loggers:
        kafka.root.logger.level: INFO
      type: inline
    replicas: 1
    storage:
      type: jbod
      volumes:
      - deleteClaim: false
        id: 0
        size: 2Gi
        type: persistent-claim
    version: 3.0.0
  zookeeper:
    logging:
      loggers:
        zookeeper.root.logger: INFO
      type: inline
    replicas: 1
    resources:
      limits:
        cpu: "1.5"
        memory: 2Gi
      requests:
        cpu: "1"
        memory: 1Gi
    storage:
      deleteClaim: false
      size: 2Gi
      type: persistent-claim
status:
  clusterId: NbxD1VTWSOWc_t6pr3y83A
  conditions:
  - lastTransitionTime: "2021-11-17T22:57:00.651Z"
    status: "True"
    type: Ready
  listeners:
  - addresses:
    - host: my-cluster-kafka-bootstrap.kafka.svc
      port: 9092
    bootstrapServers: my-cluster-kafka-bootstrap.kafka.svc:9092
    type: plain
  - addresses:
    - host: my-cluster-kafka-bootstrap.kafka.svc
      port: 9093
    bootstrapServers: my-cluster-kafka-bootstrap.kafka.svc:9093
    certificates:
    - |
      -----BEGIN CERTIFICATE-----
      Wd+ilHpL0ehDzbkAQOdxsYR/AhIzVH2hC9AopUFIllVPiLoEgB6FJfcbbXBwKCss
      dLG2rF3jCnizKi+VX+NUGETZNw45LFzZ1SOUUpRrjRpM
      -----END CERTIFICATE-----
    type: tls
  - addresses:
    - host: 34.136.145.53
      port: 31045
    bootstrapServers: 34.136.145.53:31045
    type: external
  observedGeneration: 2
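
The address an external client must use comes straight from the status.listeners entry of type external above. Pulling it apart (a sketch; the kubectl jsonpath in the comment is roughly how it could be fetched from a live cluster):

```shell
# Bootstrap address copied from the status shown above. On a live cluster it
# could be fetched with something like:
#   kubectl get kafka my-cluster -n kafka \
#     -o jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}'
external_bootstrap="34.136.145.53:31045"
host="${external_bootstrap%:*}"    # node external IP
port="${external_bootstrap##*:}"   # node port of the external bootstrap service
echo "external clients connect to $host on node port $port"
```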

$CONFLUENT_HOME/bin/kafka-console-producer --broker-list 34.136.145.53:31045 --topic my-topic
>hello from external producer
[2021-11-17 15:42:33,135] WARN [Producer clientId=console-producer] Bootstrap broker 34.136.145.53:31045 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-11-17 15:42:56,603] WARN [Producer clientId=console-producer] Bootstrap broker 34.136.145.53:31045 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-11-17 15:43:23,590] WARN [Producer clientId=console-producer] Bootstrap broker 34.136.145.53:31045 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)

Any ideas on what needs to be done to fix or debug this? TIA!

Another thing: I'm able to ping the IP and probe the port from my local machine. (Note, though, that nc with the -u flag probes UDP, and for UDP "succeeded" only means no ICMP error came back, so it does not actually prove the TCP node port is reachable.)

nc -vnzu 34.136.145.53 31045
Connection to 34.136.145.53 port 31045 [udp/*] succeeded!

PING 34.136.145.53 (34.136.145.53): 56 data bytes
64 bytes from 34.136.145.53: icmp_seq=0 ttl=56 time=63.156 ms
64 bytes from 34.136.145.53: icmp_seq=1 ttl=56 time=62.426 ms
64 bytes from 34.136.145.53: icmp_seq=2 ttl=56 time=63.191 ms 

Update: telnet does not succeed, though ping works. Does the port need to be opened up?

Karans-MacBook-Pro:~ karanalang$ telnet 34.136.145.53 31045
Trying 34.136.145.53...

Update: I had to create a firewall rule to allow access to the port, and now I'm able to telnet:

gcloud compute firewall-rules create test-node-port --allow tcp:31045

However, there is now another error: the topic is not accessible. Do I need to open additional ports?

[2021-11-17 22:44:58,237] ERROR Error when sending message to topic my-topic with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Topic my-topic not present in metadata after 60000 ms.

There were 2 nodeports in use, and both had to be granted access by creating firewall rules in GCP.
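
This matches how Strimzi exposes a nodeport listener: one node-port service for bootstrap plus one per broker, and an external client needs to reach both (bootstrap for metadata, then the broker's own node port for produce/fetch). A sketch of opening both, where 31045 is the bootstrap port from the status above and 31234 merely stands in for the per-broker port (hypothetical value); the loop prints the gcloud commands to run:

```shell
# The node-port services can be listed with:
#   kubectl get svc -n kafka -l strimzi.io/cluster=my-cluster
# Then one firewall rule per node port:
for p in 31045 31234; do   # 31234 is a placeholder for the per-broker node port
  echo "gcloud compute firewall-rules create kafka-nodeport-$p --allow tcp:$p"
done
```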

That fixed the issue.
