
How to set up multiple brokers on the Kafka Cluster with jhipster

I created a basic app with jhipster and added Apache Kafka. I have no problem producing and consuming, even with another solution (from my app to a PHP client for Kafka). Now I want to create multiple brokers on the cluster, but from Java rather than the .sh scripts.

I know the cluster is set up with the server.properties file, where the broker id, the log dir and other settings are defined. But in my jhipster app the broker id is declared in kafka.yml, so I guess I have to edit the .yml files to declare another broker.

version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.2.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SYNC_LIMIT: 2
    ports:
      - 2181:2181
  kafka:
    image: confluentinc/cp-kafka:5.2.1
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    ports:
      - 9092:9092

The goal is to have one jhipster app with Kafka that creates multiple brokers in the cluster instead of one, so that I can then have multiple topics. So far I don't have any results.
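For context, the producing side today is a plain Kafka Java client along these lines (just a sketch: the topic name, serializers and bootstrap address are placeholders, not the exact app code):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // A single broker for now; with more brokers this would list all of them.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "my-topic" is a placeholder topic name
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}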

With this docker-compose.yml you get a cluster with three brokers. The brokers are accessible from inside docker as kafka1:9092, kafka2:9092, kafka3:9092 and from the docker host as localhost:19092, localhost:29092, localhost:39092:

version: "3.7"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka1:
    image: confluentinc/cp-server:5.4.0
    hostname: kafka1
    container_name: kafka1
    depends_on:
      - zookeeper
    ports:
      - "19092:19092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092,PLAINTEXT_HOST://localhost:19092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'

  kafka2:
    image: confluentinc/cp-server:5.4.0
    hostname: kafka2
    container_name: kafka2
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 102
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka2:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'

  kafka3:
    image: confluentinc/cp-server:5.4.0
    hostname: kafka3
    container_name: kafka3
    depends_on:
      - zookeeper
    ports:
      - "39092:39092"
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka3:9092,PLAINTEXT_HOST://localhost:39092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
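To check from the docker host that all three brokers actually joined the cluster, a small Java AdminClient sketch like the following can help (the addresses come from the port mappings above; the class name is just an example):

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Addresses exposed to the docker host by the compose file above
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:19092,localhost:29092,localhost:39092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Print every broker the cluster metadata knows about
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.println("Broker id=" + node.id() + " at " + node.host() + ":" + node.port());
            }
        }
    }
}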

You can create multiple brokers with confluentinc/cp-kafka by adding more broker services to your docker-compose.yml file:

version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.2.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SYNC_LIMIT: 2
    ports:
      - 2181:2181
  kafka-1:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-1
    ports:
      - "19092:19092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:19092

  kafka-2:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-2
    ports:
      - "29092:29092"
    depends_on:
       - zookeeper
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-2:29092

  kafka-3:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-3
    ports:
      - "39092:39092"
    depends_on:
       - zookeeper
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-3:39092
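Once the three brokers are up, you can exercise the cluster by creating a replicated topic with the Java AdminClient. A sketch (note that with this compose file the brokers advertise kafka-1/kafka-2/kafka-3, so the client host must be able to resolve those names, e.g. via /etc/hosts; the topic name is a placeholder):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hostnames advertised by the compose file above; must resolve on the client host
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "kafka-1:19092,kafka-2:29092,kafka-3:39092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 3: each partition gets a copy on every broker
            NewTopic topic = new NewTopic("replicated-topic", 3, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}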

Reference: https://better-coding.com/building-apache-kafka-cluster-using-docker-compose-and-virtualbox/

"Therefore I would have multiple topics" ==> Not sure to understand, but you don't need multiple brokers to have multiple topics, you can have multiple topics handled by only one broker. “因此我会有多个主题” ==> 不太明白,但是您不需要多个代理才能拥有多个主题,您可以仅由一个代理处理多个主题。

I don't really know jhipster, but it looks like your yml file is exactly like a docker compose file, so I'll give you my 2 cents as if it were all started by docker compose.

You first need your brokers to connect to the same zookeeper cluster, which should be fine from what I saw (if you declare all your brokers in the same docker compose yml file).

You need to set your advertised listeners to the IP address your clients will use; if you use localhost, they won't be able to connect to your broker:

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092

should be something like:

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://EXPOSEDIPADDRESS:9092

You might also add LISTENERS, like: KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092

Be sure to use a different broker ID for each broker, and different ports for each of your brokers (if they run on the same box behind docker).
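On the client side that means pointing bootstrap.servers at an address that is reachable from the client and matches what the broker advertises. A minimal consumer sketch, where EXPOSEDIPADDRESS, the group id and the topic name are placeholders:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // After the bootstrap connection, the client talks to whatever address the broker
        // advertises, so that address must be reachable from this machine.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "EXPOSEDIPADDRESS:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}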

Yannick
