
Spring Boot Kafka consumer cannot connect to the Kafka container

I'm trying to deploy 2 Spring Boot apps (a Kafka producer and a consumer). When I deploy the producer to Docker everything is fine, but when I deploy my consumer it doesn't work, because it has no connection to the Kafka container.

The log shows me this error:

2019-11-17 05:32:22.644  WARN 1 --- [main] o.a.k.c.NetworkClient: [Consumer clientId=consumer-1, groupId=exampleGroup] Connection to node -1 could not be established. Broker may not be available.

My docker-compose.yml is:

version: '3'

services:

  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    restart: always
    ports:
      - 2181:2181

  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    restart: always
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    links:
      - zookeeper:zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "topic1:1:1"

My KafkaConfig class:

@EnableKafka
@Configuration
public class KafkaConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory(){
        Map<String, Object> config = new HashMap<>();

        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstants.KAFKA_BROKERS);
        config.put(ConsumerConfig.GROUP_ID_CONFIG, "exampleGroup");
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
      //  config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, KafkaConstants.ENABLE_AUTO_COMMIT_CONFIG);
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, KafkaConstants.OFFSET_RESET_EARLIER);
       // config.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, KafkaConstants.SESSION_TIMEOUT_MS);

        return new DefaultKafkaConsumerFactory<>(config);
    }
}

And the constants class:

public class KafkaConstants {

    public static String KAFKA_BROKERS = "localhost:9092";
    public static Integer MESSAGE_COUNT=1000;
    public static String TOPIC_NAME="demo";
    public static String GROUP_ID_CONFIG="exampleGroup";
    public static Integer MAX_NO_MESSAGE_FOUND_COUNT=100;
    public static String OFFSET_RESET_LATEST="latest";
    public static String OFFSET_RESET_EARLIER="earliest";
    public static Integer MAX_POLL_RECORDS=1;
    public static Integer SESSION_TIMEOUT_MS = 180000;
    public static Integer REQUEST_TIMEOUT_MS_CONFIG = 181000;
    public static String ENABLE_AUTO_COMMIT_CONFIG = "false";
    public static Integer AUTO_COMMIT_INTERVAL_MS_CONFIG = 8000;
}

When I install Zookeeper and Kafka on my computer and run these 2 Spring Boot apps from IntelliJ, everything works fine. The problem is when I deploy to my local Docker.

Can you please help me?

UPDATE

Updating my docker-compose:

version: '3'

services:

  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    restart: always
    ports:
      - 2181:2181

  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    restart: always
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    links:
      - zookeeper:zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "ACC_GROUP_CREATE:1:1"

  consumer:
    image: micro1
    container_name: micro1
    depends_on:
      - kafka
    restart: always
    ports:
      - 8088:8088
    links:
      - kafka:kafka

  producer:
    image: micro2
    container_name: micro2
    depends_on:
      - kafka
    restart: always
    ports:
      - 8087:8087
    links:
      - kafka:kafka

It works fine based on the response of @hqt, but I don't know why I need to add those consumer/producer services.

The problem is caused by the KAFKA_ADVERTISED_HOST_NAME attribute. Here is the documentation that explains why Kafka needs the advertised address:

The key thing is that when you run a client, the broker you pass to it is just where it's going to go and get the metadata about brokers in the cluster from. The actual host & IP that it will connect to for reading/writing data is based on the data that the broker passes back in that initial connection—even if it's just a single node and the broker returned is the same as the one connected to.

When you set KAFKA_ADVERTISED_HOST_NAME to localhost:

  • Your app that runs from IntelliJ runs in the host environment. This host created the Kafka container and published its port, so access to localhost:9092 from the host points to the Kafka container.
  • When your app runs inside a container, localhost:9092 means the container itself, so it is meaningless (that container doesn't even have any process listening on port 9092). See the sketch after this list.
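
To make that concrete, here is a minimal annotated sketch of the kafka service (the values come from the compose files above; only the comments are added here), showing which advertised value suits which kind of client:

  kafka:
    image: wurstmeister/kafka
    ports:
      - 9092:9092    # published port: clients on the host reach the broker at localhost:9092
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Advertising localhost only works for clients running on the host (the apps
      # started from IntelliJ), because the metadata answer "localhost:9092" is
      # reachable there through the published port:
      # KAFKA_ADVERTISED_HOST_NAME: localhost
      #
      # Clients running in other containers on the same compose network need the
      # service name instead, so the metadata answer "kafka:9092" resolves via Docker DNS:
      KAFKA_ADVERTISED_HOST_NAME: kafka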

Updating the KAFKA_ADVERTISED_HOST_NAME attribute to kafka works when the web app runs inside the container environment. Note that your web app and the kafka container must be on the same Docker network.

Here is the proposed docker-compose for running the Kafka cluster using the wurstmeister images:

version: "2"
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - 2181:2181

  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_CREATE_TOPICS: "topic1:1:1"

  web_app:
    # your definition of the web_app goes  here

Then you can connect to the Kafka brokers on the address kafka:9092 inside the container environment.
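
That is also why the consumer/producer services had to be added to the same compose file: they have to sit on the same network and point at kafka:9092 instead of localhost:9092. Below is a sketch of one way to pass the address in; the variable name KAFKA_BOOTSTRAP_SERVERS is hypothetical, and since the posted code hardcodes KafkaConstants.KAFKA_BROKERS, the app would need a small change to read it:

  consumer:
    image: micro1
    depends_on:
      - kafka
    ports:
      - 8088:8088
    environment:
      # hypothetical variable; the application must read it and use it as bootstrap.servers
      KAFKA_BOOTSTRAP_SERVERS: kafka:9092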

This is a common problem, and the authoritative documentation you need to read and understand is https://www.confluent.io/blog/kafka-listeners-explained

I'm copying its tl;dr here for reference:

"You need to set advertised.listeners (or KAFKA_ADVERTISED_LISTENERS if you're using Docker images) to the external address (host/IP) so that clients can correctly connect to it. Otherwise, they'll try to connect to the internal host address—and if that's not reachable, then problems ensue."
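
Following that article, a broker can also advertise two listeners at once so that other containers and the host machine each get an address they can actually reach. A sketch for the wurstmeister image (the listener names INSIDE/OUTSIDE and the host port 29092 are choices made for this example, not values from the post above):

  kafka:
    image: wurstmeister/kafka
    ports:
      - 29092:29092
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # INSIDE is advertised to other containers on the compose network,
      # OUTSIDE is advertised to clients on the host machine
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:29092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE

With this setup, containerized apps connect to kafka:9092, while an app started from IntelliJ on the host connects to localhost:29092.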
