
I want to communicate my client and Kafka broker with Docker Compose

The client, Kafka, and ZooKeeper are on the same network, and I am trying to connect from the client to Kafka with SERVICE_NAME:PORT, but

driver-service-container | 2022-07-24 09:00:05.076 WARN 1 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

I get an error.

I know that containers on the same network can easily communicate using the service name, but I don't understand why it doesn't work here.

The name of my client trying to communicate with Kafka is driver-service.

I looked through these resources, but according to them my method should work:

Connect to Kafka running in Docker

My Python/Java/Spring/Go/Whatever Client Won't Connect to My Apache Kafka Cluster in Docker/AWS/My Brother's Laptop. Please Help

driver-service GitHub repository

My docker-compose file:

version: '3'
services:

  gateway-server:
    image: gateway-server-image
    container_name: gateway-server-container
    ports:
      - '5555:5555'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - PASSENGER_SERVICE_URL=172.24.2.4:4444
      - DRIVER_SERVICE_URL=172.24.2.5:3333
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.6

  driver-service:
    image: driver-service-image
    container_name: driver-service-container
    ports:
      - '3333:3333'
    environment:
      - NOTIFICATION_SERVICE_URL=172.24.2.3:8888
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - KAFKA_GROUP_ID=driver-group-id
      - KAFKA_BOOTSTRAP_SERVERS=broker:29092
      - kafka.consumer.group.id=driver-group-id
      - kafka.consumer.enable.auto.commit=true
      - kafka.consumer.auto.commit.interval.ms=1000
      - kafka.consumer.auto.offset.reset=earliest
      - kafka.consumer.max.poll.records=1
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.5

  passenger-service:
    image: passenger-service-image
    container_name: passenger-service-container
    ports:
      - '4444:4444'
    environment:
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.4

  notification-service:
    image: notification-service-image
    container_name: notification-service-container
    ports:
      - '8888:8888'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.3

  payment-service:
    image: payment-service-image
    container_name: payment-service-container
    ports:
      - '7777:7777'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.2

  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - microservicesNetwork

  broker:
    image: confluentinc/cp-kafka:7.0.1
    container_name: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      GROUP_ID: driver-group-id
      KAFKA_CREATE_TOPICS: "product"
    networks:
      - microservicesNetwork

  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "8080:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=broker
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=broker:29092
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
      - KAFKA_CLUSTERS_0_READONLY=true
    networks:
      - microservicesNetwork


  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    platform: linux/x86_64
    environment:
      - discovery.type=single-node
      - max_open_files=65536
      - max_content_length_in_bytes=100000000
      - transport.host= elasticsearch
    volumes:
      - $HOME/app:/var/app
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - microservicesNetwork

  postgresql:
    image: postgres:11.1-alpine
    platform: linux/x86_64
    container_name: postgresql
    volumes:
      - ./postgresql/:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=123456
      - POSTGRES_USER=postgres
      - POSTGRES_DB=cqrs_db
    ports:
      - "5432:5432"
    networks:
      - microservicesNetwork

networks:
  microservicesNetwork:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.24.2.0/16
          gateway: 172.24.2.1

application.prod.properties:

#datasource
spring.datasource.url=jdbc:h2:mem:db_driver
spring.datasource.username=root
spring.datasource.password=1234
spring.datasource.driver-class-name=org.h2.Driver
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
#need spring-security config.
spring.h2.console.enabled=false
spring.h2.console.path=/h2-console
spring.jpa.show-sql=true
service.security.secure-key-username=${SECURE_KEY_USERNAME}
service.security.secure-key-password=${SECURE_KEY_PASSWORD}

payment.service.url=${PAYMENT_SERVICE_URL}
notification.service.url=${NOTIFICATION_SERVICE_URL}

#kafka configs
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
kafka.group.id =${KAFKA_GROUP_ID}
spring.cache.cache-names=driver
spring.jackson.serialization.fail-on-empty-beans= false
spring.http.multipart.max-file-size=10MB
spring.http.multipart.max-request-size=11MB

If the error says localhost/127.0.0.1:9092, then your environment variable isn't being used.

In the startup logs from the container, look at the AdminClientConfig or ConsumerConfig sections, and you'll see the real bootstrap address that's used.

KAFKA_BOOTSTRAP_SERVERS=broker:29092 is correct, based on your KAFKA_ADVERTISED_LISTENERS.
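To spell out why that value is the right one: the broker advertises two listeners, one per network environment. A sketch, repeating the relevant lines from the broker service in the compose file above:

```yaml
# Two advertised listeners, one per network environment:
#  - PLAINTEXT://localhost:9092        -> for clients on the host,
#    reaching the broker through the published port "9092:9092"
#  - PLAINTEXT_INTERNAL://broker:29092 -> for containers on the same
#    compose network (e.g. driver-service), resolved via Docker DNS
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
```

A client inside the compose network that connects to `localhost:9092` is pointing at its own container, not the broker, which matches the error in the question.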

But it's unclear how this is used in your properties without seeing your config class:

kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}

If you read the Spring Kafka documentation closely, you'll see it needs to be spring.kafka.bootstrap-servers in order to be wired in automatically.
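A minimal sketch of that change in application.prod.properties, assuming Spring Boot's Kafka auto-configuration is on the classpath:

```properties
# Spring Boot only auto-wires this exact key into its Kafka clients.
# A custom key such as kafka.bootstrap.servers is ignored unless a
# config class reads it explicitly.
spring.kafka.bootstrap-servers=${KAFKA_BOOTSTRAP_SERVERS}
```

With this key, the consumer, producer, and admin clients created by Spring Boot all pick up `broker:29092` without any extra wiring.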

Sidenote: all those kafka.consumer. attributes would need to be set as JVM properties, not container environment variables.

Also, Docker services should be configured to communicate with each other by service names, not assigned IP addresses.
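For example, the hard-coded addresses in the compose file could use service names instead (a sketch; the ports are the ones already published above):

```yaml
  driver-service:
    environment:
      # Docker's embedded DNS resolves service names on a user-defined
      # network, so the fixed ipv4_address values are unnecessary.
      - NOTIFICATION_SERVICE_URL=notification-service:8888
      - PAYMENT_SERVICE_URL=payment-service:7777
```

This also removes the need for the static `ipam` subnet configuration, since addresses no longer have to be predictable.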

Problem solved 😊

If I run driver-service on my local computer, it connects via localhost:9092; but if driver-service and Kafka are in the same Docker network, it needs to connect via "KAFKA_IP:29092" (the service name can be used instead of KAFKA_IP). Kafka expects us to configure a listener for each of these different network environments ( Source ). When I ran my driver-service application on my local computer, Kafka and driver-service could communicate, but they could not communicate inside the Docker network. That is, driver-service was not using the Kafka connection address I had defined in the application.prod.properties file, which my application should use while running in Docker.

The problem was in my Spring Kafka integration. I was trying to give my client application the Kafka address via the kafka.bootstrap.servers key in my properties file: I defined the key there and pulled its value into a KafkaBean class, but the client did not see it and persistently tried to connect to localhost:9092. First, I specified my active profile in my Dockerfile with

ENTRYPOINT ["java", "-Dspring.profiles.active=prod", "-jar", "driver-service-0.0.2-SNAPSHOT.jar"]

so that my application.prod.properties file is used while running in the Docker environment. Then, if we use the key spring.kafka.bootstrap-servers instead of kafka.bootstrap.servers, as stated in the Spring Kafka documentation ( SOURCE ), Spring can automatically detect which address it can connect to Kafka at. I just had to give the producer the Kafka address as well, using the @Value annotation, so that driver-service and Kafka could communicate seamlessly in the Docker network 😇
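A sketch of the producer-side wiring described above (the class and field names here are illustrative, not taken from the repository):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    // Resolved from application.prod.properties, which in turn reads
    // the KAFKA_BOOTSTRAP_SERVERS environment variable from compose,
    // so inside the Docker network this becomes broker:29092 instead
    // of the localhost:9092 default.
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
```

This is a framework-configuration sketch rather than a standalone program; with Spring Boot's auto-configured `spring.kafka.bootstrap-servers` key in place, an explicit factory like this is only needed if you want custom producer settings.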

Thank you very much @OneCricketeer and @Svend for your help.

