
I want my client and the Kafka broker to communicate with each other using Docker Compose.

The client, Kafka, and ZooKeeper are all on the same network, and I am trying to connect from the client to Kafka using SERVICE_NAME:PORT, but I get this error:

driver-service-container | 2022-07-24 09:00:05.076 WARN 1 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

I know that containers on the same network can easily communicate with each other using the service name, but I don't understand why it isn't working here.

The client trying to communicate with Kafka is named driver-service.

I looked through these resources, and according to them my approach should work:

Connect to Kafka running in Docker

My Python/Java/Spring/Go/Whatever Client Won't Connect to My Apache Kafka Cluster in Docker/AWS/My Brother's Laptop. Please Help

driver-service GitHub repository

My docker-compose file:

version: '3'
services:

  gateway-server:
    image: gateway-server-image
    container_name: gateway-server-container
    ports:
      - '5555:5555'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - PASSENGER_SERVICE_URL=172.24.2.4:4444
      - DRIVER_SERVICE_URL=172.24.2.5:3333
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.6

  driver-service:
    image: driver-service-image
    container_name: driver-service-container
    ports:
      - '3333:3333'
    environment:
      - NOTIFICATION_SERVICE_URL=172.24.2.3:8888
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - KAFKA_GROUP_ID=driver-group-id
      - KAFKA_BOOTSTRAP_SERVERS=broker:29092
      - kafka.consumer.group.id=driver-group-id
      - kafka.consumer.enable.auto.commit=true
      - kafka.consumer.auto.commit.interval.ms=1000
      - kafka.consumer.auto.offset.reset=earliest
      - kafka.consumer.max.poll.records=1
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.5

  passenger-service:
    image: passenger-service-image
    container_name: passenger-service-container
    ports:
      - '4444:4444'
    environment:
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.4

  notification-service:
    image: notification-service-image
    container_name: notification-service-container
    ports:
      - '8888:8888'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.3

  payment-service:
    image: payment-service-image
    container_name: payment-service-container
    ports:
      - '7777:7777'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.2

  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - microservicesNetwork

  broker:
    image: confluentinc/cp-kafka:7.0.1
    container_name: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      GROUP_ID: driver-group-id
      KAFKA_CREATE_TOPICS: "product"
    networks:
      - microservicesNetwork

  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "8080:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=broker
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=broker:29092
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
      - KAFKA_CLUSTERS_0_READONLY=true
    networks:
      - microservicesNetwork


  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    platform: linux/x86_64
    environment:
      - discovery.type=single-node
      - max_open_files=65536
      - max_content_length_in_bytes=100000000
      - transport.host= elasticsearch
    volumes:
      - $HOME/app:/var/app
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - microservicesNetwork

  postgresql:
    image: postgres:11.1-alpine
    platform: linux/x86_64
    container_name: postgresql
    volumes:
      - ./postgresql/:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=123456
      - POSTGRES_USER=postgres
      - POSTGRES_DB=cqrs_db
    ports:
      - "5432:5432"
    networks:
      - microservicesNetwork

networks:
  microservicesNetwork:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.24.2.0/16
          gateway: 172.24.2.1

application.prod.properties ->

#datasource
spring.datasource.url=jdbc:h2:mem:db_driver
spring.datasource.username=root
spring.datasource.password=1234
spring.datasource.driver-class-name=org.h2.Driver
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
#need spring-security config.
spring.h2.console.enabled=false
spring.h2.console.path=/h2-console
spring.jpa.show-sql=true
service.security.secure-key-username=${SECURE_KEY_USERNAME}
service.security.secure-key-password=${SECURE_KEY_PASSWORD}

payment.service.url=${PAYMENT_SERVICE_URL}
notification.service.url=${NOTIFICATION_SERVICE_URL}

#kafka configs
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
kafka.group.id =${KAFKA_GROUP_ID}
spring.cache.cache-names=driver
spring.jackson.serialization.fail-on-empty-beans= false
spring.http.multipart.max-file-size=10MB
spring.http.multipart.max-request-size=11MB

If the error says localhost/127.0.0.1:9092, then your environment variable isn't being used.

In the startup logs from the container, look at the AdminClientConfig or ConsumerConfig sections and you'll see the real bootstrap address that is being used.

KAFKA_BOOTSTRAP_SERVERS=broker:29092 is correct based on your KAFKA_ADVERTISED_LISTENERS.

But in your properties, it's unclear how this line is used without seeing your config class:

kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}

If you read the Spring Kafka documentation closely, you'll see the property needs to be spring.kafka.bootstrap-servers in order to be wired in automatically.
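As a rough illustration of that auto-wiring (the class and method names below are made up, not taken from the driver-service repository): once spring.kafka.bootstrap-servers is set, Spring Boot's auto-configuration builds the producer factory itself, so a KafkaTemplate can simply be injected.

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class DriverEventPublisher { // hypothetical class, for illustration only

    private final KafkaTemplate<String, String> kafkaTemplate;

    // Spring Boot auto-configures this KafkaTemplate from spring.kafka.bootstrap-servers
    public DriverEventPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publish(String topic, String message) {
        kafkaTemplate.send(topic, message);
    }
}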

Side note: all those kafka.consumer. attributes would need to be set as JVM properties, not container environment variables.

Also, Docker services should be configured to communicate with each other by service name, not by assigned IP addresses.

Problem solved 😊

If I run driver-service on my local computer, it does connect through localhost:9092. But when driver-service and Kafka are in the same Docker network, it has to connect through KAFKA_IP:29092 (the service name can be used instead of KAFKA_IP): Kafka expects a separate listener to be configured for each of these network environments (Source). So when I ran driver-service on my local computer, it could communicate with Kafka, but inside the same Docker network it could not. In other words, driver-service was not using the Kafka address I had defined in the application.prod.properties file that the application should use while running in Docker.

The problem was in my Spring Kafka integration. I was giving my client application the Kafka address through the kafka.bootstrap.servers key in my properties file and reading that key's value in a KafkaBean class, but the client never saw it and kept trying to connect to localhost:9092.

First, I set the active profile in my Dockerfile with ENTRYPOINT ["java", "-Dspring.profiles.active=prod", "-jar", "driver-service-0.0.2-SNAPSHOT.jar"] so that application.prod.properties is used in the Docker environment. Then, by using the key spring.kafka.bootstrap-servers instead of kafka.bootstrap.servers, as stated in the Spring Kafka documentation (SOURCE), Spring can automatically detect which address to use to connect to Kafka. Finally, I also gave the producer the Kafka address with the @Value annotation, and now driver-service and Kafka communicate seamlessly in the Docker network 😇
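For reference, here is a minimal sketch of such a producer configuration, assuming String keys and values; the class name and serializers are assumptions, since the actual KafkaBean class isn't shown above:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig { // hypothetical name; the real config class is not shown here

    // Resolved from application.prod.properties, which in turn reads the
    // KAFKA_BOOTSTRAP_SERVERS environment variable (broker:29092 inside the Docker network)
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}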

Thank you very much, @OneCricketeer and @Svend, for your help.
