Connect to Kafka from another docker container

This is my compose setup:

version: "3"

services:
  zoo:
    image: confluentinc/cp-zookeeper:7.2.1
    hostname: zoo
    container_name: zoo
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: zoo:2888:3888

  kafka:
    image: confluentinc/cp-kafka:7.2.1
    hostname: kafka
    container_name: kafka
    ports:
      - "9092:9092"
      - "29092:29092"
      - "9999:9999"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:19092,EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092,DOCKER://host.docker.internal:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,DOCKER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: ${DOCKER_HOST_IP:-127.0.0.1}
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
    depends_on:
      - zoo

  fastapi:
    build: ./apiservice
    image: entryservice
    ports:
      - "8000:8000"
    restart: always
    depends_on:
      - zoo
      - kafka

My FastAPI service looks like this:

from typing import Union
from confluent_kafka.admin import AdminClient, NewTopic
from confluent_kafka import Producer
from fastapi import FastAPI
from pydantic import BaseModel

admin = AdminClient({'bootstrap.servers': 'kafka:9092'})
producer = Producer({'bootstrap.servers': 'kafka:9092'})

new_topic = NewTopic("topic1", num_partitions=3, replication_factor=1)
new_topic2 = NewTopic("topic2", num_partitions=3, replication_factor=1)

fs = admin.create_topics([new_topic, new_topic2])

class Message(BaseModel):
    description: str

app = FastAPI()

@app.on_event("startup")
async def startup_event():
    for topic, f in fs.items():
        try:
            f.result()
            print(f"Topic {topic} created")
        except Exception as e:
            print(f"Error: topic {topic} could not be created: {e}")

@app.post("/")
def create_message(message: Message):
    producer.produce('topic1', message.description.encode('utf-8'))
    producer.flush()
    return {"message": message.description}

When I start my service I get the following:

fastapi_1  | %3|1667288143.941|FAIL|rdkafka#producer-1| [thrd:127.0.0.1:9092/1]: 127.0.0.1:9092/1: Connect to ipv4#127.0.0.1:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)

Running the same code locally (only changing kafka:9092 to localhost:9092 ) works totally fine. I assume that something in my Kafka configuration does not allow connections from anywhere other than localhost. I took the setup from a GitHub repo, so to be honest I don't know too much about it. Does anyone know what to change to allow my other service to connect to it?
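For context, the FAIL line above already hints at what is happening: a Kafka client uses the bootstrap address only to fetch cluster metadata, then drops it and reconnects to whatever address the broker *advertises*. A toy sketch of that resolution, using the listener map from KAFKA_ADVERTISED_LISTENERS in the compose file above (the `resolve_broker` function is illustrative, not part of any real Kafka client API):

```python
# Advertised listeners taken from KAFKA_ADVERTISED_LISTENERS in the compose
# file (assuming DOCKER_HOST_IP is unset, so EXTERNAL falls back to 127.0.0.1).
ADVERTISED = {
    'INTERNAL': 'kafka:19092',
    'EXTERNAL': '127.0.0.1:9092',
    'DOCKER': 'host.docker.internal:29092',
}

def resolve_broker(listener_name: str) -> str:
    """After bootstrapping, the client discards the bootstrap address and
    connects to the advertised address of the listener it bootstrapped on."""
    return ADVERTISED[listener_name]

# Bootstrapping on port 9092 hits the EXTERNAL listener, so the client is
# redirected to 127.0.0.1:9092 -- which, inside the fastapi container,
# points at the container itself: hence "Connection refused".
print(resolve_broker('EXTERNAL'))  # 127.0.0.1:9092
```

So the initial TCP connection to kafka:9092 actually succeeds; it is the follow-up connection to the advertised 127.0.0.1:9092 that is refused.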

It seems you are missing the network in your docker-compose. Try to create a network as shown below. You can learn more about networks in the Docker documentation.

Edit

I tried your code and found that the problem is just the Kafka port number you are trying to connect to. From inside another container you have to use the INTERNAL listener port (19092), not the EXTERNAL one (9092). These listeners are configured in the KAFKA_ADVERTISED_LISTENERS environment variable. If you want to reach your dockerized Kafka from PyCharm, i.e. from outside Docker (external), you have to use 9092; but if you want to reach it from inside another container (internal), you have to use the internal port.
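In the FastAPI service above, the only change needed is the bootstrap address. A minimal sketch (the `conf` dict is then passed to `AdminClient` and `Producer` exactly as in the original code):

```python
# Bootstrap address for clients running inside the compose network.
# "kafka" and 19092 come from the INTERNAL entry of KAFKA_ADVERTISED_LISTENERS.
conf = {'bootstrap.servers': 'kafka:19092'}  # was 'kafka:9092'

# Passed unchanged to the confluent_kafka clients:
# admin = AdminClient(conf)
# producer = Producer(conf)
print(conf['bootstrap.servers'])
```

Code running on the host keeps using localhost:9092, since that is what the EXTERNAL listener advertises.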

version: "3"

services:
  zoo:
    image: confluentinc/cp-zookeeper:7.2.1
    hostname: zoo
    container_name: zoo
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: zoo:2888:3888
    networks:
      - kafka_default

  kafka:
    image: confluentinc/cp-kafka:7.2.1
    hostname: kafka
    container_name: kafka
    ports:
      - "9092:9092"
      - "29092:29092"
      - "9999:9999"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:19092,EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092,DOCKER://host.docker.internal:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,DOCKER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: ${DOCKER_HOST_IP:-127.0.0.1}
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
    depends_on:
      - zoo
    networks:
      - kafka_default

  fastapi:
    build: ./apiservice
    image: entryservice
    ports:
      - "8000:8000"
    restart: always
    depends_on:
      - zoo
      - kafka
    networks:
      - kafka_default

networks:
  kafka_default:
    driver: bridge
