Cannot connect to kafka docker container from logstash docker container
I am trying to connect to a kafka docker container from a logstash docker container but I always get the following message:
Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
My docker-compose.yml file is:
version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
    networks:
      - elk
    depends_on:
      - kafka

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000"
      - "9600:9600"
    links:
      - kafka
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  zookeeper:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    container_name: zookeeper
    command: [
      "sh", "-c",
      "bin/zookeeper-server-start.sh config/zookeeper.properties"
    ]
    ports:
      - "2181:2181"
    networks:
      - elk
    environment:
      LOG_DIR: /tmp/logs

  kafka:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    networks:
      - elk
    environment:
      LOG_DIR: "/tmp/logs"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:
and my logstash.conf file is:
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["logs"]
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "changeme"
  }
}
All my containers are running normally, and I can send messages to the Kafka topics from outside the containers.
You need to define your listener based on the hostname at which it can be resolved from the client. If the listener is localhost, then the client (logstash) will try to resolve it as localhost from its own container, hence the error.
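To see why, note that hostname resolution happens on the client side. A minimal Python sketch (standard library only) of what happens inside the logstash container when the broker advertises localhost:

```python
import socket

# The broker's advertised listener tells the client where to connect back.
advertised_host = "localhost"

# The client resolves that name itself. Inside the logstash container,
# "localhost" is the logstash container's OWN loopback, not the broker's,
# so the client ends up dialing itself and the connection fails.
resolved = socket.gethostbyname(advertised_host)
print(resolved)  # 127.0.0.1
```

The same resolution happens for every address Kafka returns in its metadata response, which is why the bootstrap address alone being correct is not enough.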
I've written about this in detail here, but in essence you need this:
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
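Since the strimzi image in the compose file above passes broker settings via --override flags, the split between a container-facing and a host-facing listener can be wired up roughly like this (a sketch; the listener names PLAINTEXT / PLAINTEXT_HOST and the extra listener.security.protocol.map override are my assumptions, not taken verbatim from the original answer):

```yaml
kafka:
  image: strimzi/kafka:0.11.3-kafka-2.1.0
  command: [
    "sh", "-c",
    "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override listener.security.protocol.map=$${KAFKA_LISTENER_SECURITY_PROTOCOL_MAP} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
  ]
  ports:
    - "9092:9092"          # only the host-facing listener needs publishing
  environment:
    # One listener per "audience": other containers vs. the host machine.
    KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

Kafka requires every listener to have a unique name and port, which is why the two listeners cannot both be called PLAINTEXT.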
Then any container on the Docker network uses kafka:29092 to reach it, so the logstash config becomes:

bootstrap_servers => "kafka:29092"
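In context, only bootstrap_servers changes in the pipeline shown above:

```
input {
  kafka {
    bootstrap_servers => "kafka:29092"
    topics => ["logs"]
  }
}
```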
Any client on the host machine itself continues to use localhost:9092.
You can see this in action with Docker Compose here: https://github.com/confluentinc/demo-scene/blob/master/build-a-streaming-pipeline/docker-compose.yml#L40
The Kafka advertised listeners should be defined like this:

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
KAFKA_LISTENERS: PLAINTEXT://kafka:9092
You can use the host machine's IP address for the Kafka advertised listeners; that way both your docker services and the services running outside your docker network can access it.

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://$HOST_IP:9092
KAFKA_LISTENERS: PLAINTEXT://$HOST_IP:9092
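If you go this route, $HOST_IP must be the machine's routable address, not 127.0.0.1. One common best-effort way to discover it, sketched in Python (the 8.8.8.8 address is arbitrary; connect() on a UDP socket sends no packets, it only selects the outbound interface):

```python
import socket

def host_ip() -> str:
    """Best-effort LAN IP of this machine, for use as $HOST_IP."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # UDP connect() transmits nothing; it just asks the OS which
        # local interface would route toward this destination.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        # No default route (e.g. fully offline): fall back to loopback.
        return "127.0.0.1"
    finally:
        s.close()

print(host_ip())
```

Keep in mind the advertised address is baked into the broker's metadata responses, so clients on other machines must be able to reach exactly this IP and port.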
For reference, you can go through this article: https://rmoff.net/2018/08/02/kafka-listeners-explained/