
About an error running a docker-compose file with Kafka, Zookeeper and Elasticsearch

Good afternoon, I'm trying to debug the execution of a docker-compose file that runs a couple of microservices I developed, plus a Kafka node, a Zookeeper node, an Elasticsearch node and finally kibana-sense.

When I run the docker-compose up command, the following exception appears:

demo-kafka-elastic_1  | Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]
...
web - 2018-05-14 16:47:12,750 [elasticsearch[Neurotap][generic][T#1]] WARN  org.elasticsearch.client.transport - [Neurotap] node {#transport#-1}{elastic}{172.21.0.5:9300} not part of the cluster Cluster [elasticsearch_aironman], ignoring...

This is the link to the full output; it is large...

This is my current docker-compose.yml file:

version: '3.6'
services:
  demo-kafka-elastic:
    image: aironman/demo-kafka-elastic:0.0.1-SNAPSHOT
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.5"
          memory: 512M
      restart_policy:
        condition: on-failure

  demo-quartz:
    image: aironman/demo-quartz:0.0.1-SNAPSHOT
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.5"
          memory: 512M
      restart_policy:
        condition: on-failure

  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"

  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  elastic:
    image: elasticsearch:2.4.0
    container_name: elastic
    environment:
      - cluster.name=elastic-cluster
      - http.host=0.0.0.0
      - network.publish_host=127.0.0.1
      - transport.tcp.port=9700
      - discovery.type=single-node
      - xpack.security.enabled=false
      - client.transport.sniff=false
    volumes:
      - ./esdata/:/usr/share/elasticsearch/data/
    ports:
      - "9600:9200"
      - "9700:9700"

  kibana:
    image: seeruk/docker-kibana-sense:4.5
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_URL=http://elastic:9200

I am sure that I am pushing messages into the Kafka topic, and I can consume from that topic, but I cannot insert the recovered JSON into elasticsearch-2.4.0. My question is: why does it fail under docker-compose when the same processes work in local mode, with Kafka, Zookeeper and Elasticsearch running directly on my machine? What do I have to change in the docker-compose.yml file to make it work?
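For context, an Elasticsearch transport client will only join nodes whose cluster name matches its own setting, which is exactly what the "not part of the cluster" warning above is complaining about. In a Spring Boot service this setting usually lives in application.properties. A minimal sketch, assuming the service connects through Spring Data Elasticsearch and using the hostname and transport port from the compose file above (the concrete values are assumptions, not taken from the project's actual config):

```
# application.properties (sketch)
# Must match the cluster name the Elasticsearch node actually runs with,
# or the client rejects the node as "not part of the cluster".
spring.data.elasticsearch.cluster-name=elastic-cluster
# Inside the compose network the service is reachable by its service name,
# and the compose file sets transport.tcp.port=9700.
spring.data.elasticsearch.cluster-nodes=elastic:9700
```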

I have seen this link on Stack Overflow, but it does not work for me.

This is the link to demo-kafka-elastic and this one to demo-quartz.

UPDATE!

I have read this possible solution in the official forum: using the elasticsearch-head plugin I can see which cluster name the Docker container assigns by default, and I have tried to match it with the cluster.name environment variable in the docker-compose file, but it doesn't work.

Ok, got it! The problem was that my application.properties file contained a property named

spring.data.elasticsearch.cluster-name=elasticsearch_aironman

which was interfering with the cluster.name environment variable that I set in docker-compose.yml.

Deleting that line from the application.properties file did the trick.
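One hedged note on why deleting the line may have worked: the official elasticsearch:2.4.0 image does not map arbitrary environment variables such as cluster.name into node settings (that mechanism arrived with the later 5.x+ images), so the container was likely still running with the default cluster name, elasticsearch, which is also the Spring Data Elasticsearch default once the property is removed. If you want the name to be explicit on both sides with a 2.x image, a sketch of a consistent pairing (values are illustrative, not from the original project):

```
# docker-compose.yml excerpt (sketch): with 2.x images, node settings are
# passed as -Des.* command-line flags rather than environment variables.
  elastic:
    image: elasticsearch:2.4.0
    command: elasticsearch -Des.cluster.name=elastic-cluster

# application.properties in each client service must then use the same name:
spring.data.elasticsearch.cluster-name=elastic-cluster
```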
