
Clustering Dockerized Elasticsearch across multiple Docker hosts

I am trying to form a cluster with Docker Compose. I have two Elasticsearch containers deployed on different Docker hosts.

Docker version: 18.06.3-ce
Elasticsearch: 6.5.2

docker-compose.yml for docker-container-1

 services:
   elasticsearch:
     restart: always
     hostname: elasticsearch
     image: docker-elk/elasticsearch:1.0.0
     build:
       context: elasticsearch
       dockerfile: Dockerfile
     environment:
       discovery.type: zen
     ports:
       - 9200:9200
       - 9300:9300
     env_file:
       - ./elasticsearch/elasticsearch.env
     volumes:
       - elasticsearch_data:/usr/share/elasticsearch/data

docker-compose.yml for docker-container-2

 services:
   elasticsearch:
     restart: always
     hostname: elasticsearch
     image: docker-elk/elasticsearch:1.0.0
     build:
       context: elasticsearch
       dockerfile: Dockerfile
     environment:
       discovery.type: zen
     ports:
       - 9200:9200
       - 9300:9300
     env_file:
       - ./elasticsearch/elasticsearch.env
     volumes:
       - elasticsearch_data:/usr/share/elasticsearch/data
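Since the two containers run on separate hosts, the transport port has to be reachable in both directions before zen discovery can work. A quick check (host1 and host2 are the placeholder hostnames used in the configs; substitute the real addresses):

```shell
# From Docker host 1: verify host2's transport port is open.
# Zen discovery talks over 9300, not the 9200 HTTP port.
nc -zv host2 9300

# From Docker host 2, check the reverse direction:
nc -zv host1 9300
```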

elasticsearch.yml on the elasticsearch-docker-container-1 on the Docker-Host 1

 xpack.security.enabled: true
 cluster.name: es-cluster
 node.name: es1
 network.host: 0.0.0.0
 node.master: true
 node.data: true
 transport.tcp.port: 9300
 path.data: /usr/share/elasticsearch/data
 path.logs: /usr/share/elasticsearch/logs
 discovery.zen.minimum_master_nodes: 2
 gateway.recover_after_nodes: 1
 discovery.zen.ping.unicast.hosts: ["host1:9300", "host2:9300","host1:9200", "host2:9200"]
 network.publish_host: host1

elasticsearch.yml on the elasticsearch-docker-container-2 on the Docker-Host 2

 xpack.security.enabled: true
 cluster.name: es-cluster
 node.name: es2
 network.host: 0.0.0.0
 node.master: true
 node.data: true
 transport.tcp.port: 9300
 path.data: /usr/share/elasticsearch/data
 path.logs: /usr/share/elasticsearch/logs
 discovery.zen.minimum_master_nodes: 2
 gateway.recover_after_nodes: 1
 discovery.zen.ping.unicast.hosts: ["host1:9300", "host2:9300","host1:9200", "host2:9200"]
 network.publish_host: host2

Below is the result of GET /_cluster/health?pretty; it shows only one node.

 {
   "cluster_name" : "dps_geocluster",
   "status" : "yellow",
   "timed_out" : false,
   "number_of_nodes" : 1,
   "number_of_data_nodes" : 1,
   "active_primary_shards" : 33,
   "active_shards" : 33,
   "relocating_shards" : 0,
   "initializing_shards" : 0,
   "unassigned_shards" : 30,
   "delayed_unassigned_shards" : 0,
   "number_of_pending_tasks" : 0,
   "number_of_in_flight_fetch" : 0,
   "task_max_waiting_in_queue_millis" : 0,
   "active_shards_percent_as_number" : 52.38095238095239
 }

According to the documentation below, at least three Elasticsearch nodes are required. https://www.elastic.co/guide/en/elasticsearch/reference/6.5/modules-node.html

Should each Elasticsearch container run on a different Docker host?

The error below was the cause. After increasing vm.max_map_count to 262144 with sysctl, the cluster works fine.

elasticsearch_1  | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
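The limit must be raised on each Docker host itself, not inside the container. This is the standard fix for this Elasticsearch bootstrap check:

```shell
# Raise the limit immediately (resets on reboot)
sudo sysctl -w vm.max_map_count=262144

# Persist the setting across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
```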

Now number_of_nodes is 2.

{
  "cluster_name" : "es-cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 35,
  "active_shards" : 37,
  "relocating_shards" : 0,
  "initializing_shards" : 2,
  "unassigned_shards" : 31,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 52.85714285714286
}
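To double-check that both nodes have joined, the _cat/nodes API can also be queried. Since xpack.security.enabled is true in the configs above, credentials are needed (the elastic user and password here are placeholders):

```shell
# Lists one row per cluster member, with roles and the elected master marked
curl -u elastic:<password> "http://host1:9200/_cat/nodes?v"
```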
