
Clustering Dockerized Elasticsearch with multiple Docker Hosts

I am trying to form a cluster with docker-compose. I have two Elasticsearch Docker containers, each deployed on a different Docker host.

docker version: 18.06.3-ce
elasticsearch : 6.5.2

docker-compose.yml for docker-container-1

 services:
   elasticsearch:
     restart: always
     hostname: elasticsearch
     image: docker-elk/elasticsearch:1.0.0
     build:
       context: elasticsearch
       dockerfile: Dockerfile
     environment:
       discovery.type: zen
     ports:
       - 9200:9200
       - 9300:9300
     env_file:
       - ./elasticsearch/elasticsearch.env
     volumes:
       - elasticsearch_data:/usr/share/elasticsearch/data

docker-compose.yml for docker-container-2

 services:
   elasticsearch:
     restart: always
     hostname: elasticsearch
     image: docker-elk/elasticsearch:1.0.0
     build:
       context: elasticsearch
       dockerfile: Dockerfile
     environment:
       discovery.type: zen
     ports:
       - 9200:9200
       - 9300:9300
     env_file:
       - ./elasticsearch/elasticsearch.env
     volumes:
       - elasticsearch_data:/usr/share/elasticsearch/data

elasticsearch.yml for elasticsearch-docker-container-1 on Docker Host 1

 xpack.security.enabled: true
 cluster.name: es-cluster
 node.name: es1
 network.host: 0.0.0.0
 node.master: true
 node.data: true
 transport.tcp.port: 9300
 path.data: /usr/share/elasticsearch/data
 path.logs: /usr/share/elasticsearch/logs
 discovery.zen.minimum_master_nodes: 2
 gateway.recover_after_nodes: 1
 discovery.zen.ping.unicast.hosts: ["host1:9300", "host2:9300","host1:9200", "host2:9200"]
 network.publish_host: host1

elasticsearch.yml for elasticsearch-docker-container-2 on Docker Host 2

 xpack.security.enabled: true
 cluster.name: es-cluster
 node.name: es2
 network.host: 0.0.0.0
 node.master: true
 node.data: true
 transport.tcp.port: 9300
 path.data: /usr/share/elasticsearch/data
 path.logs: /usr/share/elasticsearch/logs
 discovery.zen.minimum_master_nodes: 2
 gateway.recover_after_nodes: 1
 discovery.zen.ping.unicast.hosts: ["host1:9300", "host2:9300","host1:9200", "host2:9200"]
 network.publish_host: host2
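One detail worth noting about the configs above: zen discovery pings go over the transport port only, so listing the HTTP port (9200) in the unicast hosts list is unnecessary. A trimmed version (assuming host1/host2 resolve to the two Docker hosts) would be:

```yaml
# Zen discovery uses the transport layer; only port 9300 is needed here.
discovery.zen.ping.unicast.hosts: ["host1:9300", "host2:9300"]
```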

Here is the result of GET /_cluster/health?pretty, which shows only one node.

 {
   "cluster_name" : "dps_geocluster",
   "status" : "yellow",
   "timed_out" : false,
   "number_of_nodes" : 1,
   "number_of_data_nodes" : 1,
   "active_primary_shards" : 33,
   "active_shards" : 33,
   "relocating_shards" : 0,
   "initializing_shards" : 0,
   "unassigned_shards" : 30,
   "delayed_unassigned_shards" : 0,
   "number_of_pending_tasks" : 0,
   "number_of_in_flight_fetch" : 0,
   "task_max_waiting_in_queue_millis" : 0,
   "active_shards_percent_as_number" : 52.38095238095239
 }
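For scripted checks rather than eyeballing the JSON, the health response can be parsed and the node count inspected. A minimal sketch in Python, using an abridged copy of the response above (the fields shown are taken from it):

```python
import json

# Abridged health response, as returned by GET /_cluster/health?pretty.
resp_text = """
{
  "cluster_name": "dps_geocluster",
  "status": "yellow",
  "number_of_nodes": 1,
  "number_of_data_nodes": 1
}
"""

health = json.loads(resp_text)
# With only one node joined, the two-node cluster has clearly not formed.
if health["number_of_nodes"] < 2:
    print("cluster not formed: only", health["number_of_nodes"], "node(s)")
```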

According to the documentation below, at least three elasticsearch nodes are needed. https://www.elastic.co/guide/zh-CN/elasticsearch/reference/6.5/modules-node.html
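The discovery.zen.minimum_master_nodes value in the configs above follows the usual zen-discovery quorum rule, (master-eligible nodes / 2) + 1, which is what the three-node recommendation comes from. A quick check:

```python
def zen_quorum(master_eligible_nodes: int) -> int:
    # Quorum rule for discovery.zen.minimum_master_nodes:
    # a strict majority of the master-eligible nodes.
    return master_eligible_nodes // 2 + 1

for n in (2, 3, 5):
    print(n, "master-eligible ->", "quorum", zen_quorum(n))
```

With only two master-eligible nodes the quorum equals the node count, so losing either node leaves the cluster unable to elect a master; three nodes give the same quorum of 2 while tolerating one failure.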

Should each elasticsearch container be located on a different Docker host?

The following was the cause of the error. After increasing vm.max_map_count to 262144 with sysctl, it works correctly.

elasticsearch_1  | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
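The kernel setting can be raised on each Docker host like this (run on the host itself, not inside the container; requires root):

```shell
# One-off change on the running host:
sysctl -w vm.max_map_count=262144

# Persist the setting across reboots:
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
```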

Now number_of_nodes is 2.

{
  "cluster_name" : "es-cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 35,
  "active_shards" : 37,
  "relocating_shards" : 0,
  "initializing_shards" : 2,
  "unassigned_shards" : 31,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 52.85714285714286
}
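The status is still yellow because 31 shards remain unassigned; the reported active_shards_percent_as_number is simply active shards divided by the total (active + initializing + unassigned), which can be verified from the numbers above:

```python
# Shard counts from the cluster health response above.
active_shards = 37
initializing_shards = 2
unassigned_shards = 31

total = active_shards + initializing_shards + unassigned_shards
pct = 100 * active_shards / total
print(pct)  # ~52.857, matching active_shards_percent_as_number
```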
