
Elastic in docker stack/swarm

I have a swarm of two nodes:

[ra@speechanalytics-test ~]$ docker node ls
ID                            HOSTNAME                  STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
mlwwmkdlzbv0zlapqe1veq3uq     speechanalytics-preprod   Ready               Active                                  18.09.3
se717p88485s22s715rdir9x2 *   speechanalytics-test      Ready               Active              Leader              18.09.3

I am trying to run an Elasticsearch container in the stack. Here is my docker-compose.yml file:

version: '3.4'
services:
  elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
    environment:
      - cluster.name=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    deploy:
      placement:
        constraints:
          - node.hostname==speechanalytics-preprod

volumes:
  esdata:
    driver: local

After starting it with docker stack:

docker stack deploy preprod -c docker-compose.yml

the container crashes within 20 seconds:

docker service logs preprod_elastic 
...
   | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   | OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=0
   | [2019-04-03T16:41:30,044][WARN ][o.e.b.JNANatives         ] [unknown] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
   | [2019-04-03T16:41:30,049][WARN ][o.e.b.JNANatives         ] [unknown] This can result in part of the JVM being swapped out.
   | [2019-04-03T16:41:30,049][WARN ][o.e.b.JNANatives         ] [unknown] Increase RLIMIT_MEMLOCK, soft limit: 16777216, hard limit: 16777216
   | [2019-04-03T16:41:30,050][WARN ][o.e.b.JNANatives         ] [unknown] These can be adjusted by modifying /etc/security/limits.conf, for example:
   | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   |     # allow user 'elasticsearch' mlockall
   | OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=0
   |     elasticsearch soft memlock unlimited
   | [2019-04-03T16:41:02,949][WARN ][o.e.b.JNANatives         ] [unknown] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
   |     elasticsearch hard memlock unlimited
   | [2019-04-03T16:41:02,954][WARN ][o.e.b.JNANatives         ] [unknown] This can result in part of the JVM being swapped out.
   | [2019-04-03T16:41:30,050][WARN ][o.e.b.JNANatives         ] [unknown] If you are logged in interactively, you will have to re-login for the new limits to take effect.
   | [2019-04-03T16:41:02,954][WARN ][o.e.b.JNANatives         ] [unknown] Increase RLIMIT_MEMLOCK, soft limit: 16777216, hard limit: 16777216
preprod

On both nodes I have:

ra@speechanalytics-preprod:~$ sysctl vm.max_map_count
vm.max_map_count = 262144

Any ideas how to fix this?

The memlock errors you're seeing from Elasticsearch are a common issue, not unique to Docker; they occur when Elasticsearch is told to lock its memory but is unable to do so. You can circumvent the error by removing the following environment variable from the docker-compose.yml file:

- bootstrap.memory_lock=true
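With that line removed, the compose file would look roughly like this (a sketch of the same file; the ulimits block is also dropped here, since Swarm ignores it anyway, as discussed below):

```yaml
version: '3.4'
services:
  elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
    environment:
      - cluster.name=single-node
      # bootstrap.memory_lock removed: Elasticsearch no longer tries
      # (and fails) to mlock its heap under Swarm's default limits
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - esdata:/usr/share/elasticsearch/data
    deploy:
      placement:
        constraints:
          - node.hostname==speechanalytics-preprod

volumes:
  esdata:
    driver: local
```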

Memlock may be used with Docker Swarm Mode, but with some caveats.

Not all options that work with docker-compose (Docker Compose) work with docker stack deploy (Docker Swarm Mode), and vice versa, despite both sharing the docker-compose YAML syntax. One such option is ulimits:, which, when used with docker stack deploy, will be ignored with a warning message, like so:

Ignoring unsupported options: ulimits

My guess is that with your docker-compose.yml file, Elasticsearch runs fine with docker-compose up, but not with docker stack deploy.

With Docker Swarm Mode, by default, the Elasticsearch instance as you have defined it will have trouble with memlock. Currently, setting ulimits for docker swarm services is not yet officially supported. There are ways to get around the issue, though.

If the host is Ubuntu, unlimited memlock can be enabled across the docker service (see here and here). This can be achieved via the commands:

echo -e "[Service]\nLimitMEMLOCK=infinity" | SYSTEMD_EDITOR=tee systemctl edit docker.service
systemctl daemon-reload
systemctl restart docker
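The `systemctl edit` invocation above writes a systemd drop-in override; the equivalent file created by hand would look like this (path assumed from systemd's standard drop-in layout):

```
# /etc/systemd/system/docker.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

After creating it, run `systemctl daemon-reload` and `systemctl restart docker` as above. The effective limit inside containers can then be checked with something like `docker run --rm busybox sh -c 'ulimit -l'`, which should report `unlimited`.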

However, setting memlock to infinity is not without its drawbacks, as spelt out by Elastic themselves here.

Based on my testing, the solution works on Docker 18.06, but not on 18.09. Given the inconsistency and the possibility of Elasticsearch failing to start, the better option would be to not use memlock with Elasticsearch when deploying on Swarm. Instead, you can opt for any of the other methods mentioned in the Elasticsearch docs to achieve similar results.
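As one example of those alternatives, the Elasticsearch "disable swapping" guidance suggests minimizing host swapping instead of locking memory; a sketch (the file path is an assumption, any sysctl drop-in location works):

```
# /etc/sysctl.d/99-elasticsearch.conf  (hypothetical path)
# Reduce the kernel's tendency to swap without locking memory outright
vm.swappiness = 1
```

Load it with `sudo sysctl --system`; alternatively, swap can be disabled entirely on a dedicated host with `sudo swapoff -a`.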
