
Mapping Docker Volumes in a Cluster/Docker-Swarm

I am running Docker Swarm with 3 manager and 3 worker nodes. On this swarm, I have an Elasticsearch container which reads data from multiple log files and then writes the data into a directory. Later it reads data from this directory and shows me the logs on a UI.

Now the problem is that I am running only one instance of this Elasticsearch container, and if for some reason it goes down, Docker Swarm starts it on another machine. Since I have 6 machines, I have created the particular directory on all of them, but whenever I start the Docker stack, the ES container starts reading/writing the directory on whichever machine it happens to be running on.

Is there a way that we can

  • Force Docker Swarm to run a container on a particular machine

or

  • Map the volume to a shared/network drive

Both are possible.

Force Docker Swarm to run a container on a particular machine

Add the --constraint flag when executing docker service create. Some introduction.
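For instance, a placement constraint can pin the service to a node carrying a specific label (node and service names here are illustrative, not from the question):

```shell
# Label the node that should host Elasticsearch (run on a manager node).
docker node update --label-add es=true worker-1

# Create the service with a placement constraint so the scheduler
# only ever places the task on nodes labelled es=true.
docker service create \
  --name elasticsearch \
  --constraint 'node.labels.es == true' \
  --replicas 1 \
  elasticsearch:7.17.9
```

Note that if the labelled node goes down, the task stays pending instead of being rescheduled elsewhere, so this trades availability for data locality.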

Map the volume to a shared/network drive

Use a Docker volume with a driver that supports writing files to an external storage system like NFS or Amazon S3. More introduction.
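As a sketch using the built-in local driver's NFS support (the NFS server address and export path are placeholders you would replace with your own):

```shell
# Create a named volume backed by an NFS export, using the local
# driver's nfs options. The address and export path are hypothetical.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw \
  --opt device=:/exports/es-data \
  es-data

# Mount that volume into the service; because the data lives on the
# NFS server, it is reachable no matter which node runs the task.
docker service create \
  --name elasticsearch \
  --mount type=volume,source=es-data,target=/usr/share/elasticsearch/data \
  --replicas 1 \
  elasticsearch:7.17.9
```

With this approach the named volume must be creatable on every node (each node mounts the same NFS export), so the container can move freely between machines without losing its data.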

