
Docker Swarm with data: shared volume vs clustering vs single instance

I am taking my first steps with Docker Swarm and wonder how to deal with services that use persistent data, like Redis, Elasticsearch, or a database.

I found a lot of tutorials on how to configure redis/elasticsearch/database clusters with Docker Swarm — but isn't it easier to use shared storage? E.g., I work with Azure, so I could simply use a single Azure File Storage share as the redis/elasticsearch/database volume and let all my nodes mount it. Is this an acceptable approach, or are there significant disadvantages (for example, when two or more database instances try to write to that storage at the same time)?

Is it recommended at all to run such "data" services on every node? Or should I use Docker Swarm just for frontend services and run a single redis/elasticsearch/database instance?

If you want a shared folder to be accessed by more than one application instance, the application itself needs to be designed to avoid data corruption: no given file may be written at the same time by more than one instance, which is typically enforced with mutex locks.
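To make the mutex idea concrete, here is a minimal sketch in Python using POSIX `fcntl.flock` to serialize writers to one file. This only illustrates the concept on a single host; such advisory locks are often unreliable over network shares like Azure Files, which is exactly why "just mount the same share from every node" is risky.

```python
import fcntl
import os
import tempfile

def append_with_lock(path, line):
    # Take an exclusive (mutex-style) lock before writing, so
    # concurrent writers cannot interleave or corrupt the file.
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until the lock is free
        f.write(line + "\n")
        f.flush()
        fcntl.flock(f, fcntl.LOCK_UN)   # release for the next writer

path = os.path.join(tempfile.mkdtemp(), "shared.log")
for i in range(3):
    append_with_lock(path, f"record-{i}")

with open(path) as f:
    print(f.read().splitlines())
```

A real database has to coordinate far more than whole-file appends (pages, indexes, write-ahead logs), which is why it cannot simply be pointed at shared storage from several instances.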

No database that I know of is designed this way, so you can't safely run multiple instances of one against the same shared storage.

What they normally do instead is connect all database instances into a cluster, with synchronization handled at the software level (replication).
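The other option raised in the question, a single database service alongside swarmed frontends, is straightforward to express in a stack file. A minimal sketch, assuming a Redis service and a hypothetical node hostname `data-node-1`: one replica, pinned to the node that holds a local named volume, so there is exactly one writer and no shared-storage contention.

```yaml
version: "3.8"
services:
  redis:
    image: redis:7
    deploy:
      replicas: 1                    # exactly one instance, one writer
      placement:
        constraints:
          - node.hostname == data-node-1   # assumed node name; pin to the node with the data
    volumes:
      - redis-data:/data             # local named volume, not a shared file share
volumes:
  redis-data:
```

If you later need high availability for the data tier, switch to the database's own clustering/replication mechanism rather than scaling `replicas` over shared storage.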
