Granularity of Docker containers

When designing application infrastructure and architecture using Docker, is it best practice to create one container per "service", or one container for each process within a "service"?

For example, a distributed PHP application that uses Nginx, PHP-FPM, Redis, MySQL, and Elasticsearch.

Service containers:

  • Nginx + App + PHP-FPM (complete app as a "service" container)
  • Redis
  • MySQL

Process containers:

  • Nginx
  • App
  • PHP-FPM
  • Redis
  • MySQL

From my perspective, the "service" container approach seems more maintainable, as managing so many discrete containers, one per process, could become cumbersome.
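For reference, the process-per-container layout listed above could be sketched as a `docker-compose.yml`. This is a minimal illustration, not a production setup: the service names, image tags, and the shared-volume arrangement between Nginx and PHP-FPM are all assumptions.

```yaml
version: "3.8"

services:
  nginx:
    image: nginx:1.25
    ports:
      - "80:80"
    volumes:
      - ./app:/var/www/html:ro   # app code served read-only; PHP requests proxied to php-fpm
    depends_on:
      - php-fpm

  php-fpm:
    image: php:8.2-fpm
    volumes:
      - ./app:/var/www/html      # same code, mounted where php-fpm executes it

  redis:
    image: redis:7

  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder; use a secret in practice

  elasticsearch:
    image: elasticsearch:8.11.0
    environment:
      - discovery.type=single-node   # single-node mode for local development
```

With this layout, one process can be upgraded or rolled back without touching the others, e.g. bump the `redis` image tag and run `docker compose up -d --no-deps redis`.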

Containers are all about isolation (of filesystem, CPU, and memory).
That also includes isolation of processes: one per container.

One process per container is easier to debug in case of failure (as opposed to connecting to a huge container with tons of processes and interleaved logs).
The upgrade/rollback path is also easier: you only stop/restart the one container for the process you want to change.

Plus, whenever you have multiple processes running in one container, you must use an image specialized in dealing with how those processes are reaped and stopped: see the "PID 1 zombie reaping issue".
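If you do end up with more than one process per container, a common mitigation (a sketch, not something the answer prescribes) is to run a minimal init such as tini as PID 1, so that zombie processes are reaped and signals are forwarded to the actual workload. The base image and package name below assume a Debian-based image:

```dockerfile
FROM php:8.2-fpm

# tini acts as PID 1: it reaps zombie processes and forwards signals
# (the "tini" package is available on Debian-based images)
RUN apt-get update \
    && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*

# Run the real workload as a child of tini
ENTRYPOINT ["tini", "--"]
CMD ["php-fpm"]
```

Alternatively, recent Docker versions can inject tini for you with `docker run --init`, without changing the image.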
