
Continuous deployment with databases in Docker Swarm

I'm busy developing an API for my mobile app and I'm now looking at deploying the back-end solution. The components are fairly simple: nginx, a .NET Core app and PostgreSQL for persistence. In case I need to scale quickly, I want to start out with Docker Swarm on a single node. Having a separate data volume for PostgreSQL seems the way to go, but I can't find any recommendations on upgrades and database migrations going forward. When I need to upgrade the PostgreSQL image (a minor upgrade not requiring pg_upgrade), will this have to be a manual operation, or can I manage it through rolling upgrades? The requirement will be to shut down all app instances while this happens. Similarly, how do I manage database migrations, e.g. static data / schema changes? I will need all app instances to exit, complete the migration and then restart. Any ideas greatly appreciated.

So, having gone and done this, I thought I'd post details of what my solution ultimately looked like. Firstly, I used GoCD, a fantastic open source continuous delivery server, to automate the entire delivery from test to production. With Docker it's good to apply the single responsibility principle, so I created separate Docker Swarm stacks as follows:

  • Data: Consists of the database containers, in my case PostgreSQL
  • Data-admin: Hosts containers for cron-based database backups, file backups and a container which handles application DB prerequisites (e.g. creating databases / users)
  • App: A scalable container for the C# app API, as well as a container to manage database migrations using DbUp
  • Web: Hosts a Traefik container - a reverse proxy, configured to route traffic to the app stack
  • Monitoring: Hosts containers for Logstash, Logspout, Kibana, Grafana and Portainer, with the app logging to Elasticsearch and Kibana / Grafana providing visualisations of this data. Portainer supports basic management of the swarm.
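The separation above means each stack can be deployed and updated independently. As a minimal sketch (the compose file and stack names here are illustrative assumptions, not from my actual setup), each stack is just its own `docker stack deploy`:

```shell
# One compose file per stack, deployed as a separate Swarm stack so that
# updating e.g. the app never touches the data or monitoring services.
# File and stack names are hypothetical.
docker stack deploy -c data-stack.yml data
docker stack deploy -c data-admin-stack.yml data-admin
docker stack deploy -c app-stack.yml app
docker stack deploy -c web-stack.yml web
docker stack deploy -c monitoring-stack.yml monitoring
```

Services in different stacks can still talk to each other over a shared attachable overlay network declared as external in each compose file.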

Commits to the master branch of my BitBucket repositories trigger updates to the test environment via GoCD, with a fair amount of bash scripting on CentOS knitting it all together. When testing looks good, I can push-button deploy into production. The images which hydrate the Docker containers are GoCD build artifacts, versioned as part of the build process, so it's easy to revert to a previous version if necessary.
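Because every image carries a version tag, a production deploy (or a rollback) boils down to a service update to a known tag. A sketch, assuming a hypothetical registry, image name and tag scheme:

```shell
# Deploy a specific build of the API. Registry, image and service names
# are assumptions for illustration; the tag comes from the CI build.
VERSION=42   # e.g. the GoCD pipeline counter stamped onto the image
docker service update --image registry.example.com/api:${VERSION} app_api

# Reverting is the same command with an earlier tag, or Swarm's built-in
# rollback to the previous service spec:
docker service rollback app_api
```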

Most of the information I've digested on database upgrades suggests backing up the app databases and restoring them to a container instance running the new version - see https://peter.grman.at/upgrade-postgres-9-container-to-10/ for details. For app database migrations, I created an app-specific 'devops' user which has permission to make schema changes. This user is passed to the app's database migration container and will not work on any other databases. The app container itself is only provided with a reduced-permissions app user.
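The dump-and-restore upgrade pattern can be sketched roughly as follows. Service, container and volume names are assumptions here, and in a real Swarm you'd resolve the running container's name first (e.g. via `docker ps`); the point is the ordering - stop the writers, dump, swap the image, restore, restart:

```shell
# Sketch of a PostgreSQL upgrade via dump/restore (names are hypothetical).

# 1. Stop all app instances so nothing writes during the dump
docker service scale app_api=0

# 2. Dump all databases from the running postgres container
docker exec data_postgres pg_dumpall -U postgres > backup.sql

# 3. Move the service to the new image (for a major upgrade you would
#    also point it at a fresh, empty data volume)
docker service update --image postgres:12 data_postgres

# 4. Restore the dump into the new instance, then bring the app back
docker exec -i data_postgres psql -U postgres < backup.sql
docker service scale app_api=1
```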

Suffice to say there were plenty of complexities to overcome, e.g. inter-service dependencies and startup ordering, but the usual amount of googling and perseverance paid dividends. Now it works like a charm!

Note: If you only want the app container to start serving once the database is guaranteed to be initialised (i.e. all migrations have run), then one pattern to be aware of is exposing a port on which other services can check service health. In my case I had the database migration container listen on a port and respond only once all migrations had been run. It's not enough for the app container to try connecting to the database, as that won't tell it whether the schema is up to date.
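The consuming side of that pattern is a small wait loop in the app's entrypoint. A minimal bash sketch, assuming the migration container's "done" port is reachable at a hypothetical host/port (`migrations:8080`); it uses bash's `/dev/tcp` so no extra tools are needed in the image:

```shell
#!/usr/bin/env bash
# Block until the migration service's readiness port accepts a TCP
# connection, or give up after a timeout. Host, port and timeout are
# illustrative assumptions.
wait_for_migrations() {
  local host="$1" port="$2" timeout="${3:-60}"
  local deadline=$(( $(date +%s) + timeout ))
  # The migration container only opens this port after all migrations ran,
  # so a successful connect means the schema is up to date.
  while ! (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "migrations not ready after ${timeout}s" >&2
      return 1
    fi
    sleep 2
  done
  exec 3<&- 3>&-   # close the probe socket
  return 0
}

# Example entrypoint usage (hypothetical app binary):
# wait_for_migrations migrations 8080 120 && exec ./MyApp
```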

