
Azure deployment slots and database migrations

TL;DR:

How can an app running in a staging slot know it is in staging, so that it connects to a test database and runs migrations against it? And how can that same app automatically become aware it has been swapped into the live production slot and is now responsible for live business operations (so it can switch to the live DB, migrate the live DB, etc.)?

Long version:

The topic is covered partly by this question: Azure Web App deployment slots with database migration

But I didn't really get my answer there.

My app uses FluentMigrator, initiated by an event/code when the app is initialized, to run database migrations. MSBuild used to do it, but now I'm fairly certain it's called programmatically.

It seems most sensible for:

  • production to have a slot-based app setting that points the code at the production db
  • staging to have a slot-based app setting that points the code at the staging db

I can't see any other way for production to remain functional while staging is being proven: if staging has migrations that would wreck the DB from production's point of view, the two DBs have to be separate.
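In App Service, an app setting marked as a "deployment slot setting" sticks to its slot and is surfaced to the app as an environment variable, so each slot can carry its own connection string. A minimal sketch of the selection logic (the setting name `DB_CONNECTION_STRING` and the values are hypothetical, not anything Azure defines):

```python
import os

def get_connection_string(default: str = "Server=localhost;Database=dev") -> str:
    """Return the connection string for whichever slot this process runs in.

    App Service injects slot-sticky app settings as environment variables,
    so the staging slot and the production slot each see their own value
    without any code changes between them.
    """
    return os.environ.get("DB_CONNECTION_STRING", default)

if __name__ == "__main__":
    # Simulate what App Service would inject into the staging slot.
    os.environ["DB_CONNECTION_STRING"] = "Server=staging-sql;Database=app_staging"
    print(get_connection_string())
```

The same binary deployed to both slots then behaves differently purely because of the sticky setting, which is the mechanism the bullet points above rely on.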

So if the DBs are separate, surely the only (nearly) zero-downtime way of switching the world to the code in the staging slot is if the swap causes the app to reinitialise itself pointing at the production DB; FluentMigrator (which should also run again) can then apply the same set of migrations to production, and the code from staging runs the business on the production DB for a while..

..until the production codebase is updated and the swap back occurs. Production code never migrates the production DB because, by the time the new version in production fires up, the DB has already been updated by the staging code.
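Changing a slot-sticky setting (or swapping slots) restarts the worker process, so running migrations idempotently at startup gives exactly the behaviour described above: whichever DB the slot's settings point at gets migrated on boot. FluentMigrator already works this way by tracking applied versions in a VersionInfo table; the same idea in miniature, sketched against SQLite (table names and migrations are illustrative):

```python
import sqlite3

# Ordered migrations keyed by version, mimicking FluentMigrator's model.
MIGRATIONS = {
    1: "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customers ADD COLUMN email TEXT",
}

def migrate(conn: sqlite3.Connection) -> list[int]:
    """Apply any migrations not yet recorded; safe to call on every startup."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS version_info (version INTEGER PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM version_info")}
    ran = []
    for version in sorted(MIGRATIONS):
        if version not in applied:
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO version_info (version) VALUES (?)", (version,))
            ran.append(version)
    conn.commit()
    return ran
```

Because the second call is a no-op, the post-swap restart can safely re-run the migrator against whichever database the slot now points at.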

The only other way I foresee things working out is to have two DBs, two slots, and never perform a swap: you deploy to staging, it updates the staging DB, you test and prove it good, you deploy to production, it updates the production DB, you verify the app is working.. and the world has suffered a minor amount of downtime while prod was building (or a major amount if the build failed).

Is there a mechanism for the former? When a swap occurs, how could the app be pointed to a new DB and how could migrations be run again?

If the latter is the only way, then deployment slots might as well just be separate web apps, right? A web app for staging and a web app for prod, to avoid the confusion slots cause because of how they are represented in the portal.

It is possible to have a single production database shared by the staging and production Azure App Service slots, and still have zero-downtime deployments.

To do this, you need to ensure that all your migrations are backwards compatible, such that the current and new versions of the web app can run simultaneously with the same database. This means you can deploy to the staging slot, perform your smoke tests against the production database, then swap the staging slot for the production slot.

Some rules that allow this to work:

  • New columns should be nullable or have a default value set
  • Columns cannot be dropped
  • Columns cannot be renamed
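For example, an additive change that satisfies all three rules: the old code's INSERTs and SELECTs keep working because the new column is nullable (sketched against SQLite; the `orders` table and `discount_code` column are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Schema the currently-live (old) code was built against.
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

# Backwards-compatible migration run from the staging slot:
# the new column is nullable, so old code is unaffected.
conn.execute("ALTER TABLE orders ADD COLUMN discount_code TEXT")

# Old code, unaware of discount_code, still works against the migrated schema.
conn.execute("INSERT INTO orders (total) VALUES (9.99)")
row = conn.execute("SELECT id, total FROM orders").fetchone()
print(row)  # -> (1, 9.99)
```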

When you do need to make a destructive change, such as dropping a column, you need two releases:

  1. A release that removes the dependency from the web app
  2. A release that performs the change to the database schema
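The two releases might look like this, sketched against SQLite (using the portable rebuild-the-table pattern rather than `ALTER TABLE ... DROP COLUMN`, which only newer SQLite versions support; the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, legacy_code TEXT)"
)
conn.execute("INSERT INTO orders (total, legacy_code) VALUES (5.0, 'OLD')")

# Release 1 ships application code that no longer reads or writes legacy_code.
# Only once that release is live everywhere does release 2 drop the column:
conn.executescript("""
    CREATE TABLE orders_new (id INTEGER PRIMARY KEY, total REAL);
    INSERT INTO orders_new (id, total) SELECT id, total FROM orders;
    DROP TABLE orders;
    ALTER TABLE orders_new RENAME TO orders;
""")

cols = [c[1] for c in conn.execute("PRAGMA table_info(orders)")]
print(cols)  # -> ['id', 'total']
```

Because no running code references the column at the moment it disappears, both the pre-swap and post-swap versions stay compatible with the schema at every step.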

This sounds like a pain, but in practice you probably won't find yourself making destructive changes that often.

I have recently been solving a similar problem. My situation is easier in that I only deploy to content-holding databases, while the data-gathering ones stand aside untouched (it will be challenging once we have to upgrade them).

However, we came up with a solution that seems to us the best approach. We do slot deployments with two sets of DBs in an elastic pool (which adds almost no extra cost), with each slot pointing to one set. When we deploy, we update the code and the slot's databases, test to verify everything works, and then swap both the code and the DB. After the swap, the code in the deployment slot goes live pointing at the DBs we tested against, while the previously live code and DB set goes offline, ready for the next deployment.

We can afford this comfort because the data in the DBs is purely under our control, so we can prevent writes to the DB between the deployment and the swap.

However, if your DB is gathering data from the live application, you might consider turning the app off for the duration of the upgrade: merge your live data into your staging DB, upgrade the staging DB (which now includes the latest data), and then do the swap. This seems to me the safest approach to prevent data loss. You could also migrate your staging DB first, then turn the app off and merge the data into the new schema; it depends on the character of your app, and without knowing your exact situation it is hard to give generic advice here.

Hope these ideas help you at least a bit; if you have figured out a better approach, I'm keen to hear about it.
