
When using Continuous or Automated Deployment, how do you deploy databases?

I'm looking at implementing TeamCity and Octopus Deploy for CI and deployment on demand. However, database deployment is going to be tricky, as many are old .NET applications with messy databases.

Redgate seems to have a nice plug-in for TeamCity, but the price will probably be a stumbling block.

What do you use? I'm happy to execute scripts, but it's the comparison aspect (i.e. what has changed) that I'm struggling with.

We utilize a free tool called RoundhousE for handling database changes with our project, and it was rather easy to use it with Octopus Deploy.

We created a new project in our solution called DatabaseMigration, included the RoundhousE exe in the project along with a folder where we keep the db change scripts for RoundhousE, and then took advantage of how Octopus can call PowerShell scripts before, during, and after deployment (PreDeploy.ps1, Deploy.ps1, and PostDeploy.ps1 respectively). We added a Deploy.ps1 to the project as well, with the following in it:

    $roundhouse_exe_path = ".\rh.exe"
    $scripts_dir = ".\Databases\DatabaseName"
    $roundhouse_output_dir = ".\output"

    if ($OctopusParameters) {
        $env = $OctopusParameters["RoundhousE.ENV"]
        $db_server = $OctopusParameters["SqlServerInstance"]
        $db_name = $OctopusParameters["DatabaseName"]
    } else {
        $env = "LOCAL"
        $db_server = ".\SqlExpress"
        $db_name = "DatabaseName"
    }

    & $roundhouse_exe_path -s $db_server -d $db_name -f $scripts_dir --env $env --silent -o $roundhouse_output_dir

In there you can see where we check for any Octopus variables (parameters) that are passed in when Octopus runs the deploy script; otherwise we have some default values we use, and then we simply call the RoundhousE executable.

Then you just need to have that project as part of what gets packaged for Octopus, and add a step in Octopus to deploy that package; it will execute as part of each deployment.

We've looked at the RedGate solution and pretty much reached the same conclusion you have; unfortunately, it's the cost that is putting us off that route.

The only things I can think of are to generate version-controlled DB migration scripts based upon your existing database, and then execute these as part of your build process. If you're looking at .NET projects in future (that don't use a CMS), you could potentially consider using Entity Framework Code First Migrations.
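As a rough sketch of that first idea (the folder layout, file naming, and connection details here are hypothetical; it assumes the `sqlcmd` utility that ships with SQL Server is on the PATH), a build step could apply version-numbered scripts in filename order:

```powershell
# Apply version-controlled migration scripts in filename order.
# Assumes scripts are named so they sort correctly, e.g.
# 0001_create_tables.sql, 0002_add_index.sql, ...
$scripts_dir = ".\Migrations"      # hypothetical folder of .sql scripts
$db_server   = ".\SqlExpress"
$db_name     = "DatabaseName"

Get-ChildItem -Path $scripts_dir -Filter *.sql |
    Sort-Object Name |
    ForEach-Object {
        Write-Host "Applying $($_.Name)..."
        # -b makes sqlcmd return a non-zero exit code on SQL errors
        & sqlcmd -S $db_server -d $db_name -b -i $_.FullName
        if ($LASTEXITCODE -ne 0) {
            throw "Migration $($_.Name) failed."
        }
    }
```

Note this naive sketch re-runs every script on every build; tools like RoundhousE add the missing piece by tracking which scripts have already run in a version table, so without such a tool the scripts would need to be idempotent.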

I remember looking into this a while back, and for me it seems there's a whole lot of trust you'd have to put into this sort of process. Auto-deploying to a Development or Testing server isn't so bad, as the data is probably replaceable... but the idea of auto-updating a UAT or Production server might send the willies up the backs of an Operations team, who might be responsible for the database, or at least for restoring it if it wasn't quite right.

Having said that, I do think it's the way to go, though, as it's far too easy to be scared of database deployment scripts, and that's when things get forgotten or missed.

I seem to remember looking at using Red Gate's SQL Compare and SQL Data Compare tools, as (I think) there was a command-line way into them, which would work well with scripted deployment processes like TeamCity, CruiseControl.Net, etc.

The risk and complexity comes in more when using relational databases. In a NoSQL database where everything is a "document", I guess continuous deployment is not such a concern. Some objects will have the "old" data structure until they are updated via the newly released code. In this situation your code would potentially need to be able to support different data structures. Missing properties, or those with a different type, should probably be covered in a well-written, defensively coded application anyway.
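As a small illustration of that defensive style (the document shape and the "Status" property are made up for this example), code reading an "old" document can supply a default for a property that only newer documents have:

```powershell
# A document saved by an older release: no "Status" property yet.
$old_json = '{ "Id": 1, "Name": "Widget" }'
$doc = $old_json | ConvertFrom-Json

# Defensively default properties the current code expects
# but which older documents lack.
$status = if ($doc.PSObject.Properties['Status']) {
    $doc.Status
} else {
    'Active'   # assumed default for documents written before the schema change
}

Write-Output $status
```

The same pattern (check for the property, fall back to a default) applies whichever driver or document store is in use; the point is that the application, not a migration script, absorbs the schema difference.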

I can see the risk in running scripts against the production database, however the point of CI and Continuous Delivery is that these scripts will be run and tested in other environments first to iron out any "gotchas" :-)

This doesn't reduce the amount of finger crossing and wincing when you actually push the button to deploy though!

Having database deploy automation is a real challenge, especially when trying to follow the build-once-deploy-many approach that is used for native application code.

With build once, deploy many, you compile the code, create binaries, and then copy them between environments. From the database point of view, the equivalent is to generate the scripts once and execute them in all environments. This approach doesn't handle merges from different branches, out-of-process changes (a critical fix in production), etc.

What I know works for database deployment automation (disclaimer - I'm working at DBmaestro), as I hear from my customers, is the build-and-deploy-on-demand approach. With this method you build the database delta script as part of the deploy (execute) process. Using baseline-aware analysis, the solution knows whether to generate the deploy script for the change, or to protect the target and not revert it, or to pause and allow you to merge changes and resolve the conflict.

Consider a simple solution we have tried successfully in this thread - How to continuously deliver a SQL-based app?

Disclaimer - I work at CloudMunch

We use Octopus Deploy and database projects in our Visual Studio solution.

  1. The build agent creates a NuGet package using OctoPack, with a dacpac file and publish profiles inside, and pushes it onto the NuGet server.
  2. The release process then utilizes the SqlPackage.exe utility to generate the update script for the release environment and adds it as an artifact to the release.
  3. The previously created script is executed in the next step with the SQLCMD.exe utility.
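Steps 2 and 3 above might look roughly like this in the release scripts (file names, paths, and variable values are hypothetical; `/Action:Script` is SqlPackage.exe's documented mode for generating an upgrade script without running it, and `sqlcmd -i` runs a script file):

```powershell
$db_server = "ProdSqlServer"       # hypothetical target instance
$db_name   = "DatabaseName"

# Step 2: generate (but do not run) the upgrade script for this environment.
& SqlPackage.exe /Action:Script `
    /SourceFile:".\Database.dacpac" `
    /TargetServerName:$db_server `
    /TargetDatabaseName:$db_name `
    /OutputPath:".\upgrade.sql"

# The generated upgrade.sql is what gets attached to the release as an
# artifact, so a human can review it before the next step runs.

# Step 3: execute the reviewed script; -b makes sqlcmd return a non-zero
# exit code if the script fails.
& sqlcmd -S $db_server -d $db_name -b -i ".\upgrade.sql"
```

Splitting generation from execution like this is what makes the manual verification step described below possible.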

This separation of the create and execute steps gives us the possibility of a manual step in between, so that someone can verify the script before it is executed on the Live environment. Not to mention that the script, saved as an artifact in the release, can always be referred to at any later point.

If there were demand, I would provide more details and step scripts.

