
Best way to manage code changes for application in Amazon EC2 with Auto Scaling

I have multiple instances running behind a load balancer with Auto Scaling in AWS.

Now, if I have to push code changes to these instances, and to any new instances that might launch because of the Auto Scaling policy, what's the best way to do this?

The only way I am aware of is to create a new AMI with the latest code, modify the Auto Scaling launch configuration to use this new AMI, and then terminate the existing instances. But this might involve a longer downtime, and I am not sure whether the whole process can be automated.

Any pointers in this direction will be highly appreciated.

The way I manage code changes is to have a master server that I edit the code on. All the slave servers that scale then rsync over SSH via a cron job to bring their files up to date. All the servers sync every 30 minutes, plus or minus a few random seconds, so they don't all hit the master at the exact same moment. (Note: I leave the master out of the load balancer so users are always served a consistent version of the code.) Similarly, when I decide to publish my code changes, I rsync from my test server to the master server.

Using this approach, you merely have to put the sync command in the start-up sequence, and you don't have to worry about what state the code was in on the slave image, as it will be brought up to date after it boots.
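A cron-based sync like the one described might look roughly like this. This is only a sketch: the `deploy` user, the master hostname `master.example.com`, and the paths are placeholders, and it assumes key-based SSH from each slave to the master is already set up.

```shell
# Hypothetical crontab entry on each slave: pull from the master every
# 30 minutes, sleeping a random 0-59 seconds first so the slaves do not
# all hit the master at the exact same second.
*/30 * * * * sleep $((RANDOM % 60)) && rsync -az --delete -e ssh deploy@master.example.com:/var/www/app/ /var/www/app/
```

The same command (without the cron schedule) can be placed in the instance's start-up scripts so a freshly launched slave syncs before it begins serving traffic.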

EDIT: We have since stopped using this method and moved to AWS CodeDeploy, a service built for exactly this purpose:

http://aws.amazon.com/codedeploy/

Hope this helps.

We configure our Launch Configuration to use a "clean" off-the-shelf AMI - we use these: http://aws.amazon.com/amazon-linux-ami/

One of the features of these AMIs is CloudInit - https://help.ubuntu.com/community/CloudInit

This feature enables us to deliver some data to the newly spawned, plain-vanilla EC2 instance. Specifically, we give the instance a script to run.
The script (in a nutshell) does the following:

  1. Upgrades its installed packages (to make sure all security patches and bug fixes are applied).
  2. Installs Git and Puppet.
  3. Clones a Git repo from GitHub.
  4. Applies a Puppet manifest (which is part of the repo) to configure itself. Puppet installs the rest of the needed software modules.
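A user-data script implementing the four steps above might look roughly like this on Amazon Linux. This is a sketch, not the authors' actual script; the repository URL and manifest path are placeholders.

```shell
#!/bin/bash
# Hypothetical cloud-init user-data bootstrap script.

# 1. Apply all pending security patches and bug fixes.
yum -y update

# 2. Install Git and Puppet.
yum -y install git puppet

# 3. Clone the configuration repo from GitHub (placeholder URL).
git clone https://github.com/example/infra.git /opt/infra

# 4. Let Puppet configure the instance and install the remaining software.
puppet apply /opt/infra/manifests/site.pp
```

Cloud-init runs this script once, on first boot, when it is supplied as the instance's user data at launch.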

It does take longer than booting from a pre-configured AMI, but we skip the process of actually baking these AMIs every time we update the software (a couple of times a week), and the servers are always "clean": no manual patches, and all software modules are up to date.

Now, to upgrade the software, we use a local boto script. The script kills the servers running the old code one by one, and the Auto Scaling mechanism launches new (and upgraded) servers in their place.

Make sure to use as-terminate-instance-in-auto-scaling-group (or the equivalent Auto Scaling API call) rather than ec2-terminate-instance, because the latter causes the ELB to keep sending traffic to the shutting-down instance until it fails its health check.
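With the modern boto3 SDK, a rolling-replacement script along these lines might look like the following sketch. The group name is a placeholder, and `TerminateInstanceInAutoScalingGroup` with `ShouldDecrementDesiredCapacity=False` is the API-level equivalent of the `as-terminate-instance-in-auto-scaling-group` command mentioned above.

```python
import time

def rolling_replace(asg_client, group_name, wait_seconds=120):
    """Terminate a group's instances one by one. Because the desired
    capacity is not decremented, Auto Scaling launches an upgraded
    replacement for each terminated instance."""
    group = asg_client.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group_name])["AutoScalingGroups"][0]
    replaced = []
    for inst in group["Instances"]:
        asg_client.terminate_instance_in_auto_scaling_group(
            InstanceId=inst["InstanceId"],
            ShouldDecrementDesiredCapacity=False)
        replaced.append(inst["InstanceId"])
        time.sleep(wait_seconds)  # give the replacement time to boot
    return replaced

# Typical use (requires AWS credentials; "web-asg" is a placeholder):
#   import boto3
#   rolling_replace(boto3.client("autoscaling"), "web-asg")
```

A fixed sleep is the simplest pacing; a more careful script would poll the group until the replacement instance reports `InService` before terminating the next one.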

Interesting related blog post: http://blog.codento.com/2012/02/hello-ec2-part-1-bootstrapping-instances-with-cloud-init-git-and-puppet/

It appears you can also manually double the Auto Scaling group size; the group will create EC2 instances using the AMI from the current launch configuration. If you then decrease the group back to its previous size, the old instances will be terminated and only the instances created from the new AMI will survive.
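With the AWS CLI, that double-then-shrink trick might look like the following sketch. The group name and sizes are placeholders, and it assumes the launch configuration has already been updated to the new AMI; the default termination policy prefers instances from the older launch configuration, which is what makes the shrink step remove the old code.

```shell
# Double the group from 4 to 8 so new-AMI instances launch alongside
# the old ones ("web-asg" is a hypothetical group name).
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name web-asg --desired-capacity 8

# ...wait for the new instances to pass health checks, then shrink back;
# the old-launch-configuration instances are terminated first.
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name web-asg --desired-capacity 4
```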
