
How to properly auto-scale an AWS EC2 instance group in a relatively complex infrastructure?

I'm working to migrate our servers to the Amazon Cloud; the reasons are, obviously, the auto-scaling possibilities, costs, services, and more.

So far, I've been experimenting hard and trying to dive into the full-featured documentation, but with no previous experience I have many questions.

The envisaged infrastructure is the following:

                                  +-----+
                                  | ELB |
                                  +--+--+
                                     |
                +--------------------|--------------------+
                |            Auto-Scaling Group           |
                |--------------------|--------------------|
                |                    |                    |
                |  +---------+       |       +---------+  |
                |  | varnish |<------+------>| varnish |  |
                |  +----+----+               +----+----+  |
                |       |                         |       |
                +-----------------------------------------+
                        |                         |
                        |                         |
                        |     +------------+      |
                        +---->|Internal ELB|<-----+
                              +------+-----+
                                     |
                +-----------------------------------------+
                |            Auto-Scaling Group           |
                |-----------------------------------------|
                |  +---------+       |       +---------+  |
                |  | Apache  |<------+------>| Apache  |  |
                |  +----+----+               +----+----+  |
                |       |                         |       |
                +-----------------------------------------+
                        |         +-----+         |
                        +-------->| RDS |<--------+
                                  +-----+

In words, I would have an Elastic Load Balancer sending traffic to the Varnish instances, which would in turn send the traffic to an internal Elastic Load Balancer, which would send it to the Apache frontends.

For now, I've discovered the AWS tools, like the CloudFormation service, which seems able to bootstrap instances given a template. That looks great, but it seems to be able to bootstrap only.

Having a little experience with Puppet (and given the recommendation of AWS on the subject), I dove into the Puppet Master thing, which is a great tool.

My idea, which may not be viable or realistic, is to create a "Puppet Node Stack" using CloudFormation templates, which would configure the instance as required and connect it to the Puppet master to be provisioned.
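For illustration only, a minimal sketch of what the UserData for such a node might look like, assuming an apt-based AMI and a hypothetical Puppet master hostname (both are assumptions, not part of the actual setup):

"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
  "#!/bin/bash -xe\n",
  "# install the puppet agent (assumes a Debian/Ubuntu AMI)\n",
  "apt-get update && apt-get install -y puppet\n",
  "# point the agent at the (hypothetical) puppet master and run it once\n",
  "echo 'server=puppetmaster.internal.example.com' >> /etc/puppet/puppet.conf\n",
  "puppet agent --enable\n",
  "puppet agent --test || true\n"
]]}}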

Once I have a stack ready, I'm wondering how to configure/create an Auto Scaling group for both the Varnish and the Apache instances.

It seems that CFN has resources to configure Auto Scaling groups & policies, so I guess I could create two different templates, one for each.

But would the Auto Scaling feature run through the CFN service, and then do all the init things (and execute the user-data)?

I also read here and there that Puppet can make use of the EC2 tags, so maybe a generic stack template with corresponding tags (like roles) could do the trick?
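Purely as an illustration of that idea, the Auto Scaling group could propagate a role tag to every instance it launches, which Puppet could then read (via an ENC or a custom fact) to decide which classes to apply; the "role" key and its value here are just placeholders:

"VarnishScalingGroup" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "Properties" : {
    ...
    "Tags" : [
      { "Key" : "role", "Value" : "varnish", "PropagateAtLaunch" : true }
    ]
  }
}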

Is this architecture realistic and viable? Do you have any feedback?

Thanks for your advice.

Auto Scaling creates new nodes based on the launch configuration, so you would have two separate Auto Scaling groups and two separate launch configurations, i.e.:

"VarnishScalingGroup" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "Properties" : {
    "LaunchConfigurationName" : {"Ref" : "VarnishLaunchConfiguration" },
    "LoadBalancerNames" : {"Ref" : "ELB"},
    ...
  }
},
"VarnishLaunchConfiguration" : {
  "Type" : "AWS::AutoScaling::LaunchConfiguration",
  "Properties" : {
    ...
    "UserData" : {
      ....
    },
    "MetaData" : {
      ...
    }
 },
"ApacheScalingGroup" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "Properties" : {
    "LaunchConfigurationName" : {"Ref" : "ApacheLaunchConfiguration" },
    "LoadBalancerNames" : {"Ref" : "InternalELB"},
    ...
  }
},
"ApacheLaunchConfiguration" : {
  "Type" : "AWS::AutoScaling::LaunchConfiguration",
  "Properties" : {
    ...
    "UserData" : {
      ....
    },
    "MetaData" : {
      ...
    }
 }

The other thing you'd want to add is a separate scaling policy for each scaling group, with appropriate CloudWatch alarms to trigger them.
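As a sketch of that pairing, a scale-out policy for the Varnish group driven by an average-CPU alarm might look like the following (the threshold, period and adjustment values are arbitrary examples, not recommendations):

"VarnishScaleUpPolicy" : {
  "Type" : "AWS::AutoScaling::ScalingPolicy",
  "Properties" : {
    "AutoScalingGroupName" : { "Ref" : "VarnishScalingGroup" },
    "AdjustmentType" : "ChangeInCapacity",
    "ScalingAdjustment" : 1,
    "Cooldown" : "300"
  }
},
"VarnishCPUHighAlarm" : {
  "Type" : "AWS::CloudWatch::Alarm",
  "Properties" : {
    "Namespace" : "AWS/EC2",
    "MetricName" : "CPUUtilization",
    "Statistic" : "Average",
    "Period" : 300,
    "EvaluationPeriods" : 2,
    "Threshold" : 70,
    "ComparisonOperator" : "GreaterThanThreshold",
    "Dimensions" : [
      { "Name" : "AutoScalingGroupName", "Value" : { "Ref" : "VarnishScalingGroup" } }
    ],
    "AlarmActions" : [ { "Ref" : "VarnishScaleUpPolicy" } ]
  }
}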

CloudFormation can also initiate updates to the stack. If, as part of the user data, you kick off cfn-hup, it will periodically (you decide how often) check for changes in the stack metadata and then execute whatever you prefer. I tend to kick off another run of cfn-init, which will parse and apply any updated metadata.
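To illustrate that pattern, cfn-hup can be set up through the launch configuration's AWS::CloudFormation::Init metadata, so that each instance polls the stack and re-runs cfn-init when the metadata changes (the resource name refers to the VarnishLaunchConfiguration example above; the 5-minute interval is arbitrary):

"Metadata" : {
  "AWS::CloudFormation::Init" : {
    "config" : {
      "files" : {
        "/etc/cfn/cfn-hup.conf" : {
          "content" : { "Fn::Join" : ["", [
            "[main]\n",
            "stack=", { "Ref" : "AWS::StackName" }, "\n",
            "region=", { "Ref" : "AWS::Region" }, "\n",
            "interval=5\n"
          ]]},
          "mode" : "000400", "owner" : "root", "group" : "root"
        },
        "/etc/cfn/hooks.d/cfn-auto-reloader.conf" : {
          "content" : { "Fn::Join" : ["", [
            "[cfn-auto-reloader-hook]\n",
            "triggers=post.update\n",
            "path=Resources.VarnishLaunchConfiguration.Metadata.AWS::CloudFormation::Init\n",
            "action=/opt/aws/bin/cfn-init -s ", { "Ref" : "AWS::StackName" },
            " -r VarnishLaunchConfiguration --region ", { "Ref" : "AWS::Region" }, "\n"
          ]]}
        }
      },
      "services" : {
        "sysvinit" : {
          "cfn-hup" : {
            "enabled" : "true",
            "ensureRunning" : "true",
            "files" : ["/etc/cfn/cfn-hup.conf", "/etc/cfn/hooks.d/cfn-auto-reloader.conf"]
          }
        }
      }
    }
  }
}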

Key point: if you go down the cfn-hup path, it will not execute the user data again, unless the CloudFormation update requires dropping and creating new instances.

One other point: if you want updates to the launch configuration to be rolled out to running instances, you need to ensure that the Auto Scaling group also has an UpdatePolicy applied to it.
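A minimal sketch of such a policy on the scaling group (the batch size and pause time below are just example values):

"VarnishScalingGroup" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "UpdatePolicy" : {
    "AutoScalingRollingUpdate" : {
      "MinInstancesInService" : "1",
      "MaxBatchSize" : "1",
      "PauseTime" : "PT5M"
    }
  },
  "Properties" : {
    ...
  }
}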

Instead of having a "Puppet Node Stack" you might want to consider pre-building your AMIs using a tool like Packer (http://www.packer.io/), which can provision a machine with Puppet and create an AMI. Then add the provisioned AMI to your CloudFormation template.
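A rough sketch of such a Packer template, assuming an Ubuntu base AMI and masterless Puppet manifests kept alongside it (the region, source AMI and file paths are placeholders):

{
  "builders" : [{
    "type" : "amazon-ebs",
    "region" : "eu-west-1",
    "source_ami" : "ami-xxxxxxxx",
    "instance_type" : "m1.small",
    "ssh_username" : "ubuntu",
    "ami_name" : "varnish-{{timestamp}}"
  }],
  "provisioners" : [
    {
      "type" : "shell",
      "inline" : ["sudo apt-get update", "sudo apt-get install -y puppet"]
    },
    {
      "type" : "puppet-masterless",
      "manifest_file" : "manifests/varnish.pp",
      "module_paths" : ["modules"]
    }
  ]
}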

As Peter H. says, CloudFormation can handle updates to your stack. So when you make changes to your Puppet setup, you can build a new AMI and update your launch configuration in CloudFormation. Auto Scaling will then start using the new AMI when launching new instances.
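One simple way to wire that in is to pass the AMI ID as a stack parameter, so rolling out a new Packer-built image is just a stack update with a new parameter value (the names below are placeholders):

"Parameters" : {
  "VarnishAmiId" : {
    "Type" : "String",
    "Description" : "AMI ID produced by the Packer build"
  }
},
...
"VarnishLaunchConfiguration" : {
  "Type" : "AWS::AutoScaling::LaunchConfiguration",
  "Properties" : {
    "ImageId" : { "Ref" : "VarnishAmiId" },
    ...
  }
}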

Taking Puppet out of CloudFormation gives you a separation of concerns between infrastructure and server configuration.

Scaling up will also happen faster with pre-built AMIs that already have your Apache/Varnish setup.

There are also advantages to a masterless Puppet setup, i.e. it is decentralized, the puppetmaster is not a single point of failure, etc. See https://serverfault.com/questions/408261/pros-and-cons-of-a-decentralized-puppet-architecture
