
EC2 for handling demand spikes

I'm writing the backend for a mobile app that does some CPU-intensive work. We anticipate the app will not have heavy usage most of the time, but will have occasional spikes of high demand. I was thinking what we should do is reserve a couple of 24/7 servers to handle the steady state of low-demand traffic, and then add and remove EC2 instances as needed to handle the spikes. The mobile app will first hit a simple load-balancing server that does a simple round-robin distribution of users among all the available processing servers. The load balancer will also handle bringing new EC2 instances up and turning them back off as needed.
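As a rough sketch of the round-robin part (the backend addresses below are just placeholders, and a real balancer would also need health checks):

    from itertools import cycle

    # Placeholder addresses for the always-on processing servers.
    BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080"]
    _rotation = cycle(BACKENDS)

    def pick_backend():
        """Return the next processing server, round-robin."""
        return next(_rotation)

    # When spike instances are started or stopped, BACKENDS and _rotation
    # would be rebuilt to reflect the current set of servers.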

Some questions:

I've never written something like this before. Does this sound like a good strategy?

What's the best way to handle bringing new EC2 instances up and back down? I was thinking I could just create X instances ahead of time, set them up as needed (install software, etc.), and then stop each instance. The load balancer would then start and stop the instances as needed (e.g. through boto). I think this should be a lot faster and easier than trying to create new instances and install everything through a script or something. Good idea?
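As a rough sketch of what I have in mind, using boto3 (the current successor to the boto library) and placeholder instance IDs:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # IDs of the pre-created, pre-configured instances (placeholders).
    SPIKE_INSTANCES = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

    def scale_up(count):
        """Start up to `count` of the stopped spike instances."""
        ec2.start_instances(InstanceIds=SPIKE_INSTANCES[:count])

    def scale_down(count):
        """Stop `count` spike instances once demand drops."""
        ec2.stop_instances(InstanceIds=SPIKE_INSTANCES[:count])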

One thing I'm concerned about here is the cost of turning EC2 instances off and back on again. I looked at the AWS Usage Report and had difficulty interpreting it. I could see starting a stopped instance being a potentially costly operation. But it seems like, since I'm just starting a stopped instance rather than provisioning a new one from scratch, it shouldn't be too bad. Does that sound right?

This is a very reasonable strategy. I've used it successfully before.

You may want to look at Elastic Load Balancing (ELB) in combination with Auto Scaling. Conceptually, the two should solve this exact problem.

Back when I did this around 2010, ELB had some problems with certain types of HTTP requests that prevented us from using it. I understand those issues are resolved.

Since ELB was not an option, we manually launched instances from EBS snapshots as needed and manually added them to an nginx load balancer. That certainly could have been automated using the AWS APIs, but our peaks were so predictable (end of month) that we just tasked someone with spinning up the new instances and never got around to automating the task.

When an instance is stopped, I believe the only cost you pay is for the EBS storage backing the instance and its data. Unless your instances have a huge amount of data associated with them, the EBS storage charge should be minimal. Perhaps things have changed since I last used AWS, but I would be surprised if this has changed much, if at all.

First, with regard to costs: whether an instance is started from scratch or from a stopped state has no impact on cost. You are billed for the amount of compute you use over time, period.

Second, what you are looking to do is called autoscaling. What you do is set up a launch configuration that specifies the AMI you are going to use (along with any user-data configs you are using, the ELB and availability zones you are going to use, the minimum and maximum number of instances, etc.). You set up a scaling group using that launch config. Then you set up scaling policies to determine which scaling actions are going to be attached to the group. You then attach CloudWatch alarms to each of those policies to trigger the scaling actions.

You don't have servers in reserve that you attach to the ELB or anything like that. Everything is based on creating a single AMI that is used as the template for the servers you need.
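As a rough sketch of that setup with boto3, where the AMI ID, names, sizes, and thresholds are all placeholders rather than recommendations:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # 1. Launch configuration: the AMI template every new server is built from.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="app-launch-config",
        ImageId="ami-0123456789abcdef0",
        InstanceType="c5.large",
    )

    # 2. Scaling group tied to the launch config, the ELB, and the availability zones.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="app-asg",
        LaunchConfigurationName="app-launch-config",
        MinSize=2,                      # always-on baseline
        MaxSize=10,                     # cap for demand spikes
        AvailabilityZones=["us-east-1a", "us-east-1b"],
        LoadBalancerNames=["app-elb"],
    )

    # 3. Scaling policy: the action taken when an alarm fires.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="app-asg",
        PolicyName="scale-out-on-cpu",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=2,            # add two instances per trigger
    )

    # 4. CloudWatch alarm attached to the policy to trigger the scale-out.
    cloudwatch.put_metric_alarm(
        AlarmName="app-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "app-asg"}],
        AlarmActions=[policy["PolicyARN"]],
    )

    # A matching scale-in policy and low-CPU alarm would be set up the same way.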

You should read up on autoscaling at the link below:

http://aws.amazon.com/autoscaling/
