
What is the optimal way to run a Node API in Docker on Amazon ECS?

With the advent of Docker and scheduling & orchestration services like Amazon's ECS, I'm trying to determine the optimal way to deploy my Node API. Docker and ECS aside, I've wanted to take advantage of the Node cluster library to gracefully handle the app crashing in the event of an asynchronous error, as suggested in the documentation, by creating a master process and multiple worker processes.

One of the benefits of the cluster approach, besides gracefully handling errors, is creating a worker process for each available CPU. But does this make sense in the Docker world? Would it make sense to have multiple Node processes running in a single Docker container that is going to be scaled into a cluster of EC2 instances on ECS?

Without the Node cluster approach, I'd lose the ability to gracefully handle errors, so I think that at a minimum I should run a master and one worker process per Docker container. I'm still confused about how many CPUs to define in the Task Definition for ECS. The ECS documentation says something about each container instance having 1024 units per CPU, but that isn't the same thing as EC2 compute units, is it? And with that said, I'd need to pick EC2 instance types with the appropriate number of vCPUs to achieve this, right?

I understand that achieving the most optimal configuration may require some benchmarking of my specific Node API application, but it would be awesome to have a better idea of where to start. Maybe there is some studying/research I need to do? Any pointers or recommendations to guide me on the path would be most appreciated!

Edit: To recap my specific questions:

  1. Does it make sense to run a master/worker cluster as described here inside a Docker container to achieve graceful crashing?

  2. Would it make sense to use nearly identical code as described in the Cluster docs to 'scale' to the available CPUs via require('os').cpus().length?

  3. What does Amazon mean in the documentation for ECS Task Definitions where it says, for the cpu setting, that a container instance has 1024 units per CPU? And what would be a good starting point for this setting?

  4. What would be a good starting point for the instance type to use for an ECS cluster aimed at serving a Node API, based on the above? And how do the available vCPUs affect the previous questions?

All these technologies are new and best practices are still being established, so consider these tips to be from my experience only.

One-process-per-container is more of a suggestion than a hard-and-fast rule. It's fine to run multiple processes in a container when you have a use for it, especially in this case where a master process forks workers. Just use a single container and allow it to fork one process per core, as you've suggested in the question.

On EC2, instance types have a number of vCPUs, each of which appears as a core to the OS. For the ECS cluster, use an EC2 instance type such as the c3.xlarge with four vCPUs. In ECS this translates to 4096 CPU units. If you want the app to make use of all 4 vCPUs, create a task definition that requires 4096 cpu units.
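The conversion is a flat 1024 units per vCPU, so the arithmetic behind those numbers can be sketched in a one-liner (the helper name is mine, not an ECS API):

```javascript
// ECS expresses CPU capacity as 1024 "cpu units" per vCPU.
const UNITS_PER_VCPU = 1024;

const cpuUnits = (vcpus) => vcpus * UNITS_PER_VCPU;

console.log(cpuUnits(4)); // c3.xlarge's 4 vCPUs -> 4096 units
console.log(cpuUnits(1)); // reserving one full vCPU -> 1024 units
```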

But if you're doing all this only to stop the app from crashing, you could also just use a restart policy to restart the container if it crashes. It appears that restart policies are not yet supported by ECS, though.
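For reference, the plain-Docker form of that restart policy looks like this (the image name is hypothetical, and as noted, ECS task definitions did not yet expose an equivalent):

```shell
# Restart the container automatically on a non-zero exit,
# giving up after 5 consecutive failed restarts.
docker run -d --restart=on-failure:5 my-node-api
```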

That seems like a really good pattern. It's similar to what is done with Erlang/OTP, and I don't think anyone would dispute that it's one of the most robust systems on the planet. Now the question is how to implement it.

I would leverage patterns from Heroku or other similar PaaS systems that have a bit more maturity. I'm not saying that Amazon is the wrong place to do this, but simply that a lot of work has been done with this in other areas that you can translate. For instance, this article has a recipe in it: https://devcenter.heroku.com/articles/node-cluster

As far as the relationship between vCPUs and compute units goes, it looks like it's just a straight ratio of 1/1024. It's a move toward micro-charges based on CPU utilization. They are taking this even farther with the Lambda work, where they charge you based on the fractions of a second you utilize.

In the Docker world you would run one Node.js process per Docker container, but you would run many such containers on each of your EC2 instances. If you use something like fig you can use fig scale <n> to run many redundant containers on an instance. This way you don't have to define your Node.js count ahead of time, and each of your Node.js processes is isolated from the others.
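A sketch of what that fig setup might look like (service and image names are illustrative):

```yaml
# fig.yml -- one Node process per container, scaled with `fig scale`
api:
  image: my-node-api
  ports:
    - "3000"   # container port only; Docker assigns a free host port per copy
```

`fig scale api=4` would then run four redundant copies on the instance; leaving the host port unbound avoids port conflicts between the copies.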

