Using pm2 Inside of an Auto-Scaling Environment
I am planning to use the AWS EC2 Container Service to host an auto-scaling group of Node.js + Express instances that expose a REST API. I've seen multiple articles telling me that I should be using pm2 over forever.js in order to ensure that my application restarts if it crashes, that I can have smooth application reloads, etc.
However, I'm a bit confused as to what configuration I should use with pm2 inside of the container. As these instances will be scaled automatically, should I still be running the process manager in "cluster mode"? I want to be sure that I am getting the most out of my instances, and I can't seem to find any definitive answers about whether clustering is necessary in an auto-scaling environment like this (just that pm2 comes with its own load balancer and scaling technique).
I would use systemd over pm2 in any case, as it's native on most Linux distros now, and it's effectively one less step (with pm2 you still need to make the pm2 daemon a service).
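For reference, making a Node app a service under systemd only takes a small unit file; the service name and paths below are illustrative, not from the original post:

```ini
# /etc/systemd/system/my-api.service (hypothetical name and paths)
[Unit]
Description=Node.js REST API
After=network.target

[Service]
ExecStart=/usr/bin/node /srv/my-api/server.js
Restart=always
RestartSec=5
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
```

Enabled with `systemctl enable --now my-api`; `Restart=always` provides the crash-restart behavior pm2 is usually reached for.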
As for running cluster, etc., I think that depends a great deal on what your Node app is doing. As such, I'd probably deploy containers that don't use it, scale at the container level rather than inside the container, and profile for a while. This keeps things inside each container as simple as possible and lets the ECS service manager do its job.
When most folks use the cluster module, they make one worker, or maybe two, per CPU core. Given that a container is sharing CPU cores with any other containers on the host, it seems like you're not getting much bang for the additional complexity.
We have the same situation with an AWS EC2 cluster. We created one load balancer and two servers with plenty of CPU and memory to manage all our applications. Every Node.js application has its own container and a minimum required amount of memory (for example, 1 GB).
Inside every container we run PM2 with a memory limit, so it restarts any process that exceeds it (to contain memory leaks), and we put no CPU or memory limit on the container itself. Every application has a minimum of 2 instances inside its container (4 instances in total across both servers).
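The setup described above can be sketched in a pm2 ecosystem file; the app name and script path below are illustrative:

```javascript
// ecosystem.config.js -- illustrative sketch of the setup described above
module.exports = {
  apps: [{
    name: 'my-api',            // hypothetical app name
    script: './server.js',     // hypothetical entry point
    instances: 2,              // minimum 2 instances per container
    exec_mode: 'cluster',      // pm2 load-balances across the instances
    max_memory_restart: '1G',  // restart any process that exceeds 1 GB
  }],
};
```

Started with `pm2 start ecosystem.config.js`; `max_memory_restart` is the pm2 option that gives the leak-containment restarts described above.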
I also wrote a small PM2 plugin that automatically scales applications inside containers depending on the load; it helps us scale an application up to MAX CPUs - 1. You can try it at https://www.npmjs.com/package/pm2-autoscale and share feedback.
We also have autoscaling configured inside the AWS cluster in case the cluster does not have enough capacity.