
The Node.js event loop - nginx/apache

Both nginx and Node.js have event loops to handle requests. I put nginx in front of Node.js, as has been recommended here:

Using Node.js only vs. using Node.js with Apache/Nginx

with the setup shown here:

Node.js + Nginx - What now?

  1. How do the two event loops play together? Is there any risk of conflicts between the two? I wonder because Nginx may not be able to handle as many events per second as Node.js, or vice versa. For example, if Nginx can handle 1000 events per second but node.js only 500, won't that cause issues? (I have no idea if 1000 and 500 are reasonable orders of magnitude; you could correct me on that.)

  2. What about putting Apache in front of Node.js? Apache has no event loop, just threads. So won't putting Apache in front of Node.js defeat the purpose?

  3. In this 2010 talk, Node.js creator Ryan Dahl described his vision of getting rid of nginx/apache/whatever entirely and having Node talk directly to the internet. When do you think this will become reality?

  1. Both nginx and Node use an asynchronous and event-driven approach. The communication between them will go more or less like this:

    • nginx receives a request
    • nginx forwards the request to the Node process and immediately goes back to wait for more requests
    • Node receives the request from nginx
    • Node handles the request with minimal CPU usage, until at some point it needs to issue one or more I/O requests (read from a database, write the response, etc). At this point it launches all these I/O requests and goes back to wait for more requests (a minimal sketch of this flow follows the list).
    • The above can repeat lots of times. You could have hundreds of thousands of requests all in a non-blocking wait state, where nginx is waiting for Node and Node is waiting for I/O. And while this happens, both nginx and Node are ready to accept even more requests!
    • Eventually async I/O started by the Node process will complete and a callback function will get invoked.
    • If there are still I/O requests that haven't completed for this request, then Node goes back to its loop one more time. It can also happen that once an I/O operation completes, this data is consumed by the Node callback and then new I/O needs to happen, so Node can start more async I/O requests before going back to the loop.
    • Eventually all I/O operations started by Node for a particular request will be complete, including those that write the response back to nginx. So Node ends this request, and then as always goes back to its loop.
    • nginx receives an event indicating that response data has arrived for a request, so it takes that data and writes it back to the client, once again in a non-blocking fashion. When the response has been written to the client, an event will trigger and nginx will then end the request.
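
    To make the flow above concrete, here is a minimal, hypothetical sketch of the Node side: a plain HTTP server that does a little CPU work, hands off to async I/O, and only writes the response once the callback fires. The port and the file being read are illustrative assumptions; in the setups linked above, nginx would proxy requests to this local address.

        const http = require('http');
        const fs = require('fs');

        // Assumption: nginx proxies requests to this local address and port.
        const server = http.createServer((req, res) => {
          // A little CPU work, then hand off to async I/O and return to the event loop.
          fs.readFile('/etc/hostname', 'utf8', (err, data) => {
            // This callback runs later, once the I/O has completed.
            if (err) {
              res.writeHead(500);
              return res.end('error\n');
            }
            res.writeHead(200, { 'Content-Type': 'text/plain' });
            res.end('served by ' + data); // writing the response is also non-blocking
          });
        });

        server.listen(3000, '127.0.0.1');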

    You are asking about what would happen if nginx and Node can handle a different maximum number of connections. They really don't have a maximum; the maximum in general comes from operating system configuration, for example from the maximum number of open handles the system can have at a time, or from CPU throughput. So your question does not really apply. If the system is configured correctly and all processes are I/O bound, neither nginx nor Node will ever block.

  2. Putting Apache in front of Node will only work well if you can guarantee that Apache never blocks (i.e. it never reaches its maximum connection limit). This is hard or impossible to achieve for a large number of connections, because Apache uses an individual process or thread for each connection. nginx and Node scale really well; Apache does not.

  3. Running Node without another server in front of it works fine, and it should be okay for small and medium load sites. The reason putting a web server in front of it is preferred is that web servers like nginx come with features that Node does not have and that you would need to implement yourself: things like caching, load balancing, running multiple apps from the same server, etc.
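
    To illustrate the kind of work that falls on the app when there is no web server in front, here is a hypothetical sketch of hand-rolled static file serving with a basic caching header, something nginx gives you out of the box. The directory, port, and cache lifetime are illustrative assumptions, and this is deliberately minimal rather than a hardened implementation.

        const http = require('http');
        const fs = require('fs');
        const path = require('path');

        // Assumption: static assets live in ./public next to this script.
        const root = path.join(__dirname, 'public');

        http.createServer((req, res) => {
          // Strip the query string and resolve the path inside the root directory.
          const urlPath = req.url.split('?')[0];
          const file = path.join(root, path.normalize(urlPath));
          if (!file.startsWith(root)) { // crude guard against path traversal
            res.writeHead(403);
            return res.end('Forbidden\n');
          }
          fs.readFile(file, (err, data) => {
            if (err) {
              res.writeHead(404);
              return res.end('Not found\n');
            }
            // nginx would normally add caching headers like this for you.
            res.writeHead(200, { 'Cache-Control': 'public, max-age=3600' });
            res.end(data);
          });
        }).listen(3000, '127.0.0.1');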

I think your questions have been largely covered by some of the other answers, but there are a few pieces missing, and some that I disagree with, so here are mine:

  1. The event loops are isolated from each other at the process level, but they do interact. The issues you're most likely to encounter are around the configuration of nginx response buffers, chunked data, etc., but this is optimisation rather than error resolution.
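
    As a concrete example of where buffering configuration matters, here is a hypothetical sketch of a Node handler that streams a chunked response; with nginx's default proxy buffering, chunks like these may be held back rather than forwarded as they are written. The port and timing are illustrative assumptions.

        const http = require('http');

        // A streaming (chunked) response: the kind of handler where nginx's
        // proxy buffering settings start to matter. Node uses chunked transfer
        // encoding automatically when no Content-Length is set.
        http.createServer((req, res) => {
          res.writeHead(200, { 'Content-Type': 'text/plain' });
          let n = 0;
          const timer = setInterval(() => {
            res.write('chunk ' + n + '\n'); // each write goes out as its own chunk
            n += 1;
            if (n === 5) {
              clearInterval(timer);
              res.end('done\n');
            }
          }, 1000);
        }).listen(3000, '127.0.0.1');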

  2. As you point out, if you use Apache you're nullifying the benefit of using Node.js, i.e. massive concurrency and websockets. I wouldn't recommend doing that.

  3. People are already using Node.js at the front of their stack. Searching for benchmarks returns some reasonable-looking results in Node's favour, so performance to my mind isn't an issue. However, there are still reasons to put Nginx in front of Node:

    1. Security - Node has been given increasing scrutiny, but it's still young. You may not have problems here, but caution is often your friend.

    2. Training - Ops staff that you hire will know how to manage Nginx, but the configuration and management of your custom Node app will only ever be understood by those people your developers successfully communicate it to. In some companies this is nobody.

    3. Operational Flexibility - If you reach scale you might want to split out the serving of static content, purely to reduce the load on your app servers. You might want to split content amongst different domains and have it managed separately, or have different SSL or proxying behaviour for different domains or URL patterns. These are the things that are easy for Ops guys to configure in Nginx, but that you'd have to code manually in a Node app (see the sketch below).
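
    For a sense of what coding it manually looks like, here is a hypothetical sketch of routing by Host header and URL prefix inside a Node server, the sort of split nginx expresses in a few server and location blocks. The host name, path prefix, and port are illustrative assumptions.

        const http = require('http');

        // Routing by Host header and URL prefix by hand, instead of via nginx config.
        http.createServer((req, res) => {
          const host = (req.headers.host || '').split(':')[0];
          if (host === 'static.example.com' || req.url.startsWith('/assets/')) {
            // In nginx this would be a separate server/location block,
            // possibly pointing at a different backend or a directory on disk.
            res.writeHead(200, { 'Content-Type': 'text/plain' });
            return res.end('would serve static content here\n');
          }
          res.writeHead(200, { 'Content-Type': 'text/plain' });
          res.end('would hand the request to the application here\n');
        }).listen(8080, '127.0.0.1');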

  1. The event loops are independent. Event loops are implemented at the application level, so neither cares what sort of architecture the other uses.

  2. NodeJS is good at many things, but there are some places where it still falters. One example is serving static files. At the moment, nodejs performs fairly poorly in this area, so having a dedicated web server for your static files greatly improves response time. Also, nodejs is still in its infancy and has not been "tested and hardened" in matters of security the way Apache or nginx have.

  3. It'll take a long time before people consider fronting nodejs all by itself. The cluster module is a step in the right direction, but it'll take a long time even after it reaches v1 before that happens.
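
    For reference, here is a minimal, hypothetical sketch of the cluster module mentioned above: one worker per CPU core, all sharing the same listening port. The port number is an illustrative assumption.

        const cluster = require('cluster');
        const http = require('http');
        const os = require('os');

        if (cluster.isMaster) {
          // Fork one worker per CPU core; they all share the same listening socket.
          for (let i = 0; i < os.cpus().length; i++) {
            cluster.fork();
          }
          cluster.on('exit', (worker) => {
            console.log('worker ' + worker.process.pid + ' exited, starting a new one');
            cluster.fork(); // simple self-healing
          });
        } else {
          http.createServer((req, res) => {
            res.end('handled by worker ' + process.pid + '\n');
          }).listen(3000);
        }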

  1. The two event loops are unrelated. They don't play together.
  2. Yes, it is pretty useless. Apache is not a load balancer.
  3. What Ryan Dahl said may already be applicable. The limit on concurrent users is definitely higher than that of Apache. Before node.js, websites with a fair number of concurrent users had to use nginx to balance the load. For small to medium sized businesses it can be done with node.js alone. But ruling out nginx completely will take time. Let node.js become stable before it can follow this ambitious dream.
