
node.js itself or nginx frontend for serving static files?

Is there any benchmark or comparison of which is faster: placing nginx in front of Node and letting it serve static files directly, or using just Node and serving static files with it?

The nginx solution seems more manageable to me; any thoughts?

I'll have to disagree with the answers here. While Node will do fine, nginx will most definitely be faster when configured correctly. nginx is implemented efficiently in C, following a similar pattern (returning to a connection only when needed) with a tiny memory footprint. Moreover, it supports the sendfile syscall to serve those files, which is as fast as you can possibly get at serving files, since it's the OS kernel itself that's doing the job.
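As a sketch of what "configured correctly" might look like (the server name and root path below are placeholders, not from the question), an nginx block that serves a static directory with sendfile enabled:

```nginx
# Illustrative nginx config: serve /static/ straight from disk via sendfile.
server {
    listen 80;
    server_name example.com;      # placeholder

    location /static/ {
        root /var/www/myapp;      # hypothetical project path
        sendfile on;              # kernel copies file -> socket directly
        tcp_nopush on;            # coalesce response headers with first file chunk
    }
}
```

With `sendfile on`, the file contents never pass through nginx's userspace buffers at all.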

By now nginx has become the de facto standard as the frontend server. You can use it for its performance in serving static files, gzip, SSL, and even load balancing later on.

PS: This assumes that the files are really "static", as in at rest on disk at the time of the request.

I did a quick `ab -n 10000 -c 100` run serving a static 1406-byte favicon.ico, comparing nginx, Express.js (static middleware), and clustered Express.js. Hope this helps:

[benchmark chart: nginx vs. Express.js vs. clustered Express.js]

Unfortunately I can't test 1000 or even 10000 concurrent requests, as nginx on my machine will start throwing errors.

EDIT: as suggested by artvolk, here are the results of cluster + `static` middleware (slower):

[benchmark chart: cluster + static middleware results]

I have a different interpretation of @gremo's charts. It looks to me like both node and nginx scale to the same number of requests (between 9-10k). Sure, the latency of nginx's responses is lower by a constant 20ms, but I don't think users will necessarily perceive that difference (if your app is built well). Given a fixed number of machines, it would take quite a significant amount of load before I would convert a node machine to nginx, considering that node is where most of the load will occur in the first place. The one counterpoint is if you are already dedicating a machine to nginx for load balancing. If that is the case, then you might as well have it serve your static content as well.

Either way, I'd set up nginx to cache the static files; you'll see a HUGE difference there. Then, whether you serve them from node or not, you're basically getting the same performance and the same load relief on your node app.

I personally don't like the idea of my nginx frontend serving static assets in most cases, because:

1) The project now has to be on the same machine, or has to be split into assets (on the nginx machine) and web app (on multiple machines, for scaling).

2) The nginx config now has to maintain path locations for the static assets, and be reloaded when they change.

FWIW, I did a test with a rather large file download (~60 MB) on an AWS EC2 t2.medium instance to compare the two approaches.

Download time was roughly the same (~15 s), and memory usage was negligible in both cases (<= 0.2%), but there was a huge difference in CPU load during the download:

  • Using Node + express.static(): 3.0 ~ 5.0% (single node process)
  • Using nginx: 0.3 ~ 0.7% (nginx process)

That's a tricky question to answer. If you wrote a really lightweight node server to just serve static files, it would most likely perform better than nginx, but it's not that simple. (Here's a "benchmark" comparing a Node.js file server and lighttpd, which is similar in performance to nginx when serving static files.)

Performance in serving static files often comes down to more than just the web server doing the work. If you want the highest performance possible, you'll be using a CDN to serve your files, to reduce latency for end users and to benefit from edge caching.

If you're not worried about that, node can serve static files just fine in most situations. Node lends itself to asynchronous code, which it also relies on, since it's single-threaded and any blocking I/O can block the whole process and degrade your application's performance. More than likely you're writing your code in a non-blocking fashion, but if you do anything synchronously you may cause blocking, which degrades how fast other clients can get their static files served. The easy solution is to not write blocking code, but sometimes that's not a possibility, or you can't always enforce it.

Use nginx to cache static files served by Node.js. The nginx server is deployed in front of the Node.js server(s) to perform:

SSL termination: terminate HTTPS traffic from clients, relieving your upstream web and application servers of the computational load of SSL/TLS encryption.

Load balancing: set up NGINX Open Source or NGINX Plus as a load balancer in front of two Node.js servers.

Content caching: caching responses from your Node.js app servers can both improve response time to clients and reduce load on the servers, because eligible responses are served immediately from the cache instead of being generated again on the server.
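All three roles can be sketched in one config (cache path, zone name, certificate paths, and upstream ports below are placeholders):

```nginx
# Illustrative: nginx in front of two Node.js servers, doing SSL
# termination, load balancing, and content caching. Names are placeholders.
proxy_cache_path /var/cache/nginx keys_zone=node_cache:10m;

upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/example.crt;   # placeholder
    ssl_certificate_key /etc/ssl/example.key;   # placeholder

    location / {
        proxy_cache node_cache;
        proxy_cache_valid 200 10m;    # keep successful responses for 10 minutes
        proxy_pass http://node_app;   # round-robin across the upstreams
    }
}
```

Eligible responses are then answered straight from `/var/cache/nginx` without touching Node at all.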

I am certain that pure node.js can outperform nginx in a lot of aspects.

All that said, I have to say nginx has a built-in cache, whereas node.js doesn't come with one factory-installed (YOU HAVE TO BUILD YOUR OWN FILE CACHE). A custom file cache can outperform nginx and any other server on the market, as it is super simple.

Also, nginx runs on multiple cores. To use the full potential of Node, you have to cluster your node servers. If you are interested in how, please PM me.

You need to dig deeper to achieve performance nirvana with node; that is the only problem. Once that's done, hell yeah... it beats nginx.
