
Node.js stream write in loop

I was trying to benchmark some node.js native functionality lately and found some creepy results I cannot understand. Here's a simple piece of code that I benchmarked, along with the benchmark results:

http://pastebin.com/0eeBGSV9
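
In case the pastebin link goes stale, here is a minimal sketch of what that async version looks like; the iteration count, payload string, and port are placeholders of mine, not the exact pastebin code, and it assumes the async.js module:

    // Sketch of the async version (assumed details: count, payload, port).
    var http = require('http');
    var async = require('async');

    http.createServer(function (req, res) {
      var out = '';
      var i = 0;
      async.whilst(
        function () { return i < 100; },     // assumed iteration count
        function (next) {
          out += 'some chunk of data\n';     // assumed payload
          i++;
          next();
        },
        function () {
          res.end(out);                      // dump everything in one go
        }
      );
    }).listen(8080);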

You can see that it did a healthy 8553 requests per second over 100k requests at a concurrency of 200. I was then told by a friend that I shouldn't be using async in this case, as the loop isn't big enough to block node's event loop, so I refactored the code to use a for loop, and the benchmark result went even higher:

http://pastebin.com/0jgRPNEC
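
The same loop rewritten without async, again as a sketch with placeholder details rather than the exact pastebin code:

    // Sketch of the for-loop version: build the body in memory, write once.
    var http = require('http');

    http.createServer(function (req, res) {
      var out = '';
      for (var i = 0; i < 100; i++) {
        out += 'some chunk of data\n';       // assumed payload
      }
      res.end(out);                          // single write at the end
    }).listen(8080);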

Here we have 9174 requests per second. Neat. (Curiously enough, the for-loop version was consistently faster than the async version even when I changed the number of iterations to 10k.)

But then my friend wondered whether this result could be pushed even further by streaming the output instead of dumping all the data after the loop has finished. Once again, I refactored the code to use res.write to handle the data output:

http://pastebin.com/wM0x5nh9
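
Roughly, the change amounts to this (a sketch with the same placeholder details as above):

    // Sketch of the streaming variant: one res.write() per iteration.
    var http = require('http');

    http.createServer(function (req, res) {
      for (var i = 0; i < 100; i++) {
        res.write('some chunk of data\n');   // each call can hit the socket separately
      }
      res.end();
    }).listen(8080);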

aaaaand we have 2860 requests per second. What happened here? Why is stream writing so sluggish? Is there some kind of error in my code, or is this how node actually works with streams?

Node version is 0.10.25, on Ubuntu with the default settings from the apt installation.

In the beginning I also tested the same code against JXCore and HHVM (using the async.js version of the node code), with results here: http://pastebin.com/6tuYGhYG , and got the curious result of node cluster being faster than the latest jxcore 2.3.2.

Any critique would be greatly appreciated.

EDIT: @Mscdex, I was curious whether calling res.write() might have been the issue, so I changed the way I pushed out the data, to a new stream created for consumption by res. I naively believed that maybe this way node would somehow optimize the output buffering and stream the data effectively. While this solution worked too, it was even slower than before:

http://pastebin.com/erF6YKS5
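
One plausible reading of that edit, sketched with a streams2 Readable that is fed manually and then piped into res (placeholder details as before):

    // Sketch: push chunks into a Readable, then pipe it to the response.
    var http = require('http');
    var Readable = require('stream').Readable;

    http.createServer(function (req, res) {
      var rs = new Readable();
      rs._read = function () {};             // no-op; chunks are pushed below
      for (var i = 0; i < 100; i++) {
        rs.push('some chunk of data\n');     // assumed payload
      }
      rs.push(null);                         // signal end-of-stream
      rs.pipe(res);
    }).listen(8080);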

My guess would be the overhead involved with having many separate write() syscalls.

In node v0.12+, "corking" functionality has been added so that you can do res.write() as much as you want, but you can cork and uncork the stream so that all of those writes only result in a single write() syscall. This is essentially what you're doing now with the concatenation of the output, except the corking will do it for you. In some places in node core, this corking functionality may also be used automatically behind the scenes, so that you don't have to explicitly cork/uncork to get good performance.
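
A minimal sketch of that pattern on v0.12+, reusing the same placeholder loop as in the question:

    // Sketch: cork the response, queue many writes, flush them together.
    var http = require('http');

    http.createServer(function (req, res) {
      res.cork();                            // start buffering writes
      for (var i = 0; i < 100; i++) {
        res.write('some chunk of data\n');
      }
      process.nextTick(function () {
        res.uncork();                        // flush all buffered writes at once
        res.end();
      });
    }).listen(8080);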
