
Which request would Node.js serve first if it receives n requests at the same time?

I am working with Node.js. My understanding is that if Node.js receives many requests, it processes them one after another in a queue. But if it receives n requests, say 4 requests that reach Node.js at the same time with no gap between them, which one will Node.js pick first to serve? What is the criterion, and the reason, for selecting one request out of many arriving at the same time?

Since all four requests arrive over the same physical internet connection, one request's packets will get there before the others'. As the packets converge on the last router before your server, one of them will be processed by that router slightly before the others, and that packet will arrive at your server first. That packet then reaches the TCP stack in the OS first, which notifies node.js about it first. Node.js will start processing that first request. Since the main thread in node.js is single-threaded, if the request handler doesn't call anything asynchronous, it will send a response for the first request before it even starts processing the second request.
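A minimal sketch of that serialization, using hypothetical handler functions rather than a real HTTP server: with a purely synchronous handler, the response for request 1 is produced before request 2 is even looked at.

```javascript
// Sketch (hypothetical handlers, not a real HTTP server): one thread
// running two purely synchronous request handlers back to back.
const log = [];

function handleRequestSync(id) {
  log.push(`start ${id}`);
  // ...synchronous work only, no async calls...
  log.push(`respond ${id}`);
}

// The event loop hands the handlers one request at a time, in arrival order.
handleRequestSync(1);
handleRequestSync(2);

console.log(log.join(', '));
// start 1, respond 1, start 2, respond 2
```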

If the first request's handler has non-blocking, asynchronous portions, then as soon as it makes an asynchronous call and returns control to the node.js event loop, the second request gets to start processing.
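To sketch that hand-off (again with hypothetical handlers; the `await` here stands in for any non-blocking I/O, such as a database call): request 2 starts before request 1's response is produced.

```javascript
// Sketch (hypothetical handlers): each handler yields to the event loop
// at its await, so the second "request" starts before the first responds.
const order = [];

async function handleRequestAsync(id) {
  order.push(`start ${id}`);
  // Simulate non-blocking I/O: control returns to the event loop here.
  await new Promise(resolve => setImmediate(resolve));
  order.push(`respond ${id}`);
}

handleRequestAsync(1);
handleRequestAsync(2);

// Synchronously, both requests have started but neither has responded yet:
console.log(order.join(', ')); // start 1, start 2

setImmediate(() => console.log(order.join(', ')));
// start 1, start 2, respond 1, respond 2
```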

But if it receives n requests, say 4 requests that reach Node.js at the same time with no gap between them, which one will Node.js pick first to serve?

This is not possible. As the packets from each request converge on the last router before your server, they get sequenced one after the other on the Ethernet link connected to your server. The Ethernet link doesn't send 4 requests in parallel; it sends packets one after the other.

So, your server will see one of the incoming packets before the others. Also, keep in mind that an incoming HTTP request is not just a single packet. It consists of establishing a TCP connection (with all the back and forth that entails), after which the client sends the actual HTTP request over the TCP connection that has been established. If you're using https, there is even more involved in establishing the connection. So the whole notion of four incoming connections arriving at exactly the same moment is not possible. Even if it were (imagine you had four network cards with four physical connections to the internet), the underlying operating system would end up servicing one of the incoming network cards before the others. Whether it's a hardware interrupt at the lowest level or a polling loop, one of the network cards will be found to have incoming data before the others.

What is the criterion, and the reason, for selecting one request out of many arriving at the same time?

It doesn't work that way. The OS doesn't suddenly realize it has four requests that arrived at exactly the same moment and then run some algorithm to choose which request to serve first. Instead, some low-level hardware/software element (probably in an upstream router) will already have forced the incoming packets into an order, either based on minute timing differences or just based on how its software works (it checks hardware portA, then hardware portB, then hardware portC, for example), and one packet will physically arrive before the others at your server. This is not something your server gets to decide.
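A toy sketch of that port-checking idea (the port names are entirely hypothetical; real NICs and drivers are far more involved): whichever port the loop happens to check first "wins", even if data on every port arrived at the "same" instant.

```javascript
// Hypothetical polling loop: data is waiting on two of three ports
// at the "same" moment.
const pending = { portA: null, portB: 'request-B', portC: 'request-C' };
const serviced = [];

// The loop's fixed check order, not the requests themselves, decides
// which request gets picked up first.
for (const port of ['portA', 'portB', 'portC']) {
  if (pending[port]) {
    serviced.push(pending[port]);
    pending[port] = null;
  }
}

console.log(serviced.join(', ')); // request-B, request-C
```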

