
Efficient preforked server design with non-blocking I/O (epoll/kqueue or libevent)

I am planning on writing a 'comet' server for 'streaming' data to clients. I have enhanced one in the past to take advantage of multi-core CPUs, but now I'm starting from scratch. I plan to use epoll/kqueue or libevent to power the server.

One of the issues I have been weighing is which server design to use. I have several options available, since I plan to use a multi-process model to take advantage of all the CPU cores.

  1. Pre-forked multi-process - each process doing its own accept()
  2. Pre-forked multi-process with master - the master process accepts and then uses descriptor passing to hand the accepted socket to a worker process
  3. Pre-forked multi-process with different ports - each process listens on a different port on the same system. A load balancer decides which process gets the next connection based on load feedback from the individual daemon processes

Design #2 is the most complicated. Design #3 is simple but involves additional hardware, which I will need irrespective of the design, since I'll be running this on several machines and would require a load balancer anyway. Design #1 has the thundering herd issue, but I guess thundering herd isn't a big deal with 8 processes; it only becomes a big deal when clients constantly connect and disconnect (which should be rare, since this is a comet server).

As I see it, #2 is complicated and requires two additional system calls per accept for descriptor passing between the master and slave processes. Is that overhead better than the thundering herd problem? If 8 processes wake up and execute accept(), am I potentially going to see 8 accept() calls in case I go with design #1?

What are the pros and cons of my design choices? What would you recommend?

If these were threads rather than processes, I'd go for option 2. For processes, though, that looks expensive to me, so the choice is between 1 and 3.

I'd prefer 1, if it is possible to somehow estimate the expected load. Can you set an upper limit on the size of the sleeping herd, that is, on the number of preforked processes? How fast do you need to be able to accept a new connection?

But if you're going to go the Tom Dunson way and drive the big herd fast over the Red River down to Kansas, you should probably choose the 3rd option - since the resources are available anyway.

If you aim to build a very large-scale, high-throughput HTTP daemon, none of #1, #2, or #3 is appropriate. You'd be better off with a 1-to-m or m-to-n model with multi-threading if you want scalability, the way nginx and lighttpd do.

In fact, if you expect the program to handle fewer than a hundred connections per second, then #1, #2, and #3 may not make any visible difference.

However, I would go for #2 in case you may scale your program up in the future by switching from processes to threads, since it can be integrated easily into a 1-to-m or m-to-n processing model.
