
I have a server listening on sockets, what's a good approach to service CPU-bound requests with multiple threads?

I've got an application, written in C++, that uses boost::asio. It listens for requests on a socket, and for each request performs some CPU-bound work (no disk or network I/O), then writes back a response.

This application will run on a multi-core system, so I plan to have (at least) one thread per core to process requests in parallel.

What's the best approach here? Things to think about:

  • I'll need a fixed-size thread pool (e.g. one thread per CPU)
  • If more requests arrive than I have threads, they'll need to be queued (maybe in the OS sockets layer?)

Currently the server is single-threaded:

  • It waits for a client request
  • Once it receives a request, it performs the work, writes the response back, then starts waiting for the next request

Update:

More specifically: what mechanism should I use to ensure that incoming requests get queued up if the server is busy? And what mechanism should I use to distribute incoming requests among the N threads (one per core)?

I don't see that there is much to consider that you haven't already covered.

If the work is truly CPU-bound, then adding threads beyond the number of cores doesn't help you much, unless you are going to have a lot of requests. In that case the listen queue may or may not meet your needs, and it might be better to have some threads accept the connections and queue them up yourself. Check the listen backlog values for your system and experiment a bit with the number of threads.

UPDATE:

listen() has a second parameter that is your requested OS/TCP queue depth. You can set it up to the OS limit; beyond that you need to adjust the system knobs. On my current system the limit is 128, so it is not huge but not trivial either. Check your system and consider whether you realistically need something larger than the default.

Beyond that there are several directions you can go. Consider KISS: no complexity before it is actually needed. Start off with something simple, like a single thread that accepts connections (up to some limit) and drops them into a queue. Worker threads pick them up, process, write the result, and close the socket.

At the current pace of my distro's Boost updates (and my lack of will to compile it myself), it will be 2012 before I get to play with ASIO, so I can't help with that part.

ACE: http://www.cs.wustl.edu/~schmidt/ACE/book1/

It has everything you need: thread management and queues, and, as an added bonus, a portable way of writing socket servers.

If you are using basic_socket_acceptor's overloaded constructor to bind and listen on a given endpoint, it uses SOMAXCONN as the backlog of pending connections in the call to listen(). I think (though I'm not very sure) that this maps to 250 on Windows. So the network service provider will (silently) accept client connections up to this limit and queue them for your application to process; your next accept call will pop a connection from this queue.
