
boost::asio multi-threading problem

I've got a server that uses boost::asio which I wish to make multi-threaded.

The server can be broken down into several "areas": a socket starts in a connect area, then once connected to a client it is moved to an authentication area (i.e. login or register), and afterwards it moves between various other parts of the server depending on what the client is doing.

I don't particularly want to just use a thread pool on a single io_service for all the sockets, since a large number of locks would be required, especially in areas with a lot of interaction with shared resources. Instead, I want to give each server component (say, authentication) its own thread.

However, I'm not sure how to do this. I considered giving each component its own io_service, so it could use whatever threads it wanted, but sockets are tied to an io_service, and I'm not sure how to then move a client's socket from one component to another.

You can solve this with asio::io_service::strand. Create a thread pool for the io_service as usual. Once you've established a connection with a client, wrap all of its async calls in an io_service::strand, one strand per client. This essentially guarantees that, from the client's point of view, everything is single-threaded.
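A minimal sketch of that pattern, using the older io_service / io_service::strand API the question implies (the connection class, handler names, and buffer size below are just placeholders):

    #include <boost/asio.hpp>
    #include <boost/bind.hpp>
    #include <boost/enable_shared_from_this.hpp>
    #include <boost/shared_ptr.hpp>

    namespace asio = boost::asio;

    // One object per client; every async handler is wrapped by the client's
    // own strand, so handlers for this client never run concurrently even
    // though several threads are calling io_service::run().
    class connection : public boost::enable_shared_from_this<connection> {
    public:
        explicit connection(asio::io_service& io) : socket_(io), strand_(io) {}

        asio::ip::tcp::socket& socket() { return socket_; }

        void start() {
            socket_.async_read_some(
                asio::buffer(buffer_),
                strand_.wrap(boost::bind(&connection::on_read, shared_from_this(),
                                         asio::placeholders::error,
                                         asio::placeholders::bytes_transferred)));
        }

    private:
        void on_read(const boost::system::error_code& ec, std::size_t /*n*/) {
            if (ec) return;
            // ... process this client's data; no locks needed, the strand
            // serialises all of this connection's handlers ...
            start();   // queue the next read
        }

        asio::ip::tcp::socket socket_;
        asio::io_service::strand strand_;
        char buffer_[4096];
    };

The thread pool itself is just several threads all calling run() on the same io_service; the strand takes care of per-client serialisation.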

First, I'd advocate considering a multi-process approach instead; it is an architecture that is very straightforward, easy to reason about and debug, and easy to scale.

A server design that scales horizontally - several instances of the server, where state within each instance does not need to be shared between servers (shared state can instead live in a common store: SQL, Voldemort (persistent), Redis (sets and lists - very cool, I'm really excited about a persistent version), memcached (unreliable), or such) - is more easily scalable.

You could, for example, have a single listener thread that balances connections between several server processes, using UNIX sendmsg() to transfer the descriptor. This architecture would be straightforward to migrate to multiple machines with hardware load balancers later.
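A rough sketch of that descriptor hand-off over a UNIX-domain socket, using sendmsg() with SCM_RIGHTS (error handling trimmed; send_fd is just an illustrative name):

    #include <sys/socket.h>
    #include <cstring>

    // Pass an accepted connection's descriptor `fd` to a worker process over
    // the UNIX-domain socket `channel` (e.g. one end of a socketpair()).
    bool send_fd(int channel, int fd) {
        char payload = 'F';                       // must send at least one byte
        iovec iov = { &payload, sizeof(payload) };

        char control[CMSG_SPACE(sizeof(int))] = {};
        msghdr msg = {};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = control;
        msg.msg_controllen = sizeof(control);

        cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;             // ancillary data carries descriptors
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        std::memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(channel, &msg, 0) == sizeof(payload);
    }

On the receiving side, recvmsg() extracts the descriptor, which the worker process can then hand to asio via the socket's assign() member.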

The "area" idea in the question is intriguing. It could be that, rather than locking, you do it all with message queues. The reasoning is that disk I/O - even with SSDs and such - and the network are the real bottlenecks, so you don't need to be quite as careful with the CPU; the latency of messages passing between threads is not a big deal, and depending on your operating system the threads (or processes) could be scheduled onto different cores in an SMP setup.
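One way to sketch that, assuming each area runs a single thread (the area class and its members here are only illustrative): give every area its own io_service and treat posting handlers to it as the message queue.

    #include <boost/asio.hpp>
    #include <thread>

    namespace asio = boost::asio;

    // Each "area" (auth, lobby, game, ...) owns one io_service and one thread
    // running it; the io_service's handler queue acts as the area's message
    // queue. Other areas hand work over with post(), so no explicit locking
    // is needed inside an area.
    class area {
    public:
        area() : work_(io_), thread_([this] { io_.run(); }) {}
        ~area() { io_.stop(); thread_.join(); }

        template <typename Handler>
        void post(Handler h) { io_.post(h); }

        asio::io_service& io() { return io_; }

    private:
        asio::io_service io_;
        asio::io_service::work work_;   // keeps run() alive while the queue is empty
        std::thread thread_;
    };

    // e.g. auth.post([msg] { /* validate the login message, then post a
    //      reply or a hand-off message to the next area */ });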

But ultimately, once you reach saturation, to scale up the area idea you need faster cores and not more of them. Here's an interesting monologue from one of our hosts about that.

