
Problems implementing a multi-threaded UDP server (threadpool?)

I am writing an audio streamer (client-server) as a project of mine (C/C++), and I decided to build a multi-threaded UDP server for it.

The logic behind this is that each client will be handled in its own thread. The problem I'm having is that the threads interfere with one another.

The first thing my server does is create a sort of thread pool: it spawns 5 threads, all of which immediately block in recvfrom(). However, most of the time when I connect another device to the server, more than one thread responds, and eventually that causes the server to block entirely and stop operating.

This is also pretty difficult to debug, so I'm writing here to get some advice on how multi-threaded UDP servers are usually implemented.

Should I use a mutex or a semaphore somewhere in the code? If so, where?

Any ideas would be extremely helpful.

Take a step back: you say

each client will be handled in its own thread

but UDP isn't connection-oriented. If all clients use the same multicast address, there is no natural way to decide which thread should handle a given packet.


If you're wedded to the idea that each client gets its own thread (which I would generally counsel against, but it may make sense here), you need some way to figure out which client each packet came from.

That means either

  • using TCP (since you seem to be trying for connection-oriented behaviour anyway)
  • reading each packet, figuring out which logical client connection it belongs to, and sending it to the right thread. Note that since the routing information is global/shared state, these two are equivalent:

    1. keep a source IP -> thread mapping, protected by a mutex, accessed from all threads
    2. do all the reads in a single thread, use a local source IP -> thread mapping

    The first seems to be what you're angling for, but it's poor design. When a packet comes in, you'll wake up one thread, which then locks the mutex, does the lookup, and potentially wakes another thread. The thread you want to handle this connection may itself be blocked reading, so you need some mechanism to wake it.

    The second at least gives a separation of concerns (read/dispatch vs. processing); a minimal sketch of that approach follows this list.
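Assuming POSIX sockets and C++11 threads, the sketch below uses illustrative names (ClientQueue, client_worker(), read_and_dispatch()) that are not from the original code; error handling and cleanup are omitted:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <condition_variable>
    #include <deque>
    #include <map>
    #include <mutex>
    #include <string>
    #include <thread>
    #include <vector>

    using Packet = std::vector<char>;

    // Minimal per-client queue: the read thread pushes, that client's worker pops.
    struct ClientQueue {
        std::mutex m;
        std::condition_variable cv;
        std::deque<Packet> q;
        void push(Packet p) {
            { std::lock_guard<std::mutex> lk(m); q.push_back(std::move(p)); }
            cv.notify_one();
        }
        Packet pop() {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !q.empty(); });
            Packet p = std::move(q.front());
            q.pop_front();
            return p;
        }
    };

    // Hypothetical per-client worker: consumes packets for a single client.
    void client_worker(ClientQueue* q) {
        for (;;) {
            Packet p = q->pop();
            // decode / play / answer the audio data here
            (void)p;
        }
    }

    // Option 2: all reads happen in this one thread, which owns the mapping,
    // so the map itself needs no mutex.
    void read_and_dispatch(int sock) {
        std::map<std::string, ClientQueue*> clients;   // "ip:port" -> that client's queue
        char buf[1500];
        for (;;) {
            sockaddr_in from{};
            socklen_t fromlen = sizeof(from);
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                 reinterpret_cast<sockaddr*>(&from), &fromlen);
            if (n <= 0) continue;

            char ip[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &from.sin_addr, ip, sizeof(ip));
            std::string key = std::string(ip) + ":" + std::to_string(ntohs(from.sin_port));

            auto it = clients.find(key);
            if (it == clients.end()) {                 // first packet from this client
                it = clients.emplace(key, new ClientQueue).first;   // leaked for brevity
                std::thread(client_worker, it->second).detach();
            }
            it->second->push(Packet(buf, buf + n));    // hand off to the right thread
        }
    }

Because only the read thread touches the map, the routing state needs no mutex; each per-client queue is the only shared structure.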


Sensibly, your design should depend on

  • number of clients
  • I/O load
  • amount of non-I/O processing (or IO:CPU ratio, or ...)

The first thing my server does is create a sort of thread pool: it spawns 5 threads, all of which immediately block in recvfrom(). However, most of the time when I connect another device to the server, more than one thread responds, and eventually that causes the server to block entirely and stop operating.

Rather than having all your threads sit in recvfrom() on the same socket, protect the socket with a semaphore and have your worker threads wait on the semaphore. When a thread acquires the semaphore, it can call recvfrom(); when that returns with a packet, the thread can release the semaphore (for another thread to acquire) and handle the packet itself. When it's done servicing the packet, it returns to waiting on the semaphore. This way you avoid having to transfer data between threads.
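A rough sketch of that pattern, assuming POSIX sockets, POSIX semaphores, and C++11 threads; handle_packet() is a stand-in for whatever per-packet work the server actually does:

    #include <netinet/in.h>
    #include <semaphore.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <thread>

    static sem_t recv_sem;     // guards the right to call recvfrom() on the shared socket
    static int   server_sock;  // bound UDP socket, set up elsewhere

    // Stand-in for the real per-packet work (decode audio, reply, ...).
    static void handle_packet(const char* /*data*/, ssize_t /*len*/,
                              const sockaddr_in& /*from*/) {}

    static void worker() {
        char buf[1500];
        for (;;) {
            sem_wait(&recv_sem);                    // only one thread reads at a time
            sockaddr_in from{};
            socklen_t fromlen = sizeof(from);
            ssize_t n = recvfrom(server_sock, buf, sizeof(buf), 0,
                                 reinterpret_cast<sockaddr*>(&from), &fromlen);
            sem_post(&recv_sem);                    // hand the socket to the next worker
            if (n > 0)
                handle_packet(buf, n, from);        // process outside the critical section
        }
    }

    // In main(), after binding server_sock:
    //   sem_init(&recv_sem, 0, 1);                 // binary semaphore: one reader at a time
    //   for (int i = 0; i < 5; ++i)
    //       std::thread(worker).detach();

With an initial count of 1, the semaphore guarantees that at most one thread is ever blocked in recvfrom(), which is exactly what the original 5-thread pool was missing.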

Your recvfrom() should be in the master thread, and when it gets data you should pass the UDP client's address (IP:port) and the data to the helper threads.

Passing the IP:port and data can be done either by spawning a new thread every time the master thread receives a UDP packet, or by handing them to the helper threads through a message queue.
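A minimal sketch of the thread-per-packet variant, assuming POSIX sockets and C++11 threads; handle_datagram() and master_loop() are hypothetical names:

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <thread>
    #include <vector>

    // The helper thread gets its own copy of the payload and of the sender's
    // address, so it never touches the master thread's buffer.
    static void handle_datagram(std::vector<char> data, sockaddr_in from) {
        // decode the audio / stream data back to 'from' here
        (void)data; (void)from;
    }

    static void master_loop(int sock) {
        char buf[1500];
        for (;;) {
            sockaddr_in from{};
            socklen_t fromlen = sizeof(from);
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                 reinterpret_cast<sockaddr*>(&from), &fromlen);
            if (n <= 0) continue;

            // Copy IP:port + payload and hand them to a new helper thread.
            std::thread(handle_datagram,
                        std::vector<char>(buf, buf + n), from).detach();
        }
    }

Spawning a thread for every packet gets expensive at audio packet rates, so the message-queue variant, with a pool of long-lived helper threads draining a shared queue (much like the dispatch sketch above), usually scales better.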

I think your main problem is the non-persistent nature of UDP. UDP does not keep your connections alive; each datagram is independent. Depending on your application, in the worst case you will have concurrent threads all reading whatever data arrives first, i.e. recvfrom() will unblock in a thread even when it is not that thread's turn.

I think the way to go is to use select() in the main thread and, with a concurrent buffer, manage what each thread will do. In this solution you can have one thread per client, or one thread per file, as long as you keep the necessary per-client information to make sure you're sending the right part of the file.
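A bare-bones sketch of the select() part of that design, assuming a single bound UDP socket; enqueue_for_worker() is a hypothetical stand-in for the concurrent buffer the worker threads consume:

    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <vector>

    // Hypothetical hand-off into the concurrent buffer that worker threads drain.
    static void enqueue_for_worker(std::vector<char> data, sockaddr_in from) {
        (void)data; (void)from;   // push into the shared, mutex-protected buffer here
    }

    static void main_loop(int sock) {
        char buf[1500];
        for (;;) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(sock, &readfds);

            // Block until the socket is readable (a timeout could be passed instead of nullptr).
            if (select(sock + 1, &readfds, nullptr, nullptr, nullptr) <= 0)
                continue;

            if (FD_ISSET(sock, &readfds)) {
                sockaddr_in from{};
                socklen_t fromlen = sizeof(from);
                ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                     reinterpret_cast<sockaddr*>(&from), &fromlen);
                if (n > 0)
                    enqueue_for_worker(std::vector<char>(buf, buf + n), from);
            }
        }
    }

With only one UDP socket, select() mainly buys you a place to add timeouts or extra sockets later; the hand-off into the concurrent buffer is where the per-client bookkeeping described above would live.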

TCP is another way to do it, since it keeps a connection alive for each thread you run, but it is not the best transport for applications that can tolerate data loss.
