I'm working on the network layer of a multithreaded Windows/Unix server application that uses Berkeley sockets and stumbled upon a problem: sockets are added to the watched set by other threads while one thread is blocked in select(), and that thread has no way of noticing the new sockets until select() returns.
One possible solution is to add a timeout to the select() call. I have seen that suggested on sites covering select()-based networking (some dated 15 years back).
The question is:
Are there any other solutions? Waiting for the timeout still leads to some level of starvation and wastes CPU time in the select-waiter thread. I thought about redesigning the application, but sockets are also added from threads that the select-waiter thread has (and most definitely should have) no knowledge of, so the situation cannot be avoided.
If not, what sort of timeout should be chosen to achieve the best performance / quality of service?
Also note that I do realize it would be a better idea to use a more advanced API (IOCP, kqueue, ...) or a library that would do this for me, but that is not an option for me at this point.
Thanks
Create an additional socket pair and add one of these sockets to every select(). To interrupt a running select(), send a message to it via the other socket.
On the Unix side only, one can send any signal (e.g. SIGUSR1) to the waiting thread with pthread_kill. select will then return a negative value, and errno will be set to EINTR. But there is nothing like that on the Windows side.