
boost asio service queue depth and policies

I understand that a running asio service is like a queue I can use to post tasks that a thread will execute sequentially. However, like any queue, I guess it has limits. Is it possible to set this limit for asio services? Is it possible to set the policy to follow when the queue is full (i.e. blocking, non-blocking, etc.)?

UPDATE

Suppose I have a thread running an asio::service and a timer that posts a task to this thread every 10 ms. Receiving a task triggers a method invocation that makes the thread sleep for 100 ms. I therefore have a timer posting 100 tasks per second to a thread that is capable of performing only 10 tasks per second. It is evident that this situation will diverge. However, when dealing with queues, there are usually means to dimension the queue depth (100? 1000? posts enqueued, etc.) or to specify the policy a sender should follow when the queue is full (i.e. shall it wait, or shall it drop the request and continue?). My question is: how do I set these features in asio::service?
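The divergence in this scenario can be made concrete with back-of-envelope arithmetic. The function below is a hypothetical illustration (not part of any Asio API): it models the backlog of an unbounded queue fed at 100 posts/s and drained at 10 handlers/s.

```cpp
#include <cstddef>

// Hypothetical model of the scenario above: a post every 10 ms (100/s)
// against handlers that each take 100 ms (10/s). With no queue limit,
// the backlog grows by the difference every second.
std::size_t backlog_after(std::size_t seconds) {
    const std::size_t posts_per_sec = 100; // timer fires every 10 ms
    const std::size_t done_per_sec  = 10;  // each handler sleeps 100 ms
    return seconds * (posts_per_sec - done_per_sec);
}
```

After one second, 90 handlers are already waiting; after a minute, 5400. Without a depth limit or a back-pressure policy, memory use grows without bound.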

Asio does not provide policies to control its internal data structures. However, it does provide hooks into handler allocation: asio_handler_allocate and asio_handler_deallocate. An application can use these hooks to limit the number of outstanding asynchronous operations, and to define the behavior when the user-specified limit is reached.

There are a few key points to consider:

  • asio_handler_allocate is expected to return a valid memory block or throw an exception. If an exception is thrown from asio_handler_allocate, it will continue unwinding the stack up through calls such as io_service::post(). Thus, for non-blocking behavior when the maximum is reached, throwing may be the only option.
  • Consider the effects on composed operations, such as async_read, where asio_handler_allocate and asio_handler_deallocate may be called multiple times. If an exception is thrown, a thread's stack will unwind at least to the point at which io_service::run was invoked. If blocking occurs instead, it could be possible for all threads servicing the reactor to become blocked, essentially preventing any asynchronous work from completing.

Here is an allocation example from the Boost.Asio examples showing a memory pool being used for handlers.
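Setting the Boost-specific hook machinery aside, the limiting logic such a hook could apply can be sketched in standard C++. The class below is a hypothetical counting gate (not an Asio type): it caps the number of outstanding allocations and implements a non-blocking policy by throwing when the cap is hit, as discussed above.

```cpp
#include <atomic>
#include <cstddef>
#include <new>
#include <stdexcept>

// Hypothetical sketch of the counting logic an asio_handler_allocate /
// asio_handler_deallocate pair could use to bound outstanding handlers.
class handler_gate {
public:
    explicit handler_gate(std::size_t max) : max_(max), outstanding_(0) {}

    // Called where asio_handler_allocate would run: either hand back
    // memory or, if the cap is reached, throw (the non-blocking policy).
    void* allocate(std::size_t size) {
        if (outstanding_.fetch_add(1) >= max_) {
            outstanding_.fetch_sub(1); // undo the optimistic increment
            throw std::length_error("outstanding handler limit reached");
        }
        return ::operator new(size);
    }

    // Called where asio_handler_deallocate would run: free the memory
    // and release one slot for future operations.
    void deallocate(void* p) {
        ::operator delete(p);
        outstanding_.fetch_sub(1);
    }

private:
    const std::size_t max_;
    std::atomic<std::size_t> outstanding_;
};
```

A blocking policy would instead wait on a condition variable in allocate() until a slot frees up, with the caveat from the second bullet: if every thread running io_service::run blocks there, no handler can complete to release a slot.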

There is no such setting that I am aware of. As for your example, you are much better off having a timer in the io_service's thread that performs the long-running task and re-schedules itself once the work is done; otherwise you will never drain your queue. And if you want a real-time system with hard timing constraints, neither a general-purpose OS nor Boost.Asio is up to the task.
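The re-scheduling pattern suggested above can be sketched without Boost at all (in Asio you would arm a deadline_timer again from inside its completion handler). The function below is a plain-threads illustration: the next run is scheduled only after the current task finishes, so work can never pile up faster than it is consumed.

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Sketch of the self-rescheduling pattern: unlike a fixed-rate timer
// that keeps posting regardless of progress, the pause is inserted
// *after* each task completes, so at most one task is pending at a time.
void run_rescheduling(const std::function<void()>& task,
                      std::chrono::milliseconds pause,
                      int iterations) {
    for (int i = 0; i < iterations; ++i) {
        task();                             // long-running work
        std::this_thread::sleep_for(pause); // then wait before the next run
    }
}
```

The trade-off is that the effective period becomes (task duration + pause) rather than a fixed rate, which is exactly why this pattern cannot diverge.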
