
asio::io_service priority queue handling with multiple threads

I use asio::io_service a lot in my multi-threaded C++ code. Recently I discovered a bottleneck in my code caused by the lack of priority handling for various tasks. I naturally came across this boost example for ensuring that some tasks have higher priority than the rest. But that example only works in a single-threaded application.

Typically, my code uses this pattern:

boost::asio::io_service ioService;
boost::thread_group threadPool;
boost::asio::io_service::work work(ioService);

int noOfCores = boost::thread::hardware_concurrency();
for (int i = 0 ; i < noOfCores ; i ++)
{
    threadPool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService));
}
threadPool.join_all();

I do lots of ioService.post() calls from various other threads, and all those handlers have the same priority.

Now, if I want to use handler_priority_queue from the boost example, I first have to add some mutex protection to the add() and execute_all() functions.

boost::mutex _mtx;
void add(int priority, boost::function<void()> function)
{
    boost::lock_guard<boost::mutex> lock(_mtx);
    handlers_.push(queued_handler(priority, function));
}

void execute_all()
{
    // Note: empty() must also be checked under the lock, otherwise it races
    // with concurrent add() calls.
    boost::unique_lock<boost::mutex> lock(_mtx);
    while (!handlers_.empty())
    {
        queued_handler handler = handlers_.top();
        handlers_.pop();
        lock.unlock();
        handler.execute();  // run the handler outside the lock
        lock.lock();
    }
}

However, I am not sure what replaces the following line in my current code.

    threadPool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService));

I obviously need to replace io_service::run with handler_priority_queue::execute_all() somehow. But how? What is the best way?

I could do this...

    threadPool.create_thread(boost::bind(&handler_priority_queue::execute_all,
                                         &pri_queue));

But execute_all() returns right away. I think execute_all() needs to be redesigned somehow. How about this? It works, but I am not sure about the pitfalls.

void execute_all()
{
    // Block until the io_service delivers one wrapped handler; invoking the
    // wrapper pushes the real handler into handlers_ (as in the boost example).
    while (ioService.run_one())
    {
        // Drain any further handlers that are already ready, so they can
        // compete on priority before we start executing.
        while (ioService.poll_one());
        while (true)
        {
            queued_handler handler;
            {
                boost::lock_guard<boost::mutex> lock(_mtx);
                if (handlers_.empty())
                {
                    break;
                }
                handler = handlers_.top();
                handlers_.pop();
            }
            handler.execute();  // run the handler outside the lock
        }
    }
}

Asio doesn't provide such a possibility. The example is limited to a single thread because it doesn't modify Asio's scheduler. The scheduler distributes tasks among threads in FIFO order, and I'm not aware of any way to change this. As long as there is no way to specify a priority when you initiate an asynchronous operation (e.g. io_service::post), the scheduler knows nothing about task priorities and so cannot use them.

Of course, you can use a priority queue per thread, but in that case your priorities will have only a limited, "thread-local" effect: only tasks scheduled onto the same thread will be executed according to their priorities. Consider this example (pseudo-code):

io_service.post(task(priority_1));
io_service.post(task(priority_2));
io_service.post(task(priority_3));

thread_1(io_service.run());
thread_2(io_service.run());

Let's assume tasks 1 and 3 are picked up by thread_1, and task 2 by thread_2. Then thread_1 would execute task 3 and then task 1, if a priority queue is used as in the linked example. But thread_2 knows nothing about those tasks and will execute task 2 immediately, potentially before task 3.

Your options are either to implement your own scheduler (the complexity depends on your requirements, but in general it can be tricky) or to find a third-party solution, e.g. I'd check Intel TBB's task priorities.

EDIT: elaborating on the "own scheduler" case:

For a simpler version, you'd need a really good multiple-producer/multiple-consumer concurrent priority queue, a thread pool, and threads that pull tasks from that queue. This would be quite a fair solution from the priority point of view: higher-priority tasks would (almost always) start execution before lower-priority ones. If performance is more important than fairness, this solution can be improved. But that would deserve another question and a lot of details about your particular case.
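To make that concrete, here is a minimal sketch of such a scheduler using only the standard library: a single mutex-protected priority queue drained by a pool of worker threads. The class name `priority_pool` and all member names are illustrative, not part of Asio or Boost; a production version would want a lock-free or sharded queue, which this sketch does not attempt.

```cpp
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>
#include <vector>

// Hypothetical minimal "own scheduler": a multi-producer/multi-consumer
// priority queue shared by all worker threads, so priorities are global,
// not thread-local.
class priority_pool
{
public:
    explicit priority_pool(std::size_t n_threads)
    {
        for (std::size_t i = 0; i < n_threads; ++i)
            workers_.emplace_back([this] { worker_loop(); });
    }

    // Drains remaining tasks, then joins all workers.
    ~priority_pool()
    {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            stopping_ = true;
        }
        cv_.notify_all();
        for (auto& t : workers_)
            t.join();
    }

    // The priority-aware equivalent of io_service::post().
    void post(int priority, std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            tasks_.emplace(priority, std::move(task));
        }
        cv_.notify_one();  // wake one idle worker
    }

private:
    using entry = std::pair<int, std::function<void()>>;

    struct by_priority
    {
        bool operator()(const entry& a, const entry& b) const
        {
            return a.first < b.first;  // max-heap: highest priority on top
        }
    };

    void worker_loop()
    {
        for (;;)
        {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mtx_);
                cv_.wait(lock, [this] { return stopping_ || !tasks_.empty(); });
                if (tasks_.empty())  // stopping_ set and queue drained
                    return;
                task = std::move(const_cast<entry&>(tasks_.top()).second);
                tasks_.pop();
            }
            task();  // execute outside the lock
        }
    }

    std::mutex mtx_;
    std::condition_variable cv_;
    std::priority_queue<entry, std::vector<entry>, by_priority> tasks_;
    bool stopping_ = false;
    std::vector<std::thread> workers_;
};
```

Because every worker pulls from the same queue, a high-priority task posted from any thread is picked up by whichever worker becomes idle next, which avoids the thread-local effect described above.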
