
Threads lock mutex faster than std::condition_variable::wait()

I'm trying to understand std::condition_variable.

I expect my code to work like this:
1. main locks mx
2. main calls wait() <= lock released here
3. a thread locks mx
4. the thread sends a notify
5. the thread unlocks mx
6. main's wait() finishes and relocks mx

So why can the threads lock mx faster than wait() can after a notify?
Example

#include <iostream>
#include <future>
#include <condition_variable>
#include <vector>

using namespace std::chrono_literals;


std::mutex finish_mx;
std::condition_variable finish_cv;

int execute(int val, const std::shared_future<void> &ready){
    ready.wait();

    std::lock_guard<std::mutex> lock(finish_mx);
    std::cout<<"Locked: "<<val<<std::endl;
    finish_cv.notify_one();

    return val;
}


int main()
{
    std::promise<void> promise;
    auto shared = promise.get_future().share();

    std::vector<std::future<int>> pool;
    for (int i=0; i<10; ++i){
        auto fut = std::async(std::launch::async, execute, i, std::cref(shared));
        pool.push_back(std::move(fut));
    }

    std::this_thread::sleep_for(100ms);

    std::unique_lock<std::mutex> finish_lock(finish_mx);
    promise.set_value();

    for (int i=0; pool.size() > 0; ++i)
    {
        finish_cv.wait(finish_lock);
        std::cout<<"Notifies: "<<i<<std::endl;

        for (auto it = pool.begin(); it != pool.end(); ++it) {
            auto state = it->wait_for(0ms);
            if (state == std::future_status::ready) {
                pool.erase(it);
                break;
            }
        }
    }
}

example output:

Locked: 6
Locked: 7
Locked: 8
Locked: 9
Locked: 5
Locked: 4
Locked: 3
Locked: 2
Locked: 1
Notifies: 0
Locked: 0
Notifies: 1

Edit

for (int i=0; pool.size() > 0; ++i)
{
    finish_cv.wait(finish_lock);
    std::cout<<"Notifies: "<<i<<std::endl;

    auto it = pool.begin();
    while (it != pool.end()) {
        auto state = it->wait_for(0ms);
        if (state == std::future_status::ready) {
            /* process result */
            it = pool.erase(it);
        } else {
            ++it;
        }
    }
}

Answer

This depends on how your OS schedules threads that are waiting to acquire a mutex lock. All the execute threads are already waiting to acquire the mutex lock before the first notify_one, so if there's a simple FIFO queue of threads waiting to lock the mutex then they are all ahead of the main thread in the queue. As each thread unlocks the mutex, the next one in the queue locks it.

This has nothing to do with mutexes being "faster" than condition variables; the condition variable has to lock the same mutex to return from the wait.

As soon as the future becomes ready, all the execute threads return from the wait and all try to lock the mutex, joining the queue of waiters. When the condition variable starts to wait, the mutex is unlocked, and one of the other threads (the one at the front of the queue) gets the lock. It calls notify_one, which causes the condition variable to try to relock the mutex, joining the back of the queue. The notifying thread unlocks the mutex, and the next thread in the queue gets the lock and calls notify_one (which does nothing, because the condition variable has already been notified and is waiting to lock the mutex). Then the next thread in the queue gets the mutex, and so on.

It seems that one of the execute threads didn't run quickly enough to get in the queue before the first notify_one call, so it ended up in the queue behind the condition variable.
