C++11 condition variable semantics

I am trying to understand the semantics of std::condition_variable. I thought I had a decent understanding of the C++11 concurrency model (atomics, memory ordering, and the corresponding guarantees and formal relations), but the description of how to use condition variables correctly seems to contradict my understanding.

TL;DR

The reference says:

The thread that intends to modify the variable has to

  1. acquire a std::mutex (typically via std::lock_guard)
  2. perform the modification while the lock is held
  3. execute notify_one or notify_all on the std::condition_variable (the lock does not need to be held for notification)

Even if the shared variable is atomic, it must be modified under the mutex in order to correctly publish the modification to the waiting thread.
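
Concretely, I read this as recommending the following pattern (a minimal sketch with illustrative names, not taken from the reference):

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cond;
bool ready = false;                         // shared state, protected by m

void modifying_thread() {
    {
        std::lock_guard<std::mutex> lk(m);  // 1. acquire the mutex
        ready = true;                       // 2. modify while the lock is held
    }
    cond.notify_one();                      // 3. notify (lock need not be held)
}

void waiting_thread() {
    std::unique_lock<std::mutex> lk(m);
    cond.wait(lk, [] { return ready; });    // predicate is checked under the lock
    // here ready == true, and everything modifying_thread wrote before
    // releasing the lock is visible
}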

I can see why the modification may have to be done before releasing the mutex, but the above seems fairly clear that it has to happen while holding the mutex, i.e. it cannot happen before acquiring it. Am I reading this correctly?

In more detail

If my reading of the above is correct, then why is this so? Suppose we do the modification(s) before the critical section (ensuring no race conditions, via correct use of atomics and locks). For example:

std::atomic<bool> dummy{false}; // value-initialized (a default-constructed std::atomic is uninitialized before C++20)
std::mutex mtx;
std::condition_variable cv;

void thread1() {
    //...
    // Modify some program data, possibly in many places, over a long period of time
    dummy.store(true, std::memory_order_relaxed); // for simplicity
    //...
    mtx.lock(); mtx.unlock(); // deliberately empty critical section: the modification was done before acquiring the lock
    cv.notify_one();
    //...
}

void thread2() {
    // ...
    { std::unique_lock<std::mutex> ul(mtx);
        cv.wait(ul, []() -> bool {
            // A complex condition, possibly involving data from many places
            return dummy.load(std::memory_order_relaxed); // for simplicity
        });
    }
    // ...
}

My understanding is that cv.wait() locks on mtx before proceeding (to check the condition and execute the rest of the program). Furthermore, std::mutex::lock() counts as an acquire operation and std::mutex::unlock() counts as a release operation. Would this not imply that the unlock() in thread1 synchronizes-with the lock() in thread2, and hence all atomic and even non-atomic stores performed in thread1 before unlock() are visible to thread2 when it wakes up?

Formally:  store --sequenced-before--> unlock() --synchronizes-with--> lock() --sequenced-before--> load
...and so: store --happens-before--> load

Thanks a lot for any answers!

[Note: I find it weird that I haven't found an answer to this after extensive googling; I'm sorry if it is a duplicate...]

Answer

Consider the period in thread1 before it locks the mutex, and the period in thread2 before cv.wait first unlocks the mutex (i.e. up to and including the initial predicate check).

thread1 does

  • Modify a lot of program data
  • dummy.store(true, std::memory_order_relaxed)

thread2 does

  • Lock the mutex
  • dummy.load(std::memory_order_relaxed) (to check the predicate before waiting)

These two regions are not ordered with respect to each other: nothing establishes a happens-before relationship between thread1's data modifications and thread2's predicate check. So if thread2 sees a true value for dummy at this check and continues on, there is no guarantee that any of the data modifications are visible to it. thread2 will continue, having correctly seen the value of dummy but without necessarily seeing the modifications.
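
To see why, note that cv.wait(ul, pred) behaves like the loop below (a simplified equivalent, not actual library source), so the very first pred() call can run at a point where thread1 has not yet locked, or even reached, mtx, and the mutex therefore imposes no ordering on that check:

#include <condition_variable>
#include <mutex>

// roughly what cv.wait(ul, pred) does
template <class Predicate>
void wait_equivalent(std::condition_variable& cv,
                     std::unique_lock<std::mutex>& ul,
                     Predicate pred) {
    while (!pred())   // first check may happen before thread1 ever locks mtx
        cv.wait(ul);  // atomically unlocks ul and blocks; re-locks before returning
}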

You say "ensuring no race conditions, via correct use of atomics and locks", which is very open-ended. Relaxed atomics are enough to avoid data races (so the program is well-defined), but they do not make the other modifications visible to thread2. Hypothetical additional synchronization around those other data modifications could, of course, guarantee visibility.

In other words, there should be some release-acquire ordering between the store and the load.
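
For example, one way to get that (assuming the other data modifications are plain writes performed by thread1 before the store, represented below by a made-up variable other_data) is to upgrade the store/load pair in your code to release/acquire, so that the flag also publishes the earlier writes:

#include <atomic>
#include <condition_variable>
#include <mutex>

std::atomic<bool> dummy{false};
std::mutex mtx;
std::condition_variable cv;
int other_data = 0;   // stands in for the other program data

void thread1() {
    other_data = 42;                              // plain, non-atomic write
    dummy.store(true, std::memory_order_release); // publishes other_data
    mtx.lock(); mtx.unlock();                     // still needed so the notify cannot be lost
    cv.notify_one();
}

void thread2() {
    std::unique_lock<std::mutex> ul(mtx);
    cv.wait(ul, [] {
        // an acquire load that reads true synchronizes with the release store
        return dummy.load(std::memory_order_acquire);
    });
    // other_data == 42 is now guaranteed to be visible
}

Alternatively, doing the store (and the other modifications) while holding mtx, as the reference you quoted describes, achieves the same effect through the mutex's own acquire/release semantics.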

This is similar to: waiting on worker thread using std::atomic flag and std::condition_variable
