Waiting on a worker thread using a std::atomic flag and std::condition_variable
Here is a C++17 snippet where one thread waits for another to reach a certain stage:
std::condition_variable cv;
std::atomic<bool> ready_flag{false};
std::mutex m;
// thread 1
... // start a thread, then wait for it to reach certain stage
auto lock = std::unique_lock(m);
cv.wait(lock, [&]{ return ready_flag.load(std::memory_order_acquire); });
// thread 2
... // modify state, etc
ready_flag.store(true, std::memory_order_release);
std::lock_guard{m}; // NOTE: this is lock immediately followed by unlock
cv.notify_all();
As I understand it, this is a valid way to use an atomic flag and a condition variable to achieve the goal. For example, there is no need to use std::memory_order_seq_cst here.
Is it possible to relax this code even further? For example:

std::memory_order_relaxed in ready_flag.load()
std::atomic_thread_fence() instead of std::lock_guard{m};
The combined use of a std::atomic and a std::condition_variable is unconventional and should be avoided, but it can be interesting to analyse the behavior if you come across this in a code review and need to decide whether a patch is required.
I believe there are two problems:
Since ready_flag is not protected by the std::mutex, you cannot rely on the guarantee that thread 1 will observe the updated value once wait wakes up from notify_one. If the store to ready_flag in thread 2 is delayed by the platform, thread 1 may see the old value (false) and enter wait again (possibly causing a deadlock).
Whether a delayed store is possible depends on your platform. On a strongly ordered platform such as x86, you are probably safe, but again, there are no guarantees from the C++ standard.
Also note that using a stronger memory ordering does not help here.
Let's say the store is not delayed, and once wait wakes up, ready_flag loads true. This time, based on the memory ordering you are using, the store to ready_flag in thread 2 synchronizes with the load in thread 1, which can now safely access the modified state written by thread 2.
But this only works one time. You cannot reset ready_flag and write to the shared state again; that would introduce a data race, since the shared state could then be accessed unsynchronized by both threads.
Is it possible to relax this code even further?
Because you are modifying the shared state outside the lock, release/acquire ordering on ready_flag is necessary for synchronization.
To make this a portable solution, access both the shared state and ready_flag while protected by the mutex (ready_flag can then be a plain bool). This is how the mechanism is designed to be used.
std::condition_variable cv;
bool ready_flag{false}; // not atomic
std::mutex m;
// thread 1
... // start a thread, then wait for it to reach certain stage
auto lock = std::unique_lock(m);
cv.wait(lock, [&] { return ready_flag; });
ready_flag = false;
// access shared state
// thread 2
auto lock = std::unique_lock(m);
... // modify state, etc
ready_flag = true;
lock.unlock(); // optimization
cv.notify_one();
Unlocking the mutex before the call to notify_one is an optimization. See this question for more details.
Firstly: this code is indeed valid. The lock_guard prior to the notify_one call ensures that the waiting thread will see the correct value of ready_flag when it wakes, whether that is due to a spurious wake or due to the call to notify_one.
Secondly: if the only accesses to ready_flag are those shown here, then the use of atomic is overkill. Move the write to ready_flag inside the scope of the lock_guard on the writer thread and use a simpler, more conventional pattern.
If you stick with this pattern, then whether or not you can use memory_order_relaxed depends on the ordering semantics you require.
If the thread that sets ready_flag also writes to other objects which will be read by the reader thread, then you need the acquire/release semantics in order to ensure that the data is correctly visible: the reader thread may lock the mutex and see the new value of ready_flag before the writer thread has locked the mutex, in which case the mutex itself would provide no ordering guarantees.
If there is no other data touched by the thread that sets ready_flag, or that data is protected by another mutex or other synchronization mechanism, then you can use memory_order_relaxed everywhere, since it is only the value of ready_flag itself that you care about, not the ordering of any other writes.
atomic_thread_fence doesn't help with this code under any circumstances. If you are using a condition variable, then the lock_guard{m} is required.