
Synchronization with C++ atomic memory fence

I have a question about the synchronization of the code below using memory fences.

#include <atomic>
#include <thread>

std::atomic<int> a{0};
std::atomic<int> b{0};

void increase_b() {
  // Release fence followed by a relaxed store.
  std::atomic_thread_fence(std::memory_order_release);
  b.store(1, std::memory_order_relaxed);
}

bool diff() {
  int val_a = a.load(std::memory_order_relaxed);
  int val_b = b.load(std::memory_order_relaxed);
  return val_b > val_a;
}

void f1() {
  increase_b();
  // Full barrier after the store.
  std::atomic_thread_fence(std::memory_order_seq_cst);
}

void f2() {
  // Full barrier before the loads.
  std::atomic_thread_fence(std::memory_order_seq_cst);
  bool result = diff();
}

int main() {
  std::thread t1(f1);
  std::thread t2(f2);
  t1.join(); t2.join();
}

Assume t1 has finished f1 and then t2 just started f2; will t2 see b incremented?

Your code is overcomplicated. a = 0 never changes, so it always reads as 0. You might as well just have atomic<int> b = 0; and a single load that just returns b.load(), as in the sketch below.
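A minimal sketch of that simplification (the helper name check_b is mine, not from the question):

#include <atomic>

std::atomic<int> b{0};

// Since a is always 0, diff() reduces to asking whether b has been set.
bool check_b() {
  return b.load(std::memory_order_relaxed) > 0;
}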

Assume t1 has finished f1 and then t2 just started f2, will t2 see b incremented?

There's no way for you to detect that this is how the timing worked out, unless you put t1.join() ahead of the std::thread t2(f2); construction (as sketched below). That would require that everything in thread 2 is sequenced after everything in thread 1. (I think that's true even without a seq_cst fence at the end of f1, but the fence doesn't hurt. I think thread.join makes sure everything done inside a thread is visible after thread.join returns.)
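A sketch of that restructuring; with the join moved before the second thread's construction, the ordering is a guarantee rather than a timing accident:

int main() {
  std::thread t1(f1);
  t1.join();             // completion of t1 synchronizes-with this join
  std::thread t2(f2);    // t2 starts strictly after everything f1 did,
                         // so diff() must see b == 1
  t2.join();
}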

But yes, that ordering can happen by chance, and then of course it works.

There's no guarantee that's even a meaningful condition in C++ terms.

But sure, for most (all?) real implementations it's something that can happen. And a thread_fence(mo_seq_cst) will compile to a full barrier that blocks that thread until the store commits (becomes globally visible to all threads). So execution can't leave f1 until reads from other threads can see the updated value of b. (The C++ standard defines ordering and fences in terms of creating synchronizes-with relationships, not in terms of compiling to full barriers that flush the store buffer. The standard doesn't mention a store buffer or StoreLoad reordering or any of the CPU memory-ordering details.)
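For reference, a sketch of the fence pairing the standard's synchronizes-with rules are built around: a release fence before a relaxed store in one thread, paired with a relaxed load followed by an acquire fence in the other. The payload variable, flag, and the writer/reader names are mine, not from the question:

#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;            // plain, non-atomic data
std::atomic<int> flag{0};

void writer() {
  payload = 42;                                          // plain write
  std::atomic_thread_fence(std::memory_order_release);   // release fence
  flag.store(1, std::memory_order_relaxed);              // relaxed store
}

void reader() {
  while (flag.load(std::memory_order_relaxed) != 1) {}   // relaxed load (spin)
  std::atomic_thread_fence(std::memory_order_acquire);   // acquire fence
  // The release fence synchronizes-with the acquire fence, so the
  // plain write to payload is guaranteed to be visible here.
  assert(payload == 42);
}

int main() {
  std::thread t1(writer), t2(reader);
  t1.join(); t2.join();
}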

Given the synthetic condition, the threads actually are ordered wrt. each other, and it works just as if everything had been done in a single thread.


The loads in diff() aren't ordered wrt. each other because they're both mo_relaxed. But a is never modified by any thread, so the only question is whether b.load() can happen before the thread even started, before the f1 store is visible. In real implementations it can't, because of what "and then t2 just started f2" means. If it could load the old value, then you wouldn't be able to say "and then", so it's almost a tautology.

The thread_fence(seq_cst) before the loads doesn't really help anything. I guess it stops b.load() from reordering with the thread-startup machinery.
