
Ordering of read/write operations in a C++ queue

Let's assume we have a SyncQueue class with the following implementation:

#include <memory>
#include <mutex>
#include <queue>

class SyncQueue {
    std::mutex mtx;
    std::queue<std::shared_ptr<ComplexType>> m_q;
public:
    void push(const std::shared_ptr<ComplexType> & ptr) {
        std::lock_guard<std::mutex> lck(mtx);
        m_q.push(ptr);
    }
    std::shared_ptr<ComplexType> pop() {
        std::lock_guard<std::mutex> lck(mtx);
        std::shared_ptr<ComplexType> rv(m_q.front());
        m_q.pop();
        return rv;
    }
};

then we have this code that uses it:

SyncQueue q;

// Thread 1, Producer:
std::shared_ptr<ComplexType> ct(new ComplexType);
ct->foo = 3;
q.push(ct);

// Thread 2, Consumer:
std::shared_ptr<ComplexType> ct(q.pop());
std::cout << ct->foo << std::endl;

Am I guaranteed to get 3 when ct->foo is printed? mtx provides happens-before semantics for the pointer itself, but I'm not sure that says anything about the memory of ComplexType. If it is guaranteed, does it mean that every mutex lock ( std::lock_guard<std::mutex> lck(mtx); ) forces a full cache invalidation of any modified memory locations, up to the point where the memory hierarchies of the independent cores merge?

std::mutex conforms to the Mutex requirements ( http://en.cppreference.com/w/cpp/concept/Mutex ):

Prior m.unlock() operations on the same mutex synchronize-with this lock operation (equivalent to release-acquire std::memory_order).

Release-acquire ordering is explained here ( http://en.cppreference.com/w/cpp/atomic/memory_order ):

Release-Acquire ordering

If an atomic store in thread A is tagged memory_order_release and an atomic load in thread B from the same variable is tagged memory_order_acquire, all memory writes (non-atomic and relaxed atomic) that happened-before the atomic store from the point of view of thread A become visible side-effects in thread B; that is, once the atomic load is completed, thread B is guaranteed to see everything thread A wrote to memory.

The synchronization is established only between the threads releasing and acquiring the same atomic variable. Other threads can see a different order of memory accesses than either or both of the synchronized threads.

The code example in that section is very similar to yours. So it is guaranteed that all writes in thread 1 happen before the mutex unlock in push().

Of course, this assumes that "ct->foo = 3" has no special tricky meaning whereby the actual assignment happens in yet another thread :)

Regarding cache invalidation, from cppreference:

On strongly-ordered systems (x86, SPARC TSO, IBM mainframe), release-acquire ordering is automatic for the majority of operations. No additional CPU instructions are issued for this synchronization mode; only certain compiler optimizations are affected (e.g. the compiler is prohibited from moving non-atomic stores past the atomic store-release or performing non-atomic loads earlier than the atomic load-acquire). On weakly-ordered systems (ARM, Itanium, PowerPC), special CPU load or memory fence instructions have to be used.

So it really depends on the architecture.
