
C++11 equivalent to boost shared_mutex

Is there a C++11 equivalent for boost::shared_mutex? Or another solution to handle a multiple-reader / single-writer situation in C++11?

I tried but failed to get shared_mutex into C++11. It has been proposed for a future standard. The proposal is here.

Edit: A revised version (N3659) was accepted for C++14.

Here is an implementation:

http://howardhinnant.github.io/shared_mutex

http://howardhinnant.github.io/shared_mutex.cpp
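Since the revised proposal was accepted for C++14 as std::shared_timed_mutex, the reader/writer pattern it enables can be sketched like this (the PhoneBook class and its members are illustrative, not part of the proposal or the reference implementation):

    #include <shared_mutex>  // C++14: std::shared_timed_mutex, std::shared_lock
    #include <mutex>         // std::unique_lock
    #include <map>
    #include <string>

    class PhoneBook {
        mutable std::shared_timed_mutex mtx_;         // guards entries_
        std::map<std::string, std::string> entries_;
    public:
        // Any number of readers may hold the shared lock concurrently.
        std::string find(const std::string& name) const {
            std::shared_lock<std::shared_timed_mutex> lock(mtx_);
            auto it = entries_.find(name);
            return it == entries_.end() ? std::string{} : it->second;
        }
        // A writer takes the exclusive lock, excluding readers and other writers.
        void insert(const std::string& name, const std::string& number) {
            std::unique_lock<std::shared_timed_mutex> lock(mtx_);
            entries_[name] = number;
        }
    };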

Simple... There isn't one. There is no standard C++11 implementation of a readers-writer lock.

But you have a few options here.

  1. You are left to your own devices and have to write your own readers-writer lock.
  2. Use a platform-specific implementation such as Win32's, POSIX's, or Boost's, as you mention.
  3. Don't use one at all -- use a mutex that already exists in C++11.

Going with #1 and implementing your own is a scary undertaking, and it is easy to riddle your code with race conditions if you don't get it right. There is a reference implementation that may make the job a bit easier.
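To make the risk concrete, here is a minimal sketch of what such a hand-rolled readers-writer lock might look like using only C++11 primitives (std::mutex and std::condition_variable). It is illustrative only; among other shortcomings, a steady stream of readers can starve a waiting writer:

    #include <mutex>
    #include <condition_variable>

    // A minimal readers-writer lock built only from C++11 primitives.
    // Writers are not prioritized, so continuous readers can starve a
    // writer; a production implementation needs more care than this.
    class rw_lock {
        std::mutex mtx_;
        std::condition_variable cv_;
        int active_readers_ = 0;
        bool writer_active_ = false;
    public:
        void lock_shared() {
            std::unique_lock<std::mutex> lk(mtx_);
            cv_.wait(lk, [this] { return !writer_active_; });
            ++active_readers_;
        }
        void unlock_shared() {
            std::lock_guard<std::mutex> lk(mtx_);
            if (--active_readers_ == 0)
                cv_.notify_all();   // a waiting writer may now proceed
        }
        void lock() {
            std::unique_lock<std::mutex> lk(mtx_);
            cv_.wait(lk, [this] { return !writer_active_ && active_readers_ == 0; });
            writer_active_ = true;
        }
        void unlock() {
            std::lock_guard<std::mutex> lk(mtx_);
            writer_active_ = false;
            cv_.notify_all();       // wake both waiting readers and writers
        }
    };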

If you want platform-independent code, or don't want to include any extra libraries in your code for something as simple as a readers-writer lock, you can throw #2 out the window.

And #3 has a couple of caveats that most people don't realize: using a readers-writer lock is often less performant, and results in harder-to-understand code, than an equivalent implementation using a simple mutex. This is because of the extra book-keeping that has to go on behind the scenes of a readers-writer lock implementation.


I can only present your options; it is really up to you to weigh the costs and benefits of each and pick the one that works best.


Edit: C++17 now has a shared_mutex type for situations where the benefits of having multiple concurrent readers outweigh the performance cost of the shared_mutex itself.
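For illustration, a minimal sketch of how std::shared_mutex is typically used in C++17 (the Stats class and its members are made up for the example):

    #include <shared_mutex>   // C++17: std::shared_mutex; also std::shared_lock
    #include <mutex>          // std::unique_lock
    #include <vector>

    class Stats {
        mutable std::shared_mutex mtx_;   // guards samples_
        std::vector<double> samples_;
    public:
        double average() const {
            std::shared_lock<std::shared_mutex> lock(mtx_);  // shared: many readers at once
            double sum = 0.0;
            for (double s : samples_) sum += s;
            return samples_.empty() ? 0.0 : sum / samples_.size();
        }
        void add(double s) {
            std::unique_lock<std::shared_mutex> lock(mtx_);  // exclusive: one writer
            samples_.push_back(s);
        }
    };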

No, there is no equivalent of boost::shared_mutex in C++11.

Reader/writer locks are supported in C++14 and later, though:

  - std::shared_timed_mutex (C++14)
  - std::shared_mutex (C++17)

The difference is that std::shared_timed_mutex adds additional timing operations. It implements the SharedTimedMutex concept, which is an extension of the simpler SharedMutex concept implemented by std::shared_mutex.
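A rough sketch of those extra timing operations on std::shared_timed_mutex, namely try_lock_shared_for and try_lock_for (the timeouts and function names below are illustrative):

    #include <shared_mutex>   // std::shared_timed_mutex (C++14)
    #include <chrono>
    #include <iostream>

    std::shared_timed_mutex mtx;

    void timed_reader() {
        using namespace std::chrono_literals;
        // Give up if a shared (reader) lock cannot be obtained within 10 ms,
        // e.g. because a writer is holding the mutex for a long time.
        if (mtx.try_lock_shared_for(10ms)) {
            // ... read the shared data ...
            mtx.unlock_shared();
        } else {
            std::cout << "reader timed out\n";
        }
    }

    void timed_writer() {
        // The exclusive (writer) side has the same timed variants.
        if (mtx.try_lock_for(std::chrono::milliseconds(50))) {
            // ... modify the shared data ...
            mtx.unlock();
        } else {
            std::cout << "writer timed out\n";
        }
    }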


Keep in mind that acquiring a lock on a reader/writer mutex is more costly than acquiring a normal std::mutex. As a consequence, a reader/writer mutex will not improve performance if you have frequent but short read operations. It is better suited for scenarios where read operations are frequent and expensive. To quote from Anthony Williams' post:

The cost of locking a shared_mutex is higher than that of locking a plain std::mutex, even for the reader threads. This is a necessary part of the functionality --- there are more possible states of a shared_mutex than a mutex, and the code must handle them correctly. This cost comes in both the size of the object (which in both your implementation and my POSIX implementation includes both a plain mutex and a condition variable), and in the performance of the lock and unlock operations.

Also, the shared_mutex is a point of contention, and thus not scalable. Locking a shared_mutex necessarily modifies the state of the mutex, even for a read lock. Consequently, the cache line holding the shared_mutex state must be transferred to whichever processor is performing a lock or unlock operation.

If you have a lot of threads performing frequent, short read operations, then on a multiprocessor system this can lead to a lot of cache ping-pong, which will considerably impact the performance of the system. In this case, you may as well adopt the simpler design of just using a plain mutex, as the readers are essentially serialized anyway.

If the reads are not frequent, then there is no contention, so you don't need to worry about concurrent readers, and a plain mutex will suffice for that scenario anyway.

If the read operations are time consuming, then the consequence of this contention is less visible, since it is dwarfed by the time spent whilst holding the read lock. However, performing time consuming operations whilst holding a lock is a design smell.

In the vast majority of cases, I think that there are better alternatives to a shared_mutex. These may be a plain mutex, the atomic support of shared_ptr, the use of a carefully constructed concurrent container, or something else, depending on context.
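As one sketch of the shared_ptr alternative mentioned above, read-mostly data can be published with the C++11 atomic free functions for std::shared_ptr: readers take a snapshot without any mutex, and a writer swaps in a freshly built copy. This copy-on-write approach only suits small, infrequently updated data, and the names below are made up for the example:

    #include <memory>
    #include <map>
    #include <string>

    // The current configuration, shared between threads.
    std::shared_ptr<const std::map<std::string, std::string>> config =
        std::make_shared<const std::map<std::string, std::string>>();

    // Reader: grab a consistent snapshot; no lock is taken.
    std::string get(const std::string& key) {
        auto snapshot = std::atomic_load(&config);
        auto it = snapshot->find(key);
        return it == snapshot->end() ? std::string{} : it->second;
    }

    // Writer: copy the current map, apply the change, then atomically publish it.
    // Concurrent writers would still need a mutex (or a compare-exchange loop)
    // to avoid losing updates.
    void set(const std::string& key, const std::string& value) {
        auto snapshot = std::atomic_load(&config);
        auto updated  = std::make_shared<std::map<std::string, std::string>>(*snapshot);
        (*updated)[key] = value;
        std::atomic_store(&config,
            std::shared_ptr<const std::map<std::string, std::string>>(std::move(updated)));
    }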
