Boost Shared_lock / Unique_lock, giving the writer priority?

I have a multithreaded application that I am writing using Boost.Thread locking.

In this case, there is one writer and multiple readers. As I have it now, the writer seems to wait for all the readers to complete before it can write again.

What I want is to give the writer priority, so that if it wants to write again, it does so no matter what, and the readers work around it. For example:

Now:

Writer;
reader1;
reader2;
reader3;
reader4;

What I would like is:

Writer;
reader1;
reader2;
Writer(if ready);
reader3;
reader4;

Is this possible? My code is replicated below:

#include <iostream>
#include <string>
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>
#include <boost/thread.hpp>
#include <opencv2/opencv.hpp>

typedef boost::shared_mutex Lock;
typedef boost::unique_lock< Lock > WriteLock;
typedef boost::shared_lock< Lock > ReadLock;
Lock frameLock;

cv::Mat currentFrame;
std::atomic<bool> frameOk{ false }; // atomic: readers poll this while the writer sets it

void writer()
{
    while (true)
    {
        cv::Mat frame;
        cv::Mat src = cv::imread("C:\\grace_17.0001.jpg");
        cv::resize(src, frame, cv::Size(src.cols / 4, src.rows / 4));

        int64 t0 = cv::getTickCount();


        WriteLock w_lock(frameLock);
        frame.copyTo(currentFrame);
        w_lock.unlock(); // unlock via the guard; unlocking frameLock directly would double-unlock in the guard's destructor

        frameOk = true; // tells readers we have at least one frame

        int64 t1 = cv::getTickCount();
        double secs = (t1 - t0) / cv::getTickFrequency();
        std::cout << "wait time WRITE: " << secs * 1000 << std::endl;
    }
}

void readerTwo(int wait)
{
    while (true)
    {
        if (frameOk) // if first frame is written
        {
            cv::Mat readframe; // plain local: a static here would be shared (and raced on) by all reader threads

            int64 t0 = cv::getTickCount();


            //gets frame
            ReadLock r_lockz(frameLock);
            currentFrame.copyTo(readframe);
            r_lockz.unlock();

            std::cout << "READ: " << std::to_string(wait)<< std::endl;

            cv::imshow(std::to_string(wait), readframe);
            cv::waitKey(1);
            std::this_thread::sleep_for(std::chrono::milliseconds(20));
        }
    }
}



int main()
{
    const int readerthreadcount = 50;

    std::vector<boost::thread*> readerthread;

    boost::thread* wThread = new boost::thread(writer);

    for (int i = 0; i < readerthreadcount; i++) {
        readerthread.push_back(new boost::thread(readerTwo, i));
    }

    wThread->join(); delete wThread;

    for (int i = 0; i < readerthreadcount; i++) {
        readerthread[i]->join(); delete readerthread[i];
    }

    return 0;
}

Thank you.

Writer starvation is a typical problem with reader/writer locks.

Reader/writer locks unfortunately have to be tuned per algorithm and per architecture. (At least until something smarter is developed.)

Is this possible?

Yes, it's possible, with condition variables. Have a count waitingWriters. When a writer comes in, it acquires the mutex, increments waitingWriters, then waits on the condition readerCount == 0. When a reader thread ends, it acquires the mutex, decrements readerCount, and signals the writer condition if readerCount == 0. When a reader thread comes in, it acquires the mutex; if waitingWriters == 0, it increments readerCount and releases the mutex, otherwise it waits on the condition waitingWriters == 0. When a writer thread finishes, it acquires the mutex; if waitingWriters == 0, it signals the reader condition, otherwise it signals the next writer.
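Below is a minimal sketch of that scheme using std::mutex and std::condition_variable. The class name is illustrative, and the m_writing flag is my addition (the description above implies a single writer; the flag makes multiple writers exclude each other):

#include <mutex>
#include <condition_variable>

class write_prio_lock
{
public:
    void lock_shared()
    {
        std::unique_lock<std::mutex> lk( m_mtx );
        // new readers queue up as soon as any writer is waiting or writing
        m_readerCv.wait( lk, [this]{ return m_waitingWriters == 0 && !m_writing; } );
        ++m_readerCount;
    }
    void unlock_shared()
    {
        std::unique_lock<std::mutex> lk( m_mtx );
        if( --m_readerCount == 0 )
            m_writerCv.notify_one(); // last reader lets a waiting writer in
    }
    void lock()
    {
        std::unique_lock<std::mutex> lk( m_mtx );
        ++m_waitingWriters;          // registering blocks further readers immediately
        m_writerCv.wait( lk, [this]{ return m_readerCount == 0 && !m_writing; } );
        --m_waitingWriters;
        m_writing = true;
    }
    void unlock()
    {
        std::unique_lock<std::mutex> lk( m_mtx );
        m_writing = false;
        if( m_waitingWriters != 0 )
            m_writerCv.notify_one(); // hand off to the next writer first
        else
            m_readerCv.notify_all(); // otherwise wake all waiting readers
    }
private:
    std::mutex              m_mtx;
    std::condition_variable m_readerCv, m_writerCv;
    int                     m_readerCount = 0, m_waitingWriters = 0;
    bool                    m_writing = false;
};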

Note that the algorithm I just gave you:

  1. Now prioritizes writes over reads. It is the other extreme, in that reads can be starved instead of writes.
  2. Uses only one mutex (not a reader mutex and a writer mutex).
  3. Isn't suitable for quick reads, i.e. reads whose read operation is shorter than one scheduling timeslice. For those you would want to use spinlocks (check out the Big Reader).

The tuning depends on many factors, the most important of which are the ratio of reader to writer threads and how long the critical sections are.

Here is my highly efficient write-prioritized shared mutex. In the optimal case, it needs only one atomic exchange for locking and unlocking, in contrast to other implementations which need two atomic exchanges.

#pragma once
#include <cstdint>
#include <cassert>
#include <thread>
#include <new>
#include <atomic>
#include "semaphore.h"

static_assert(std::atomic<std::uint64_t>::is_always_lock_free, "std::uint64_t must be lock-free");

class alignas(std::hardware_constructive_interference_size) wprio_shared_mutex
{
public:
         wprio_shared_mutex();
         wprio_shared_mutex( wprio_shared_mutex const & ) = delete;
         ~wprio_shared_mutex();
    void lock_shared();
    void unlock_shared();
    void shared_to_write();
    void lock_writer();
    void write_to_shared();
    void unlock_writer();
    bool we_are_writer();

private:
    std::atomic<std::uint64_t> m_atomic; // bits  0 - 20: readers
                                         // bits 21 - 41: waiting readers
                                         // bits 42 - 62: waiting writers
                                         // bit       63: writer-flag
    std::thread::id            m_writerId;
    std::uint32_t              m_writerRecursionCount;
    semaphore                  m_releaseReadersSem,
                               m_releaseWriterSem;

    static unsigned const      WAITING_READERS_BASE   = 21,
                               WAITING_WRITERS_BASE   = 42,
                               WRITER_FLAG_BASE       = 63;
    static std::uint64_t const MASK21                 = 0x1FFFFFu;
    static std::uint64_t const READERS_MASK           = MASK21,
                               WAITING_READERS_MASK   = MASK21           << WAITING_READERS_BASE,
                               WAITING_WRITERS_MASK   = MASK21           << WAITING_WRITERS_BASE,
                               WRITER_FLAG_MASK       = (std::uint64_t)1 << WRITER_FLAG_BASE;
    static std::uint64_t const READER_VALUE           = (std::uint64_t)1,
                               WAITING_READERS_VALUE  = (std::uint64_t)1 << WAITING_READERS_BASE,
                               WAITING_WRITERS_VALUE  = (std::uint64_t)1 << WAITING_WRITERS_BASE;

    static bool check( std::uint64_t flags );
};

inline
bool wprio_shared_mutex::check( std::uint64_t flags )
{
    unsigned readers        = (unsigned)(flags                           & MASK21),
             waitingReaders = (unsigned)((flags >> WAITING_READERS_BASE) & MASK21),
             waitingWriters = (unsigned)((flags >> WAITING_WRITERS_BASE) & MASK21),
             writerFlag     = (unsigned)((flags >> WRITER_FLAG_BASE)     & 1);
    if( readers && (waitingReaders || writerFlag) )
        return false;
    if( waitingReaders && (readers || !writerFlag) )
        return false;
    if( waitingWriters && !(writerFlag || readers) )
        return false;
    if( writerFlag && readers )
        return false;
    return true;
}

wprio_shared_mutex::wprio_shared_mutex()
{
    m_atomic.store( 0, std::memory_order_relaxed );
}

wprio_shared_mutex::~wprio_shared_mutex()
{
    assert(m_atomic == 0);
}

void wprio_shared_mutex::lock_shared()
{
    using namespace std;
    for( uint64_t cmp = m_atomic.load( std::memory_order_relaxed ); ; )
    {
        assert(check( cmp ));
        if( (cmp & WRITER_FLAG_MASK) == 0 )
        [[likely]]
        {
            if( m_atomic.compare_exchange_weak( cmp, cmp + READER_VALUE, memory_order_acquire, memory_order_relaxed ) )
                [[likely]]
                return;
        }
        else
            if( m_atomic.compare_exchange_weak( cmp, cmp + WAITING_READERS_VALUE, memory_order_relaxed, memory_order_relaxed ) )
            [[likely]]
            {
                m_releaseReadersSem.forced_wait();
                return;
            }
    }
}

void wprio_shared_mutex::unlock_shared()
{
    using namespace std;
    for( uint64_t cmp = m_atomic.load( std::memory_order_relaxed ); ; )
    {
        assert(check( cmp ));
        assert((cmp & READERS_MASK) >= READER_VALUE);
        if( (cmp & READERS_MASK) != READER_VALUE || (cmp & WAITING_WRITERS_MASK) == 0 )
        [[likely]]
        {
            if( m_atomic.compare_exchange_weak( cmp, cmp - READER_VALUE, memory_order_relaxed, memory_order_relaxed ) )
                [[likely]]
                return;
        }
        else
        {
            assert(!(cmp & WRITER_FLAG_MASK));
            if( m_atomic.compare_exchange_weak( cmp, (cmp - READER_VALUE - WAITING_WRITERS_VALUE) | WRITER_FLAG_MASK, memory_order_relaxed, memory_order_relaxed ) )
            [[likely]]
            {
                m_releaseWriterSem.forced_release( 1 );
                return;
            }
        }
    }
}

void wprio_shared_mutex::shared_to_write()
{
    using namespace std;
    for( uint64_t cmp = m_atomic.load( std::memory_order_relaxed ); ; )
    {
        assert(check( cmp ));
        assert((cmp & READERS_MASK) >= READER_VALUE);
        if( (cmp & READERS_MASK) == READER_VALUE )
        [[likely]]
        {
            assert(!(cmp & WRITER_FLAG_MASK));
            if( m_atomic.compare_exchange_weak( cmp, (cmp - READER_VALUE) | WRITER_FLAG_MASK, memory_order_acquire, memory_order_relaxed ) )
            [[likely]]
            {
                m_writerId             = this_thread::get_id();
                m_writerRecursionCount = 0;
                return;
            }
        }
        else
        {
            assert((cmp & READERS_MASK) > READER_VALUE);
            if( m_atomic.compare_exchange_weak( cmp, cmp - READER_VALUE + WAITING_WRITERS_VALUE, memory_order_relaxed, memory_order_relaxed ) )
            [[likely]]
            {
                m_releaseWriterSem.forced_wait();
                m_writerId             = this_thread::get_id();
                m_writerRecursionCount = 0;
                return;
            }
        }
    }
}

void wprio_shared_mutex::lock_writer()
{
    using namespace std;
    uint64_t cmp = m_atomic.load( std::memory_order_acquire );
    if( (cmp & WRITER_FLAG_MASK) && m_writerId == this_thread::get_id() )
    {
        ++m_writerRecursionCount;
        return;
    }
    for( ; ; )
    {
        assert(check( cmp ));
        if( (cmp & (WRITER_FLAG_MASK | READERS_MASK)) == 0 )
        [[likely]]
        {
            if( m_atomic.compare_exchange_weak( cmp, cmp | WRITER_FLAG_MASK, memory_order_acquire, memory_order_relaxed ) )
            [[likely]]
            {
                m_writerId             = this_thread::get_id();
                m_writerRecursionCount = 0;
                return;
            }
        }
        else
            if( m_atomic.compare_exchange_weak( cmp, cmp + WAITING_WRITERS_VALUE, memory_order_relaxed, memory_order_relaxed ) )
            [[likely]]
            {
                m_releaseWriterSem.forced_wait();
                m_writerId             = this_thread::get_id();
                m_writerRecursionCount = 0;
                return;
            }
    }
}

void wprio_shared_mutex::unlock_writer()
{
    using namespace std;
    uint64_t cmp = m_atomic.load( std::memory_order_relaxed );
    if( (cmp & WRITER_FLAG_MASK) && m_writerRecursionCount && m_writerId == this_thread::get_id() )
    {
        --m_writerRecursionCount;
        return;
    }
    m_writerId = thread::id();
    for( ; ; )
    {
        assert(cmp & WRITER_FLAG_MASK && !(cmp & READERS_MASK));
        assert(check( cmp ));
        if( (cmp & WAITING_WRITERS_MASK) != 0 )
            [[unlikely]]
            if( m_atomic.compare_exchange_weak( cmp, cmp - WAITING_WRITERS_VALUE, memory_order_release, memory_order_relaxed ) )
            [[likely]]
            {
                m_releaseWriterSem.forced_release( 1 );
                return;
            }
            else
                continue;
        if( (cmp & WAITING_READERS_MASK) != 0 )
        [[unlikely]]
        {
            uint64_t wakeups = (cmp & WAITING_READERS_MASK) >> WAITING_READERS_BASE;
            if( m_atomic.compare_exchange_weak( cmp, (cmp & ~WRITER_FLAG_MASK) - (cmp & WAITING_READERS_MASK) + wakeups, memory_order_release, memory_order_relaxed ) )
            [[likely]]
            {
                m_releaseReadersSem.forced_release( (unsigned)wakeups );
                return;
            }
            else
                continue;
        }
        if( m_atomic.compare_exchange_weak( cmp, 0, memory_order_release, memory_order_relaxed ) )
            [[likely]]
            return;
    }
}

bool wprio_shared_mutex::we_are_writer()
{
    return (m_atomic.load( std::memory_order_relaxed ) & WRITER_FLAG_MASK) && m_writerId == std::this_thread::get_id();
}

The algorithm allows readers to keep coming in, but as soon as a writer registers for writing, further readers are enqueued and the current readers are waited on to finish; and this is all done through a single 64-bit atomic value!

The code allows reader recursion as well as writer recursion. But when you hold the lock as a reader multiple times, you shouldn't call shared_to_write(); you'll get a deadlock then. The ability to recurse comes naturally with shared reading and has no extra overhead, but for writing there is an additional recursion counter as well as a thread::id.
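For illustration, a minimal usage sketch (the frame variables and helper functions here are hypothetical, not part of the answer):

wprio_shared_mutex frameMutex;
cv::Mat sharedFrame;

void writeFrame( cv::Mat const &frame )
{
    frameMutex.lock_writer();        // a registered writer blocks further readers
    frame.copyTo( sharedFrame );
    frameMutex.unlock_writer();
}

void readFrame( cv::Mat &localCopy )
{
    frameMutex.lock_shared();        // any number of readers may enter together
    sharedFrame.copyTo( localCopy );
    frameMutex.unlock_shared();
}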

I'm not going to include my semaphore class here, as it should be self-explanatory. It has forced_wait and forced_release; these are two functions which repeatedly retry the wait or release if it fails.
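The original class isn't shown, but a minimal sketch along those lines, assuming a POSIX platform with sem_t (a guess, not the answer's actual implementation), might look like this:

#include <semaphore.h>

class semaphore
{
public:
    explicit semaphore( unsigned initial = 0 ) { sem_init( &m_sem, 0, initial ); }
    semaphore( semaphore const & ) = delete;
    ~semaphore() { sem_destroy( &m_sem ); }
    // repeat the wait until it succeeds, e.g. when interrupted by a signal (EINTR)
    void forced_wait() { while( sem_wait( &m_sem ) != 0 ); }
    // repeat each post until it succeeds, n times in total
    void forced_release( unsigned n ) { while( n ) if( sem_post( &m_sem ) == 0 ) --n; }
private:
    sem_t m_sem;
};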

The [[likely]] and [[unlikely]] tags are C++20 optimization hints; you can remove them with earlier compilers. The we_are_writer() method checks whether the current thread has write ownership; this could be used, for example, for debugging purposes with assert().

The shared mutex is aligned to cachelines through the alignas() directive, but the whole object itself may be larger than a cacheline because of the two semaphores at the end of the object. The data for the short locking path sits at the head of the object, which fits into a cacheline. It shouldn't hurt if the semaphores at the end don't fit into the same cacheline, since sleepy locking is slow anyway.

The object is neither copyable nor movable, because the semaphore might not be either. That may be because, for example, POSIX semaphores rely on a non-copyable sem_t datatype, which might be directly embedded in a C++ semaphore datatype and thereby make it non-copyable and non-movable.
