
Is 'double checked locking pattern' good for std::mutex in this situation?

I often encounter this kind of thread-safe structure in designs. As in version1 below, one thread may call foo1::add_data() rarely, while another thread frequently calls foo1::get_result(). For optimization, I think an atomic could be used to apply the double-checked locking pattern (DCLP), as shown in version2. Is there a better design for this situation? Or could it be improved further, for example by accessing the atomic with an explicit std::memory_order?

version1:

class data {};
class some_data {};
class some_result {};

class foo1
{
public:
    foo1() : m_bNeedUpdate(false) {}

    void add_data(data n)
    {
        std::lock_guard<std::mutex> lock(m_mut);

        // ... restore new data to m_SomeData

        m_bNeedUpdate = true;
    }

    some_result get_result() const
    {
        {
            std::lock_guard<std::mutex> lock(m_mut);
            if (m_bNeedUpdate)
            {
            // ... process m_SomeData and update m_SomeResult

                m_bNeedUpdate = false;
            }
        }
        return m_SomeResult;
    }

private:
    mutable std::mutex  m_mut;
    mutable bool        m_bNeedUpdate;
    some_data           m_SomeData;

    mutable some_result m_SomeResult;
};

version2:

class foo2
{
public:
    foo2() : m_bNeedUpdate(false) {}

    void add_data(data n)
    {
        std::lock_guard<std::mutex> lock(m_mut);

        // ... restore new data to m_SomeData

        m_bNeedUpdate.store(true);
    }

    some_result get_result() const
    {
        if (m_bNeedUpdate.load())
        {
            std::lock_guard<std::mutex> lock(m_mut);
            if (m_bNeedUpdate.load())
            {
            // ... process m_SomeData and update m_SomeResult

                m_bNeedUpdate.store(false);
            }
        }
        return m_SomeResult;
    }

private:
    mutable std::mutex          m_mut;
    mutable std::atomic<bool>   m_bNeedUpdate;
    some_data                   m_SomeData;

    mutable some_result         m_SomeResult;
};

The problem is that version2 isn't thread safe, at least according to C++11 (and Posix, earlier): you're accessing a variable which may be modified without the access being protected. Here, get_result() can read m_SomeResult without holding the lock while another call is updating it under the lock. (The double-checked locking pattern is known to be broken; see http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf.) It can be made to work in C++11 (or, non-portably, earlier) by using atomic variables, but what you've written results in undefined behavior.
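To illustrate, a minimal C++11-safe variant simply keeps the mutex held while the result is read, giving up the unlocked fast path. This is only a sketch: the class name foo_safe is hypothetical, and int stands in for the question's some_data/some_result.

```cpp
#include <cassert>
#include <mutex>

// Hypothetical simplified variant of the question's class: the mutex
// stays held across both the update and the read of m_result, which
// removes the data race at the cost of the lock-free fast path.
class foo_safe
{
public:
    void add_data(int n)
    {
        std::lock_guard<std::mutex> lock(m_mut);
        m_data += n;                 // stand-in for "restore new data"
        m_needUpdate = true;
    }

    int get_result() const
    {
        std::lock_guard<std::mutex> lock(m_mut);
        if (m_needUpdate)
        {
            m_result = m_data;       // stand-in for the "process" step
            m_needUpdate = false;
        }
        return m_result;             // copied while still under the lock
    }

private:
    mutable std::mutex m_mut;
    mutable bool       m_needUpdate = false;
    int                m_data = 0;
    mutable int        m_result = 0;
};
```

Whether this is slower than version2 in practice depends on contention; an uncontended mutex lock is cheap on most implementations.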

I think a significant improvement (in code size as well as in simplicity and performance) could be achieved by using a 'read-write lock', which allows many threads to read in parallel. Boost provides shared_mutex for this purpose (and since C++17 the standard library has std::shared_mutex), and the same kind of lock can also be implemented portably without Boost.
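A sketch of the read-write-lock idea, using C++17's std::shared_mutex (boost::shared_mutex offers the same interface for older compilers). The name foo_rw is hypothetical and int again stands in for the question's types:

```cpp
#include <cassert>
#include <mutex>
#include <shared_mutex>

// Readers share the lock on the fast path; only a pending update
// forces an exclusive lock. The flag is re-checked after upgrading,
// since another thread may have refreshed the result in between.
class foo_rw
{
public:
    void add_data(int n)
    {
        std::unique_lock<std::shared_mutex> lock(m_mut);
        m_data += n;
        m_needUpdate = true;
    }

    int get_result() const
    {
        {
            // Fast path: many readers can hold the shared lock at once.
            std::shared_lock<std::shared_mutex> rlock(m_mut);
            if (!m_needUpdate)
                return m_result;
        }
        // Slow path: take the exclusive lock and re-check the flag.
        std::unique_lock<std::shared_mutex> wlock(m_mut);
        if (m_needUpdate)
        {
            m_result = m_data;   // stand-in for "process m_SomeData"
            m_needUpdate = false;
        }
        return m_result;
    }

private:
    mutable std::shared_mutex m_mut;
    mutable bool              m_needUpdate = false;
    int                       m_data = 0;
    mutable int               m_result = 0;
};
```

Note that no std::atomic is needed here: m_needUpdate is only read under the shared lock and only written under the exclusive lock, so the lock itself provides the synchronization.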

You said that you're calling get_average quite often; have you considered calculating the average based only on the numbers you haven't 'seen' yet? Overall it would be O(n) instead of O(n^2).

It would be something like:

// last_average and last_size hold the result and element count from
// the previous call; only the newly appended tail is accumulated.
average = (last_average * last_size + static_cast<double>(
           std::accumulate(m_vecData.begin() + last_size, m_vecData.end(), 0))) /
          m_vecData.size();

It should give you satisfactory results, depending on how big your vector is.
