
C++11 thread-safe queue

A project I'm working on uses multiple threads to do work on a collection of files. Each thread can add files to the list of files to be processed, so I put together (what I thought was) a thread-safe queue. Relevant portions follow:

// qMutex is a std::mutex intended to guard the queue
// populatedNotifier is a std::condition_variable intended to
//                   notify waiting threads of a new item in the queue

void FileQueue::enqueue(std::string&& filename)
{
    std::lock_guard<std::mutex> lock(qMutex);
    q.push(std::move(filename));

    // Notify anyone waiting for additional files that more have arrived
    populatedNotifier.notify_one();
}

std::string FileQueue::dequeue(const std::chrono::milliseconds& timeout)
{
    std::unique_lock<std::mutex> lock(qMutex);
    if (q.empty()) {
        if (populatedNotifier.wait_for(lock, timeout) == std::cv_status::no_timeout) {
            std::string ret = q.front();
            q.pop();
            return ret;
        }
        else {
            return std::string();
        }
    }
    else {
        std::string ret = q.front();
        q.pop();
        return ret;
    }
}

However, I am occasionally segfaulting inside the if (...wait_for(lock, timeout) == std::cv_status::no_timeout) { } block, and inspection in gdb indicates that the segfaults are occurring because the queue is empty. How is this possible? It was my understanding that wait_for only returns cv_status::no_timeout when it has been notified, and this should only happen after FileQueue::enqueue has just pushed a new item to the queue.

It is best to make the condition (monitored by your condition variable) the inverse of a while-loop condition: while(!some_condition). Inside this loop, you go to sleep when the condition check fails, which is what triggers the loop body.

This way, if your thread is awoken (possibly spuriously), your loop will still check the condition before proceeding. Think of the condition as the state of interest, and think of the condition variable as more of a signal from the system that this state might be ready. The loop will do the heavy lifting of actually confirming that it's true, and going to sleep if it's not.

I just wrote a template for an async queue, hope this helps. Here, q.empty() is the inverse condition of what we want: for the queue to have something in it. So it serves as the check for the while loop.

#ifndef SAFE_QUEUE
#define SAFE_QUEUE

#include <queue>
#include <mutex>
#include <condition_variable>

// A thread-safe queue.
template <class T>
class SafeQueue
{
public:
  SafeQueue(void)
    : q()
    , m()
    , c()
  {}

  ~SafeQueue(void)
  {}

  // Add an element to the queue.
  void enqueue(T t)
  {
    std::lock_guard<std::mutex> lock(m);
    q.push(t);
    c.notify_one();
  }

  // Get the "front"-element.
  // If the queue is empty, wait until an element is available.
  T dequeue(void)
  {
    std::unique_lock<std::mutex> lock(m);
    while(q.empty())
    {
      // Release the lock for the duration of the wait and reacquire it afterwards.
      c.wait(lock);
    }
    T val = q.front();
    q.pop();
    return val;
  }

private:
  std::queue<T> q;
  mutable std::mutex m;
  std::condition_variable c;
};
#endif
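
For reference, a minimal usage sketch of the template above (the producer/consumer setup here is illustrative and not part of the original answer):

#include <thread>
#include <iostream>
// ... plus the SafeQueue template above

int main()
{
  SafeQueue<int> queue;

  // Producer: pushes ten items; each push wakes at most one waiting consumer.
  std::thread producer([&queue] {
    for (int i = 0; i < 10; ++i)
      queue.enqueue(i);
  });

  // Consumer: dequeue() blocks until an item is available.
  std::thread consumer([&queue] {
    for (int i = 0; i < 10; ++i)
      std::cout << queue.dequeue() << '\n';
  });

  producer.join();
  consumer.join();
}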

According to the standard, condition_variables are allowed to wake up spuriously, even if the event hasn't occurred. In case of a spurious wakeup it will return cv_status::no_timeout (since it woke up instead of timing out), even though it hasn't been notified. The correct solution for this is of course to check whether the wakeup was actually legitimate before proceeding.

The details are specified in the standard §30.5.1 [thread.condition.condvar]:

— The function will unblock when signaled by a call to notify_one(), a call to notify_all(), expiration of the absolute timeout (30.2.4) specified by abs_time, or spuriously.

...

Returns: cv_status::timeout if the absolute timeout (30.2.4) specified by abs_time expired, otherwise cv_status::no_timeout.

This is probably how you should do it:

void push(std::string&& filename)
{
    {
        std::lock_guard<std::mutex> lock(qMutex);

        q.push(std::move(filename));
    }

    populatedNotifier.notify_one();
}

bool try_pop(std::string& filename, std::chrono::milliseconds timeout)
{
    std::unique_lock<std::mutex> lock(qMutex);

    if(!populatedNotifier.wait_for(lock, timeout, [this] { return !q.empty(); }))
        return false;

    filename = std::move(q.front());
    q.pop();

    return true;    
}
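
A worker loop built on top of this interface could look roughly like the following; running and process() are hypothetical members used only for illustration:

void FileQueue::workerLoop()
{
    std::string filename;
    while (running) {
        // Wait up to 100 ms for a filename; false just means nothing arrived yet.
        if (try_pop(filename, std::chrono::milliseconds(100)))
            process(filename);
    }
}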

Adding to the accepted answer, I would say that implementing a correct multi-producer/multi-consumer queue is difficult (though it has become easier since C++11).

I would suggest trying the (very good) Boost lock-free library; its queue structure will do what you want, with wait-free/lock-free guarantees and without the need for a C++11 compiler.

I am adding this answer now because the lock-free library is quite new to Boost (since 1.53, I believe).
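
A minimal sketch of what this could look like with boost::lockfree::queue (assuming Boost 1.53 or newer). Note that the element type must be trivially copyable and trivially destructible, so the queue below stores raw std::string pointers and leaves ownership to the caller:

#include <boost/lockfree/queue.hpp>
#include <string>

// Fixed initial capacity of 128 nodes; push() may still allocate more.
boost::lockfree::queue<std::string*> files(128);

void produce(std::string name)
{
    // push() returns false if no node could be acquired.
    files.push(new std::string(std::move(name)));
}

bool consume()
{
    std::string* item = nullptr;
    if (!files.pop(item))      // non-blocking: false when the queue is empty
        return false;
    // ... process *item ...
    delete item;
    return true;
}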

I would rewrite your dequeue function as:

std::string FileQueue::dequeue(const std::chrono::milliseconds& timeout)
{
    std::unique_lock<std::mutex> lock(qMutex);
    while(q.empty()) {
        if (populatedNotifier.wait_for(lock, timeout) == std::cv_status::timeout ) 
           return std::string();
    }
    std::string ret = q.front();
    q.pop();
    return ret;
}

It is shorter and does not have duplicate code like yours did. The only issue is that it may wait longer than the timeout. To prevent that, you would need to remember the start time before the loop, check for timeout, and adjust the wait time accordingly, or specify an absolute time on the wait condition, as sketched below.
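
One way to do the absolute-time variant is to compute a single deadline up front and use wait_until, so spurious wakeups cannot stretch the total wait beyond the timeout. A sketch (not from the original answer):

std::string FileQueue::dequeue(const std::chrono::milliseconds& timeout)
{
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    std::unique_lock<std::mutex> lock(qMutex);
    while (q.empty()) {
        if (populatedNotifier.wait_until(lock, deadline) == std::cv_status::timeout)
            return std::string();
    }
    std::string ret = q.front();
    q.pop();
    return ret;
}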

There is also a GLib solution for this case; I have not tried it yet, but I believe it is a good solution: https://developer.gnome.org/glib/2.36/glib-Asynchronous-Queues.html#g-async-queue-new

BlockingCollection is a C++11 thread-safe collection class that provides support for queue, stack and priority containers. It handles the "empty" queue scenario you described, as well as a "full" queue.

You may like lfqueue, https://github.com/Taymindis/lfqueue . It's a lock-free concurrent queue. I'm currently using it to consume a queue fed by multiple incoming calls and it works like a charm.

This is my implementation of a thread-queue in C++20:

#pragma once
#include <deque>
#include <mutex>
#include <condition_variable>
#include <utility>
#include <concepts>
#include <list>

template<typename QueueType>
concept thread_queue_concept =
    std::same_as<QueueType, std::deque<typename QueueType::value_type, typename QueueType::allocator_type>>
    || std::same_as<QueueType, std::list<typename QueueType::value_type, typename QueueType::allocator_type>>;

template<typename QueueType>
    requires thread_queue_concept<QueueType>
struct thread_queue
{
    using value_type = typename QueueType::value_type;
    thread_queue();
    explicit thread_queue( typename QueueType::allocator_type const &alloc );
    thread_queue( thread_queue &&other );
    thread_queue &operator =( thread_queue const &other );
    thread_queue &operator =( thread_queue &&other );
    bool empty() const;
    std::size_t size() const;
    void shrink_to_fit();
    void clear();
    template<typename ... Args>
        requires std::is_constructible_v<typename QueueType::value_type, Args ...>
    void enque( Args &&... args );
    template<typename Producer>
        requires requires( Producer producer ) { { producer() } -> std::same_as<std::pair<bool, typename QueueType::value_type>>; }
    void enqueue_multiple( Producer producer );
    template<typename Consumer>
        requires requires( Consumer consumer, typename QueueType::value_type value ) { { consumer( std::move( value ) ) } -> std::same_as<bool>; }
    void dequeue_multiple( Consumer consumer );
    typename QueueType::value_type dequeue();
    void swap( thread_queue &other );
private:
    mutable std::mutex m_mtx;
    mutable std::condition_variable m_cv;
    QueueType m_queue;
};

template<typename QueueType>
    requires thread_queue_concept<QueueType>
thread_queue<QueueType>::thread_queue()
{
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
thread_queue<QueueType>::thread_queue( typename QueueType::allocator_type const &alloc ) :
    m_queue( alloc )
{
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
thread_queue<QueueType>::thread_queue( thread_queue &&other )
{
    using namespace std;
    lock_guard lock( other.m_mtx );
    m_queue = move( other.m_queue );
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
thread_queue<QueueType> &thread_queue<QueueType>::thread_queue::operator =( thread_queue const &other )
{
    std::lock_guard
        ourLock( m_mtx ),
        otherLock( other.m_mtx );
    m_queue = other.m_queue;
    return *this;
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
thread_queue<QueueType> &thread_queue<QueueType>::thread_queue::operator =( thread_queue &&other )
{
    using namespace std;
    lock_guard
        ourLock( m_mtx ),
        otherLock( other.m_mtx );
    m_queue = move( other.m_queue );
    return *this;
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
bool thread_queue<QueueType>::thread_queue::empty() const
{
    std::lock_guard lock( m_mtx );
    return m_queue.empty();
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
std::size_t thread_queue<QueueType>::thread_queue::size() const
{
    std::lock_guard lock( m_mtx );
    return m_queue.size();
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
void thread_queue<QueueType>::thread_queue::shrink_to_fit()
{
    std::lock_guard lock( m_mtx );
    return m_queue.shrink_to_fit();
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
void thread_queue<QueueType>::thread_queue::clear()
{
    std::lock_guard lock( m_mtx );
    m_queue.clear();
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
template<typename ... Args>
    requires std::is_constructible_v<typename QueueType::value_type, Args ...>
void thread_queue<QueueType>::thread_queue::enque( Args &&... args )
{
    using namespace std;
    unique_lock lock( m_mtx );
    m_queue.emplace_front( forward<Args>( args ) ... );
    m_cv.notify_one();
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
typename QueueType::value_type thread_queue<QueueType>::thread_queue::dequeue()
{
    using namespace std;
    unique_lock lock( m_mtx );
    while( m_queue.empty() )
        m_cv.wait( lock );
    value_type value = move( m_queue.back() );
    m_queue.pop_back();
    return value;
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
template<typename Producer>
    requires requires( Producer producer ) { { producer() } -> std::same_as<std::pair<bool, typename QueueType::value_type>>; }
void thread_queue<QueueType>::enqueue_multiple( Producer producer )
{
    using namespace std;
    lock_guard lock( m_mtx );
    for( std::pair<bool, value_type> ret; (ret = move( producer() )).first; )
        m_queue.emplace_front( move( ret.second ) ),
        m_cv.notify_one();
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
template<typename Consumer>
    requires requires( Consumer consumer, typename QueueType::value_type value ) { { consumer( std::move( value ) ) } -> std::same_as<bool>; }
void thread_queue<QueueType>::dequeue_multiple( Consumer consumer )
{
    using namespace std;
    unique_lock lock( m_mtx );
    for( ; ; )
    {
        while( m_queue.empty() )
            m_cv.wait( lock );
        try
        {
            bool cont = consumer( move( m_queue.back() ) );
            m_queue.pop_back();
            if( !cont )
                return;
        }
        catch( ... )
        {
            m_queue.pop_back();
            throw;
        }
    }
}

template<typename QueueType>
    requires thread_queue_concept<QueueType>
void thread_queue<QueueType>::thread_queue::swap( thread_queue &other )
{
    std::lock_guard
        ourLock( m_mtx ),
        otherLock( other.m_mtx );
    m_queue.swap( other.m_queue );
}

The only template parameter is QueueType, which can be a std::deque or std::list type, restricted by thread_queue_concept; the class uses it as its internal queue type. Choose the QueueType that is most efficient for your application. I could have restricted the class with a more fine-grained thread_queue_concept that checks only the parts of QueueType that are actually used, so that the class could also accept other types compatible with std::list<> or std::deque<>, but I was too lazy to implement that for the unlikely case that someone writes such a type on their own. One advantage of this code is enqueue_multiple and dequeue_multiple. These functions are given a function object, usually a lambda, which can enqueue or dequeue multiple items with only one locking step. For enqueue this always holds true; for dequeue it depends on whether the queue has elements to fetch.
enqueue_multiple usually makes sense if you have one producer and multiple consumers. It results in longer periods holding the lock, so it is worthwhile only if the items can be produced or moved quickly.
dequeue_multiple usually makes sense if you have multiple producers and one consumer. Here we also have longer locking periods, but since objects usually only need a fast move here, this normally doesn't hurt.
If the consumer function object of dequeue_multiple throws an exception while consuming, the exception is caught, the element that was handed to the consumer (as an rvalue reference into the underlying queue object) is removed, and the exception is rethrown.
If you want to use this class with C++11, you have to remove the concepts or disable them with #if defined(__cpp_concepts).
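
A short usage sketch of the class above; the element type, the counter, and the producer lambda are only for illustration:

#include <deque>
#include <string>
#include <iostream>

int main()
{
    thread_queue<std::deque<std::string>> queue;

    queue.enque("first");   // single-element enqueue

    // Producer lambda: deliver three items, then stop by returning first == false.
    queue.enqueue_multiple([n = 0]() mutable {
        ++n;
        return std::pair<bool, std::string>(n <= 3, "item " + std::to_string(n));
    });

    // dequeue() blocks until an element is available; here four items are known to exist.
    for (int i = 0; i != 4; ++i)
        std::cout << queue.dequeue() << '\n';
}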
