
C++ STL Producer multiple consumer where producer waits for free consumer before producing next value

My little consumer-producer problem had me stumped for some time. I didn't want an implementation where one producer pushes data round-robin to the consumers, filling up their respective queues.

I wanted to have one producer and x consumers, but the producer waits before producing new data until a consumer is free again. In my example there are 3 consumers, so the producer creates a maximum of 3 objects of data at any given time. Since I don't like polling, the consumers are supposed to notify the producer when they are done. Sounds simple, but the solution I found doesn't please me. First the code.

#include "stdafx.h"
#include <mutex>
#include <condition_variable>
#include <iostream>
#include <future>
#include <map>
#include <thread>
#include <chrono>
#include <atomic>

std::atomic_int totalconsumed;

class producer {
    using runningmap_t = std::map<int, std::pair<std::future<void>, bool>>;

    // Secure the map of futures.
    std::mutex mutex_;
    runningmap_t running_;

    // Used for finished notification
    std::mutex waitermutex_;
    std::condition_variable waiter_;

    // The magic number to limit the producer.
    std::atomic<int> count_;

    bool can_run();
    void clean();

    // Fake a source, e.g. filesystem scan.
    int fakeiter;
    int next();
    bool has_next() const;

public:
    producer() : fakeiter(50) {}
    void run();
    void notify(int value);
    void wait();
};

class consumer {
    producer& producer_;
public:
    consumer(producer& producer) : producer_(producer) {}
    void run(int value) {
        std::this_thread::sleep_for(std::chrono::milliseconds(42));
        std::cout << "Consumed " << value << " on (" << std::this_thread::get_id() << ")" << std::endl;
        totalconsumed++;
        producer_.notify(value);
    }
};


// Only if less than three threads are active, another gets to run.
bool producer::can_run() { return count_.load() < 3; }

// Verify if there's something to consume
bool producer::has_next() const { return 0 != fakeiter; }

// Produce the next value for consumption.
int producer::next() { return --fakeiter; }

// Remove the futures that have reported to be finished.
void producer::clean()
{
    for (auto it = running_.begin(); it != running_.end(); ) {
        if (it->second.second) {
            it = running_.erase(it);
        }
        else { 
            ++it;
        }
    }
}

// Runs the producer. Creates a new consumer for every produced value. Max 3 at a time.
void producer::run()
{
    while (has_next()) {
        if (can_run()) {
            auto c = next();

            count_++;
            auto future = std::async(&consumer::run, consumer(*this), c);

            std::unique_lock<std::mutex> lock(mutex_);
            running_[c] = std::make_pair(std::move(future), false);

            clean();
        }
        else {
            std::unique_lock<std::mutex> lock(waitermutex_);
            // Wait with a predicate so a notification arriving between
            // the can_run() check and the wait is not lost.
            waiter_.wait(lock, [this] { return can_run(); });
        }
    }
}

// Consumers diligently tell the producer that they are finished.
void producer::notify(int value)
{
    count_--;

    mutex_.lock();
    running_[value].second = true;
    mutex_.unlock();

    std::unique_lock<std::mutex> waiterlock(waitermutex_);
    waiter_.notify_all();
}

// Wait for all consumers to finish.
void producer::wait()
{
    for (;;) {
        {
            // Check under the lock; notify() mutates running_ concurrently.
            std::lock_guard<std::mutex> lock(mutex_);
            clean();
            if (running_.empty()) break;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

// Looks like the application entry point.
int main()
{
    producer p;

    std::thread pthread(&producer::run, &p);
    pthread.join();
    p.wait();

    std::cout << std::endl << std::endl << "Total consumed " << totalconsumed.load() << std::endl;

    return 0;
}

The part I don't like is the map of values to futures, called running_. I need to keep the future around until the consumer is actually done. I can't remove the future from the map in the notify method, or else I'll kill the thread that is currently calling notify.

Am I missing something that could simplify this construct?

#include <array>
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <optional>
#include <utility>

template<class T>
struct slotted_data {
  std::size_t I;
  T t;
};
template<class T>
using sink = std::function<void(T)>;
template<class T, std::size_t N>
struct async_slots {
  bool produce( slotted_data<T> data ) {
    if (terminate || data.I>=N) return false;
    {
      auto l = lock();
      if (slots[data.I]) return false;
      slots[data.I] = std::move(data.t);
    }
    cv.notify_one();
    return true;
  }
  // rare use of non-lambda cv.wait in the wild!
  bool consume(sink<slotted_data<T>> f) {
    auto l = lock();
    while(!terminate) {
      for (auto& slot:slots) {
        if (slot) {
          auto r = std::move(*slot);
          slot = std::nullopt;
          f({std::size_t(&slot-slots.data()), std::move(r)}); // invoke in lock
          return true;
        }
      }
      cv.wait(l);
    }
    return false;
  }
  // easier and safer version:
  std::optional<slotted_data<T>> consume() {
    std::optional<slotted_data<T>> r;
    bool worked = consume([&](auto&& data) { r = std::move(data); });
    if (!worked) return {};
    return r;
  }
  void finish() {
      {
        auto l = lock();
        terminate = true;
      }
      cv.notify_all();
  }
private:
  auto lock() { return std::unique_lock<std::mutex>(m); }
  std::mutex m;
  std::condition_variable cv;
  std::array< std::optional<T>, N > slots;
  bool terminate = false;
};

async_slots provides a fixed number of slots and an awaitable consume. If you try to produce two things in the same slot, produce returns false and ignores you.

consume invokes the sink on the data inside the mutex, in continuation-passing style. This permits atomic consumption.

We want to invert producer and consumer:

template<class T, std::size_t N>
struct slotted_consumer {
  bool consume( std::size_t I, sink<T> sink ) {
    std::optional<T> data;
    std::condition_variable cv;
    std::mutex m;
    bool worked = slots.produce(
      {
        I,
        [&](auto&& t){
          {
            std::unique_lock<std::mutex> l(m);
            data.emplace(std::move(t));
          }
          cv.notify_one();
        }
      }
    );
    if (!worked) return false;
    std::unique_lock<std::mutex> l(m);
    cv.wait(l, [&]()->bool{
      return (bool)data;
    });
    sink( std::move(*data) );
    return true;
  }
  bool produce( T t ) {
    return slots.consume(
        [&](auto&& f) {
            f.t( std::move(t) );
        }
    );
  }
  void finish() {
      slots.finish();
  }
private:
  async_slots< sink<T>, N > slots;
};

We have to take some care to execute the sink in a context where we are not holding the mutex of async_slots, which is why consume above is so strange.

Live example.

You share one slotted_consumer< int, 3 > slots. The producing thread repeatedly calls slots.produce(42);, which blocks until a new consumer lines up.

Consumer #2 calls slots.consume( 2, [&](int x){ /* code to consume x */ } ), and #1 and #0 pass their slot numbers as well.

All 3 consumers can be waiting for the next production. The above system defaults to feeding #0 first if it is waiting for more work; we could make it "fair" at the cost of keeping a bit more state.
