
Multi-threaded C++ Message Passing

I am tasked to modify a synchronous C program so that it can run in parallel. The goal is to have it be as portable as possible, as it is an open-source program that many people use. Because of this, I thought it would be best to wrap the program in a C++ layer so that I could take advantage of the portable Boost libraries. I have already done this and everything seems to work as expected.

The problem I am having is deciding on the best approach for passing messages between the threads. Luckily, the architecture of the program is that of multiple producers and a single consumer. Even better, the order of the messages is not important. I have read that single-producer/single-consumer (SPSC) queues would benefit from this architecture. Do those experienced with multi-threaded programming have any advice? I'm quite new to this stuff. Also, any code examples using Boost to implement SPSC would be greatly appreciated.

Below is the technique I used for my Cooperative Multi-tasking / Multi-threading library (MACE) http://bytemaster.github.com/mace/ . It has the benefit of being lock-free except when the queue is empty.

struct task {
   boost::function<void()> func;
   task* next;
};


boost::mutex                     task_ready_mutex;
boost::condition_variable        task_ready;
boost::atomic<task*>             task_in_queue;
boost::atomic<bool>              done(false);  // set elsewhere to stop the consumer

// this can be called from any thread
void thread::post_task( task* t ) {
     // atomically post the task to the queue.
     task* stale_head = task_in_queue.load(boost::memory_order_relaxed);
     do { t->next = stale_head;
     } while( !task_in_queue.compare_exchange_weak( stale_head, t, boost::memory_order_release ) );

   // Because only one thread can post the 'first task', only that thread will attempt
   // to acquire the lock and therefore there should be no contention on this lock except
   // when *this thread is about to block on a wait condition.  
    if( !stale_head ) { 
        boost::unique_lock<boost::mutex> lock(task_ready_mutex);
        task_ready.notify_one();
    }
}

// this is the consumer thread.
void process_tasks() {
  while( !done ) {
    // this will atomically pop everything that has been posted so far.
    task* pending = task_in_queue.exchange(0,boost::memory_order_consume);
    // pending is a linked list in 'reverse post order', so process them
    // from tail to head if you want to maintain order.

    if( !pending ) { // lock scope
       boost::unique_lock<boost::mutex> lock(task_ready_mutex);
       // check one last time while holding the lock before blocking.
       if( !task_in_queue.load() ) task_ready.wait( lock );
    }
  }
}

If there is only a single consumer but multiple producers, then I would use an array, or some array-like data structure with O(1) access time, where each array slot represents a single-producer/single-consumer queue. The great advantage of a single-producer/single-consumer queue is that you can make it lock-free without any explicit synchronization mechanisms, making it a very fast data structure in a multi-threaded environment. See my answer here for a bare-bones implementation of a single-producer/single-consumer queue.

There are many examples of producer-consumer queues on the net that are safe for multiple producers/consumers. @bytemaster posted one that uses a link inside each message to eliminate storage in the queue class itself - that's a fine approach, I use it myself on embedded jobs.

Where the queue class must provide storage, I usually go with a 'pool queue' of size N, loaded up with N message instances at startup. Threads that need to communicate have to pop a message* from the pool, load it up, and queue it on. When it is eventually 'used up', the message* gets pushed back onto the pool. This caps the number of messages, and so all queues need only be of length N - no resizing, no new(), no delete(), and easy leak detection.

