
Share RabbitMQ channel between multiple python processes

I want to share a BlockingChannel across multiple Python processes, in order to send basic_ack from another Python process.

How can I share the BlockingChannel across multiple Python processes?

Following is the code:

self.__connection__ = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
self.__channel__ = self.__connection__.channel()

I have tried to dump it using pickle, but it does not allow dumping the channel and raises the error "can't pickle select.epoll objects" with the following code:

filepath = "temp/" + "merger_channel.sav"
pickle.dump(self.__channel__, open(filepath, 'wb'))
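The error is not specific to pika: the BlockingConnection's I/O loop holds an epoll object, and epoll objects refuse to be pickled. A minimal reproduction (Linux-only, since select.epoll is Linux-specific):

```python
import pickle
import select

# epoll objects wrap kernel state (a file descriptor) that cannot be
# serialized, so pickle rejects them outright.
try:
    pickle.dumps(select.epoll())
except TypeError as exc:
    print(exc)  # e.g. "cannot pickle 'select.epoll' object"
```

The same applies to the sockets inside the connection, so no serialization trick will move a live channel into another process.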

GOAL:

The goal is to send basic_ack on the channel from other Python processes.

It is an antipattern to share a channel between multiple threads, and it's quite unlikely you will manage to share it between processes.

The rule of thumb is 1 connection per process and 1 channel per thread.

You can read more on this matter at the following links:

  1. 13 common RabbitMQ mistakes
  2. RabbitMQ best practices
  3. This SO thread gives an in-depth analysis of RabbitMQ and concurrent consumption

If you want to pair message consumption with multiprocessing, the usual pattern is to let the main process receive the messages, deliver their payload to a pool of worker processes, and acknowledge them once they are done.

A simple example using pika.BlockingChannel and concurrent.futures.ProcessPoolExecutor:

import functools
from concurrent.futures import ProcessPoolExecutor

import pika


def process_message(body):
    """CPU-bound work; runs in a worker process."""
    ...


def ack_message(channel, delivery_tag, _future):
    """Called once the message has been processed.
    Acknowledge the message to RabbitMQ.

    Note: this fires on the executor's result thread; in production,
    hand the ack to the connection's own thread via
    connection.add_callback_threadsafe, as BlockingConnection is not
    thread-safe.
    """
    channel.basic_ack(delivery_tag=delivery_tag)


connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
pool = ProcessPoolExecutor()

for method, properties, body in channel.consume(queue='example'):
    future = pool.submit(process_message, body)
    # use partial to pass channel and delivery tag to the callback function
    ack_message_callback = functools.partial(ack_message, channel, method.delivery_tag)
    future.add_done_callback(ack_message_callback)

The above loop will endlessly consume messages from the example queue and submit them to the pool of processes. You can control how many messages are processed concurrently via the RabbitMQ consumer prefetch parameter. Check pika's basic_qos to see how to do it in Python.
