
What is the best way to create shared message stream for Python scripts?

What I want to do: I need a simple message stream, so some scripts can send results to it and another script can pick up those results and do some work asynchronously.

Main problem: I want to see what's happening, so if something breaks I can fix it quickly. I tried Celery + RabbitMQ (I can see workers with their args using Flower, but scheduling is too complicated) and multiprocessing.Queue (simple, but I can't see workers with their args).


What I've done: I tried to build something similar using a MongoDB capped collection, reacting by running Popen from multiple processes. Some scripts write documents to the collection; the script below monitors it and, if some condition is met, runs another script.

Main problem: calling subprocess.Popen() from inside multiprocessing.Process() looks unnatural (it still does the work), so I'm trying to find a better and/or more stable solution :)


Listener script:

from pymongo import MongoClient, CursorType
from time import sleep
from datetime import datetime

from multiprocessing import Process
import subprocess

def worker_email(keyword):
    subprocess.Popen(["python", "worker_email.py", str(keyword)])

def worker_checker(keyword):
    subprocess.Popen(["python", "worker_checker.py", str(keyword)])

if __name__ == '__main__':

    #DB connect
    client = MongoClient('mongodb://localhost:27017/')
    db = client.admetric
    coll = db.my_collection
    cursor = coll.find(cursor_type=CursorType.TAILABLE_AWAIT)

    #Script start UTC time
    utc_run = datetime.utcnow()

    while cursor.alive:
        try:
            doc = cursor.next()
            #Print doc name/args to see in command line, while Listener runs
            print(doc)
            #Ignore docs without 'created' data or older than the script
            if 'created' in doc and doc['created'] > utc_run:
                #Dispatch on 'type' and run the matching external script
                if doc.get('type') == 'send_email':
                    p = Process(target=worker_email, args=(doc['message'],))
                    p.start()
                    p.join()
                elif doc.get('type') == 'check_data':
                    p = Process(target=worker_checker, args=(doc['message'],))
                    p.start()
                    p.join()
        except StopIteration:
            sleep(1)
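The listener above expects each document to carry 'created', 'type' and 'message' fields. A small helper like the one below keeps producer scripts consistent with that shape (a sketch; make_message is an illustrative name, not part of the original scripts). A producer would then call coll.insert_one(make_message('send_email', 'hello')) against the same capped collection.

```python
from datetime import datetime

def make_message(msg_type, message):
    # Hypothetical helper: builds a document in the shape the listener
    # expects ('created' for the age check, 'type' for dispatch,
    # 'message' as the worker argument).
    return {
        "type": msg_type,
        "message": str(message),
        "created": datetime.utcnow(),
    }
```

Note that tailable cursors only work on capped collections, so the collection has to be created as capped beforehand, e.g. db.create_collection('my_collection', capped=True, size=100000).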

As long as you have control over the worker_email and worker_checker logic, you don't need to execute them in a separate interpreter.

Just expose an entry point in the two modules and run them via multiprocessing.Process.

worker_email.py

def email_job(message):
    # start processing the message here
    pass

worker_checker.py

def check_job(message):
    # start checking the message here
    pass

listener_script.py

# you are not going to pollute the listener namespace,
# as the only names you import are the entry points of the scripts;
# therefore, encapsulation is preserved
from multiprocessing import Process

from worker_email import email_job
from worker_checker import check_job

email_process = Process(target=email_job, args=[message])
check_process = Process(target=check_job, args=[message])
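With entry points in place, the if/elif chain in the listener can also be collapsed into a lookup table keyed on the document's 'type' field. A minimal sketch (the JOBS mapping and dispatch helper are illustrative names, not part of the original answer; the two job functions stand in for the imports from the worker modules):

```python
from multiprocessing import Process

def email_job(message):
    # stand-in for worker_email.email_job
    print("emailing:", message)

def check_job(message):
    # stand-in for worker_checker.check_job
    print("checking:", message)

# map each document 'type' to its entry point
JOBS = {"send_email": email_job, "check_data": check_job}

def dispatch(doc):
    # Run the job matching doc['type'] in a child process;
    # return None for unknown or missing types.
    job = JOBS.get(doc.get("type"))
    if job is None:
        return None
    p = Process(target=job, args=(doc["message"],))
    p.start()
    return p

if __name__ == "__main__":
    p = dispatch({"type": "send_email", "message": "hello"})
    if p:
        p.join()
```

Adding a new message type then only requires registering one more entry in JOBS, instead of extending the if/elif chain.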

If you cannot expose an entry point from the worker modules, then just run subprocess.Popen directly; you gain nothing by wrapping it in a Process.

