
Interprocess Communication between two python scripts without STDOUT

I am trying to create a monitor script that watches all the threads of a huge Python script, which has several loggers and several threads running.

From Monitor.py I can run a subprocess and forward the STDOUT, which might contain the status of the threads, but since several loggers are running I see other logging mixed into it.

Question: How can I run the main script as a separate process and get custom messages and thread status without interfering with the logging? (By passing a PIPE as an argument?)

Main_Script.py
* Runs several threads
* Each thread has separate loggers

Monitor.py
* Spins up Main_Script.py
* Monitors each of the threads in Main_Script.py (and may obtain other messages from Main_Script in the future)

So far, I have tried subprocess, and Process from multiprocessing.

subprocess lets me start Main_Script.py and forward the stdout back to the monitor, but I see the logging of the threads coming in through the same STDOUT. I am using the logging library to log the data from each thread to separate files.

I also tried Process from multiprocessing. I had to call the main function of Main_Script.py as a process and send a PIPE argument to it from Monitor.py. But now I can't see Main_Script.py as a separate process when I run the top command.

Normally, you want to change the child process to work like a typical Unix userland tool: the logging and other side-band information goes to stderr (or to a file, or syslog, etc.), and only the actual output goes to stdout.

Then the problem is easy: just capture stdout to a PIPE that you process, and either capture stderr to a different PIPE, or pass it through to the real stderr.
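For example, here is a minimal sketch of what Monitor.py could look like under that convention (the script name and the idea that each stdout line is one status message are assumptions):

```python
import subprocess
import sys

# Spawn the child, capturing only its stdout; stderr is inherited,
# so the loggers' output still goes to the real stderr (or their files).
proc = subprocess.Popen(
    [sys.executable, "Main_Script.py"],
    stdout=subprocess.PIPE,
    text=True,
)

# Each line on stdout is treated as one status message.
for line in proc.stdout:
    print("status:", line.rstrip())

proc.wait()
```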


If that's not appropriate for some reason, you need to come up with some other mechanism for IPC: Unix or Windows named pipes; anonymous pipes that you pass by leaking the file descriptor across the fork/exec and then passing the fd as an argument; Unix-domain sockets; TCP or UDP localhost sockets; a higher-level protocol like a web service on top of TCP sockets; mmapped files; anonymous mmaps or pipes that you pass between processes via a Unix-domain socket or Windows API calls; …
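As one concrete illustration of the leaked-fd approach, a sketch along these lines (the file names and the line-oriented status protocol are assumptions):

```python
# Monitor.py: create an anonymous pipe and deliberately leak the
# write end into the child across fork/exec, passing the fd in argv.
import os
import subprocess
import sys

r, w = os.pipe()
proc = subprocess.Popen(
    [sys.executable, "Main_Script.py", str(w)],
    pass_fds=(w,),          # keep the write end open in the child
)
os.close(w)                 # the parent only reads

with os.fdopen(r) as status:
    for line in status:
        print("status:", line.rstrip())
proc.wait()
```

And on the child side:

```python
# In Main_Script.py: recover the fd from argv and write status to it,
# leaving stdout/stderr free for logging.
import os
import sys

status = os.fdopen(int(sys.argv[1]), "w", buffering=1)  # line-buffered
status.write("worker-1 alive\n")
```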

As you can see, there are a huge number of options. Without knowing anything about your problem other than that you want "custom messages", it's impossible to tell you which one you want.

While we're at it: if you can rewrite your code around multiprocessing rather than subprocess, there are nice high-level abstractions built into that module. For example, you can use a Queue that automatically manages synchronization and blocking, and also manages pickling/unpickling, so you can just pass any (picklable) object rather than having to worry about serializing to text and parsing the text. Or you can create shared memory holding arrays of int32 objects, or NumPy arrays, or arbitrary structures that you define with ctypes. And so on. Of course you could build the same abstractions yourself, without needing to use multiprocessing, but it's a lot easier when they're there out of the box.
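A minimal sketch of the Queue approach (the status-dict format is made up, and main_script here stands in for Main_Script.py's entry point):

```python
import multiprocessing as mp

def main_script(status_q):
    # Report thread status as plain Python objects; the Queue
    # handles pickling and synchronization for us.
    status_q.put({"thread": "worker-1", "state": "running"})
    status_q.put({"thread": "worker-1", "state": "done"})
    status_q.put(None)  # sentinel: no more messages

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=main_script, args=(q,))
    p.start()
    for msg in iter(q.get, None):  # read until the sentinel
        print("status:", msg)
    p.join()
```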


Finally, while your question is tagged ipc and pipe, and titled "Interprocess Communication", your description refers to threads, not processes. If you actually are using a bunch of threads in a single process, you don't need any of this.

You can just stick your results on a queue.Queue, or store them in a list or deque with a Lock around it, or pass in a callback to be called with each new result, or use a higher-level abstraction like concurrent.futures.ThreadPoolExecutor and return a Future object or an iterator of Futures, etc.
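For instance, a sketch of the queue.Queue variant, with the worker names and the (name, state) message format made up for illustration:

```python
import queue
import threading

status_q = queue.Queue()

def worker(name):
    # Each worker reports status on the shared queue instead of stdout.
    status_q.put((name, "started"))
    # ... real work would happen here ...
    status_q.put((name, "finished"))

threads = [threading.Thread(target=worker, args=(f"worker-{i}",))
           for i in range(3)]
for t in threads:
    t.start()

# Monitor loop: drain status messages until every worker finishes.
done = 0
while done < len(threads):
    name, state = status_q.get()
    print(f"{name}: {state}")
    if state == "finished":
        done += 1

for t in threads:
    t.join()
```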
