Is a queue object automatically shared among processes from Python's multiprocessing module?
I have recently started working with Python's multiprocessing module. I understand the explanation of queues, but I recently found on https://pymotw.com/2/multiprocessing/communication.html that queues don't need to be passed as args to the Process constructor, e.g.

p = Process(target=f, args=(q,))
Instead, it seems that they are globally shared. I thought this was only the case with managed queues, i.e.

queue = manager.Queue()

Can someone help me understand this?
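For reference, here is the explicit-passing pattern from the question in full. This is a minimal sketch; the `worker` function name and the message string are illustrative, not from the original post:

```python
from multiprocessing import Process, Queue

def worker(q):
    # The queue arrives as an argument; multiprocessing transfers the
    # queue's underlying pipe to the child so both ends stay connected.
    q.put("hello from child")

if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    print(q.get())  # prints "hello from child"
    p.join()
```

The `if __name__ == "__main__":` guard matters on platforms that use the "spawn" start method, because the child re-imports the main module.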
On Unix, a child process is created with fork(). On Windows, a child process is created by invoking the same script with special arguments.
In both cases, the q variable may exist in the child process, either because the child inherited the parent's state or because the relevant code ran before execution reached the worker function.
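A small sketch of that point, assuming the "fork" start method on Unix: the child sees a copy of the parent's global at fork time, but its modification never reaches the parent (the `state` name is illustrative):

```python
import multiprocessing as mp

state = {"value": 0}  # plain module-level object, not an IPC channel

def child():
    # Under "fork" this sees the parent's value at fork time;
    # under "spawn" it only sees what a fresh import re-creates.
    state["value"] += 1
    print("child sees", state["value"])

if __name__ == "__main__":
    state["value"] = 41
    p = mp.Process(target=child)
    p.start()
    p.join()
    print("parent still sees", state["value"])  # child's change is invisible here
```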
But that is not enough. An IPC mechanism needs to be set up between the processes for the queue to play its role as a communication channel. Otherwise, it's just a regular local object.
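As the question notes, a manager-based queue is one way to get such a shared channel; a minimal sketch (the `worker` name and message are illustrative):

```python
from multiprocessing import Manager, Process

def worker(q):
    q.put("via manager")

if __name__ == "__main__":
    with Manager() as manager:
        # manager.Queue() returns a proxy; the real queue lives in a
        # separate server process, so any process holding the proxy
        # can use it as an IPC channel.
        q = manager.Queue()
        p = Process(target=worker, args=(q,))
        p.start()
        p.join()
        print(q.get())  # prints "via manager"
```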
When in doubt, see the official documentation, which is the authoritative source of information and is generally of exceptional quality. With multiprocessing, it's especially important to stick to the docs because, due to its quirky nature, various things may seem to work but then break in unpredictable ways.