Communicating with Process in Python 3.4 multiprocessing through function calling
I created a new class that subclasses multiprocessing.Process, and I would like to invoke methods on it. The methods change instance members but take no arguments, so I expected them to work transparently. For instance, in the MWE below I create a class that inherits from Process and has a stop() method that simply sets an instance flag. When this flag is set, though, the run() method doesn't seem to notice the change. This all worked when I was inheriting from threading.Thread. Thoughts?
from queue import Empty
import multiprocessing

class Worker(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)  # , daemon=True)
        self.queue = queue
        self.close = False

    def stop(self):
        self.close = True
        print(self.close)

    def run(self):
        while (not self.close) or self.queue.qsize() > 0:
            print(self.close)
            print(self.queue.qsize())
            for item in range(0, self.queue.qsize()):
                try:
                    self.queue.get_nowait()
                except Empty:
                    continue

queue = multiprocessing.Queue()
dbq = Worker(queue)
dbq.start()
queue.put("d")
dbq.stop()
dbq.join()
You have to use something like multiprocessing.Value for synchronization between processes.

Sample code:
from queue import Empty
from ctypes import c_bool
import multiprocessing

class Worker(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)  # , daemon=True)
        self.queue = queue
        self.close = multiprocessing.Value(c_bool, False)

    def stop(self):
        self.close.value = True
        print(self.close.value)

    def run(self):
        while (not self.close.value) or self.queue.qsize() > 0:
            print(self.close.value)
            print(self.queue.qsize())
            for item in range(0, self.queue.qsize()):
                try:
                    self.queue.get_nowait()
                except Empty:
                    continue

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    dbq = Worker(queue)
    dbq.start()
    queue.put("d")
    dbq.stop()
    dbq.join()
Processes do not share memory space with their parent the way threads do. When a process is forked, it gets a copy of the parent's memory, so you can't share state as easily as with threads (effectively, at least; in practice there is copy-on-write).
I recommend using a synchronization primitive like Event to stop workers, because workers are usually stopped together in response to something that happened.

You will end up with something like this (notice: no stop method on the workers):
from queue import Empty
import multiprocessing

class Worker(multiprocessing.Process):
    # added the event to the initializing function
    def __init__(self, queue, close_event):
        multiprocessing.Process.__init__(self)  # , daemon=True)
        self.queue = queue
        self.close = close_event

    def run(self):
        while (not self.close.is_set()) or self.queue.qsize() > 0:
            print(self.close.is_set())
            print(self.queue.qsize())
            for item in range(0, self.queue.qsize()):
                try:
                    self.queue.get_nowait()
                except Empty:
                    continue

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    # create a shared event for processes to react to
    close_event = multiprocessing.Event()
    # send the event to all processes
    dbq = Worker(queue, close_event)
    dbq.start()
    queue.put("d")
    # set the event to stop workers
    close_event.set()
    dbq.join()