
Communicating with a Process in Python 3.4 multiprocessing through function calls

I created a class that subclasses multiprocessing.Process, and I would like to invoke methods on it. The methods change instance members but take no arguments, so I thought they would work transparently. For instance, in the MWE below the Worker class inherits from Process and has a stop() method that just sets an instance flag. When this flag is set, though, the run() method doesn't seem to notice the change. This all seemed to work when I was inheriting from threading.Thread. Thoughts?

from queue import Empty
import multiprocessing


class Worker(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self) # , daemon=True)
        self.queue = queue
        self.close = False

    def stop(self):
        self.close = True
        print(self.close)

    def run(self):
        while (not self.close) or self.queue.qsize() > 0:
            print(self.close)
            print(self.queue.qsize())
            for item in range(0, self.queue.qsize()):
                try:
                    self.queue.get_nowait()
                except Empty:
                    continue

queue = multiprocessing.Queue()
dbq = Worker(queue)
dbq.start()
queue.put("d")
dbq.stop()
dbq.join()

You have to use something like multiprocessing.Value for synchronization between processes. In your code, stop() is called in the parent process, so the assignment to self.close changes the parent's copy of the object; run() executes in the child process, which has its own copy and never sees the change.

Sample code:

from queue import Empty
from ctypes import c_bool
import multiprocessing

class Worker(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self) # , daemon=True)
        self.queue = queue
        self.close = multiprocessing.Value(c_bool, False)

    def stop(self):
        self.close.value = True
        print(self.close.value)

    def run(self):
        while (not self.close.value) or self.queue.qsize() > 0:
            print(self.close.value)
            print(self.queue.qsize())
            for item in range(0, self.queue.qsize()):
                try:
                    self.queue.get_nowait()
                except Empty:
                    continue

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    dbq = Worker(queue)
    dbq.start()
    queue.put("d")
    dbq.stop()
    dbq.join()
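As a side note, the object returned by multiprocessing.Value is a synchronized wrapper that also carries a lock, which matters once updates become read-modify-write instead of a simple flag assignment. A minimal sketch of an atomic shared counter (the work function and process count here are illustrative, not part of the answer above):

import multiprocessing
from ctypes import c_int

def work(counter):
    for _ in range(1000):
        # get_lock() returns the lock guarding the shared value;
        # without it, "counter.value += 1" is a non-atomic read-modify-write
        with counter.get_lock():
            counter.value += 1

if __name__ == '__main__':
    counter = multiprocessing.Value(c_int, 0)
    procs = [multiprocessing.Process(target=work, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 4000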

Processes do not share memory with their parent the way threads do. When a process is forked it gets its own copy of the parent's memory, so you can't share state as easily as with threads (effectively a full copy; in practice the operating system uses copy-on-write).
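To make the copy semantics concrete, here is a minimal sketch (the class and attribute names are made up for illustration) showing that an attribute assigned in the parent after start() is invisible to the child, which operates on its own copy of the object:

import multiprocessing
import time

class Demo(multiprocessing.Process):
    def __init__(self):
        multiprocessing.Process.__init__(self)
        self.flag = False

    def run(self):
        # runs in the child process, on the child's copy of the object
        time.sleep(0.5)
        print("child sees flag =", self.flag)  # prints False

if __name__ == '__main__':
    p = Demo()
    p.start()
    p.flag = True  # changes only the parent's copy
    p.join()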

To stop workers, I recommend using a synchronization primitive like Event, because workers are usually stopped together in response to something that happened.

You will end up with something like this (note that the workers no longer need a stop method):

from queue import Empty
import multiprocessing


class Worker(multiprocessing.Process):
    # added the event to the initializing function
    def __init__(self, queue, close_event):
        multiprocessing.Process.__init__(self) # , daemon=True)
        self.queue = queue
        self.close = close_event

    def run(self):
        while (not self.close.is_set()) or self.queue.qsize() > 0:
            print(self.close.is_set())
            print(self.queue.qsize())
            for item in range(0, self.queue.qsize()):
                try:
                    self.queue.get_nowait()
                except Empty:
                    continue

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    # create a shared event for processes to react to
    close_event = multiprocessing.Event()
    # pass the event to the worker
    dbq = Worker(queue, close_event)
    dbq.start()
    queue.put("d")
    # set the event to stop workers
    close_event.set()
    dbq.join()
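Because a single Event can be observed by any number of processes, the same pattern scales to a pool of workers with no changes to the class. A short usage sketch reusing the Worker class above (the worker count and queue items are arbitrary):

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    close_event = multiprocessing.Event()
    # every worker shares the same event
    workers = [Worker(queue, close_event) for _ in range(4)]
    for w in workers:
        w.start()
    for item in ("a", "b", "c", "d"):
        queue.put(item)
    # one call signals all workers to drain the queue and exit
    close_event.set()
    for w in workers:
        w.join()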
