
Python Multiprocessing with shared data source and multiple class instances

My program needs to spawn multiple instances of a class, each processing data that is coming from a streaming data source.

For example:

parameters = [1, 2, 3]

class FakeStreamingApi:
    def __init__(self):
        pass

    def data(self):
        return 42

class DoStuff:
    def __init__(self, parameter):
        self.parameter = parameter

    def run(self):
        data = streaming_api.data()
        output = self.parameter ** 2 + data # Some CPU intensive task
        print(output)

streaming_api = FakeStreamingApi()

# Here's how this would work with no multiprocessing
instance_1 = DoStuff(parameters[0])
instance_1.run()

Once the instances are running, they don't need to interact with each other; they just have to get the data as it comes in (and print error messages, etc.).

I am totally at a loss as to how to make this work with multiprocessing, since I first have to create a new instance of the class DoStuff and then have it run.

This is definitely not the way to do it:

# Let's try multiprocessing
import multiprocessing

for parameter in parameters:
    processes = [ multiprocessing.Process(target = DoStuff, args = (parameter)) ]

# Hmm, this doesn't work...

We could try defining a function to spawn classes, but that seems ugly:

import multiprocessing

def spawn_classes(parameter):
    instance = DoStuff(parameter)
    instance.run()

for parameter in parameters:
    processes = [ multiprocessing.Process(target = spawn_classes, args = (parameter,)) ]

# Can't tell if it works -- no output on screen?
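For what it's worth, the spawn_classes approach does run once the processes are collected in a list, started, and joined; the silence above comes from never calling start(). A minimal runnable sketch (same fake API as above):

```python
import multiprocessing

parameters = [1, 2, 3]

class FakeStreamingApi:
    def data(self):
        return 42

class DoStuff:
    def __init__(self, parameter):
        self.parameter = parameter

    def run(self):
        data = streaming_api.data()
        print(self.parameter ** 2 + data)  # Some CPU intensive task

streaming_api = FakeStreamingApi()

def spawn_classes(parameter):
    # Runs in the child process; builds its own DoStuff there.
    DoStuff(parameter).run()

if __name__ == '__main__':
    processes = [multiprocessing.Process(target=spawn_classes, args=(p,))
                 for p in parameters]
    for proc in processes:
        proc.start()   # without start(), nothing ever runs
    for proc in processes:
        proc.join()
```

Note that on platforms using the spawn start method, each child re-imports the module and so gets its own FakeStreamingApi instance, which is exactly the duplication complained about below.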

Plus, I don't want to have 3 different copies of the API interface class running; I want the data to be shared between all the processes... and as far as I can tell, multiprocessing creates copies of everything for each new process.

Ideas?

Edit: I think I may have got it... is there anything wrong with this?

import multiprocessing

parameters = [1, 2, 3]

class FakeStreamingApi:
    def __init__(self):
        pass

    def data(self):
        return 42

class Worker(multiprocessing.Process):
    def __init__(self, parameter):
        super(Worker, self).__init__()
        self.parameter = parameter

    def run(self):
        data = streaming_api.data()
        output = self.parameter ** 2 + data # Some CPU intensive task
        print(output)

streaming_api = FakeStreamingApi()

if __name__ == '__main__':
    jobs = []
    for parameter in parameters:
        p = Worker(parameter)
        jobs.append(p)
        p.start()
    for j in jobs:
        j.join()

I came to the conclusion that it would be necessary to use multiprocessing.Queue objects to solve this. The data source (the streaming API) needs to pass a copy of each datum to every process so that all of them can consume it.
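That fan-out can be sketched with one multiprocessing.Queue per consumer (the worker function and variable names here are my own, not from the original code): the producer puts a copy of each datum on every queue, and each process reads only its own queue.

```python
import multiprocessing

def worker(parameter, queue):
    # Block until the producer delivers this process's copy of the data.
    data = queue.get()
    print(parameter ** 2 + data)

if __name__ == '__main__':
    parameters = [1, 2, 3]
    # One queue per worker: each process gets its own copy of every datum.
    queues = [multiprocessing.Queue() for _ in parameters]
    workers = [multiprocessing.Process(target=worker, args=(p, q))
               for p, q in zip(parameters, queues)]
    for w in workers:
        w.start()
    # The streaming source fans the same value out to all queues.
    for q in queues:
        q.put(42)
    for w in workers:
        w.join()
```

In a real streaming setup, the put loop would sit inside whatever callback or read loop the API provides, so every incoming datum is duplicated onto each worker's queue.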

There's another way to solve this, using a multiprocessing.Manager to create a shared dict, but I didn't explore it further: it looks fairly inefficient and it cannot propagate changes to inner values (e.g. if you have a dict of lists, changes to the inner lists will not propagate).
