
Plotting the pool map for multiprocessing in Python

How can I run a multiprocessing pool where run1-3 are processed asynchronously in Python? I am trying to pass the values (10,2,4), (55,6,8), and (9,8,7) to run1, run2, and run3 respectively.

import multiprocessing

def Numbers(number, number2, divider):
    value = number * number2 / divider
    return value

if __name__ == "__main__":

    with multiprocessing.Pool(3) as pool:               # 3 processes
        # fails: map() passes each tuple as a single argument,
        # so Numbers() is missing its other two parameters
        run1, run2, run3 = pool.map(Numbers, [(10,2,4),(55,6,8),(9,8,7)]) # map input & output

You just need to use the starmap method instead of map, which, according to the documentation:

Like map() except that the elements of the iterable are expected to be iterables that are unpacked as arguments.

Hence an iterable of [(1, 2), (3, 4)] results in [func(1, 2), func(3, 4)].

import multiprocessing

def Numbers(number, number2, divider):
    value = number * number2 / divider
    return value

if __name__ == "__main__":

    with multiprocessing.Pool(3) as pool:               # 3 processes
        run1, run2, run3 = pool.starmap(Numbers, [(10,2,4),(55,6,8),(9,8,7)]) # map input & output
    print(run1, run2, run3)

Prints:

5.0 41.25 10.285714285714286
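
Note that starmap blocks until every result is ready. Since the question asks for the runs to be processed asynchronously, it's worth mentioning that Pool also provides starmap_async, which returns immediately with an AsyncResult; a minimal sketch of the same computation:

import multiprocessing

def Numbers(number, number2, divider):
    return number * number2 / divider

if __name__ == "__main__":
    with multiprocessing.Pool(3) as pool:
        # starmap_async returns an AsyncResult right away
        result = pool.starmap_async(Numbers, [(10, 2, 4), (55, 6, 8), (9, 8, 7)])
        # ... the parent is free to do other work here ...
        run1, run2, run3 = result.get()   # get() blocks until all results arrive
    print(run1, run2, run3)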

Note

This is the correct way of doing what you want to do, but using multiprocessing for such a trivial worker function will not improve performance; in fact, it will degrade performance, due to the overhead of creating the pool and passing arguments and results back and forth between address spaces.
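
If you want to see that overhead for yourself, here is a minimal timing sketch (the exact numbers will vary by machine, but the pool version will typically be far slower for a workload this small):

import multiprocessing
import time

def Numbers(number, number2, divider):
    return number * number2 / divider

if __name__ == "__main__":
    args = [(10, 2, 4), (55, 6, 8), (9, 8, 7)]

    start = time.perf_counter()
    serial = [Numbers(*a) for a in args]      # plain serial calls
    print(f"serial: {time.perf_counter() - start:.6f}s")

    start = time.perf_counter()
    with multiprocessing.Pool(3) as pool:     # pays for process startup and pickling
        parallel = pool.starmap(Numbers, args)
    print(f"pool:   {time.perf_counter() - start:.6f}s")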

Python's multiprocessing library does, however, have a wrapper for piping data between a parent and child process: the Manager, which offers shared-data utilities such as a shared dictionary. There is a good Stack Overflow post on the topic.

Using multiprocessing you can pass unique arguments and a shared dictionary to each process; you just need to ensure each process writes to a different key in the dictionary.

Applied to your values, it looks like this:

import multiprocessing


def worker(process_key, return_dict, compute_array):
    """Worker function: store the result under this process's own key."""
    number, number2, divider = compute_array
    return_dict[process_key] = number * number2 / divider


if __name__ == "__main__":
    manager = multiprocessing.Manager()
    return_dict = manager.dict()          # shared between parent and children
    jobs = []
    compute_arrays = [[10, 2, 4], [55, 6, 8], [9, 8, 7]]
    for i, compute_array in enumerate(compute_arrays):
        p = multiprocessing.Process(target=worker,
                                    args=(i, return_dict, compute_array))
        jobs.append(p)
        p.start()

    for proc in jobs:
        proc.join()                       # wait for every child to finish
    print(return_dict)

Edit: The information from Booboo is much more precise. I originally had a recommendation for threading here, which I'm removing as it's certainly not the right tool in Python due to the GIL.
