
Python multiprocessing for each key in dictionary

I am new to Python and I am trying to scale my processing in parallel. I have a file with a certain number of tuples, each with a certain value in the last column. I want to split this file's data and apply my function to each chunk in parallel. The catch is that the data has to be split into chunks based on the last column's value, and the function then applied to each chunk. For example, the last column may have 'a' for some tuples, 'b' for some and 'c' for others. In that case I should get three chunks and process them in parallel. The number of unique values in the last column may change depending on the dataset, so I need to use the CPUs accordingly.

Q1: What I have tried so far is to read the file and build a dictionary from the records, so for the example above I get three key-value pairs: one with 'a' as the key and all records ending in 'a' as the value, and likewise for 'b' and 'c'. I know I can use chunksize in multiprocessing, but here the split is not by size, it is by key, so how can I achieve this? The grouping step I have so far looks roughly like what is sketched below.
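
A rough sketch of the grouping step (the file name 'data.txt' and the tab delimiter are only placeholders for my actual format):

from collections import defaultdict

chunks = defaultdict(list)                        # key = last-column value, value = list of records
with open('data.txt') as fh:                      # placeholder file name
    for line in fh:
        fields = line.rstrip('\n').split('\t')    # placeholder delimiter
        chunks[fields[-1]].append(tuple(fields))  # group each record by its last column
# chunks now maps e.g. 'a' -> [records ending in 'a'], 'b' -> [...], 'c' -> [...]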

Q2: After processing the chunks above, I need the output of all of them together (order does not matter) so that I can use the whole output for further processing. How can I make my main program wait until all those processes complete?

Let me know if further input is required. Thanks.

Assuming, as you described, you have three sets as the values of a dictionary d and want to apply a function f to each of them separately:

from multiprocessing import Pool
p = Pool()                                   # number of worker processes defaults to the number of CPUs
keys, values = zip(*d.items())               # keys and values in matching order (use d.iteritems() on Python 2)
processed_values = p.map(f, values)          # apply f to each chunk in parallel; map blocks until every result is ready
# then combine the result sets; processed_values is in the same order as keys
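
Because p.map blocks, your main program will not continue until every chunk has been processed, which also answers your second question. Putting the pieces together, a minimal runnable sketch might look like this (the worker function process_chunk, the toy dictionary and the final flattening step are only illustrative assumptions):

from multiprocessing import Pool

def process_chunk(records):
    # placeholder worker: replace with your real per-chunk processing
    return [r + ('processed',) for r in records]

if __name__ == '__main__':
    d = {'a': [(1, 'a'), (2, 'a')],
         'b': [(3, 'b')],
         'c': [(4, 'c'), (5, 'c')]}                 # toy data already grouped by last column
    with Pool() as p:                               # one worker per CPU by default
        keys, values = zip(*d.items())
        results = p.map(process_chunk, values)      # blocks until all chunks are done
    combined = [rec for chunk in results for rec in chunk]   # order of chunks does not matter here
    print(dict(zip(keys, results)))
    print(combined)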
