
How do I parallelize a simple loop in Python?

I have a loop that exhausts my RAM every time, and I would like to parallelize it.

I tried this code, but it doesn't work:

from joblib import Parallel, delayed

from Bio.Align.Applications import ClustalOmegaCommandline


def run(test):
    im = process_image(Image.open(test['Path'][i]))
    test_images.append(im)


if __name__ == "__main__":
    test_images = []
    test = range(len(test))

    Parallel(n_jobs=len(test)(
        delayed(run)(i) for i in len(test))

I got this error:

File "", line 16
    delayed(run)(i) for i in len(test))
                                      ^
SyntaxError: unexpected EOF while parsing

My loop:

test_images = []
for i in range(len(test)):
  im = process_image(Image.open(test['Path'][i]))
  test_images.append(im)
test_images = np.asarray(test_images)

I have tried several solutions, but I need the results combined into a single output.

Can you try the following?

import concurrent.futures

import numpy as np
from PIL import Image

def process_image(img_path):
    img_obj = Image.open(img_path)
    # your logic here
    return img_obj

def main():
    # `test` is the DataFrame from the question
    image_dict = {}
    with concurrent.futures.ProcessPoolExecutor() as executor:
        # executor.map preserves input order, so each result can be
        # zipped back to the path it came from
        for img_path, im in zip(test['Path'], executor.map(process_image, test['Path'])):
            image_dict[img_path] = im
    return image_dict

if __name__ == '__main__':
    image_dict = main()
    # dict.values() is a view; convert it to a list before np.asarray
    test_images = np.asarray(list(image_dict.values()))

I am not sure that parallelization is the answer to memory problems.

Do you need to store every image in a list held in memory? Maybe just save the paths and load each image only when it is needed?

Or try out generators. There, values are produced lazily (only when they are needed), which results in lower memory consumption.
