How to parallelize this nested loop in Python that calls Abaqus

I have the nested loops below. How can I parallelize the outer loop, so that it is distributed across 4 simultaneous runs, and wait for all 4 runs to complete before moving on with the rest of the script?

    for r in range(4):
        for k in range( r*nAnalysis//4, (r+1)*nAnalysis//4 ):

            # - Write Abaqus INP file - #
            writeABQfile(ppos,props,totalTime[k],recInt[k],inpFiles[k],i,lineNum[k],aPath[k])

            # - Delete LCK file to Enable Another Analysis - #
            delFile(aPath[k]+"/"+inpFiles[k]+".lck")

            # - Run Analysis - #
            runABQfile(inpFiles[k],aPath[k])

I tried using `multiprocessing.Pool`, but it never gets into the function:

    def parRunABQfiles(nA,nP,r,ppos,prop0,prop1,totalTime2Run_,recIntervals_,inpFiles_,i,lineNumbers_,aPath_):
        from os import path
        from time import sleep
        from auxFunctions import writeABQfile, runABQfile, delFile
        print("I am Here")
        for k in range( r*nA//nP, (r+1)*nA//nP ):
            # - Write Abaqus INP file - #
            writeABQfile(ppos,prop0,prop1,totalTime2Run_,recIntervals_,inpFiles_,i,lineNumbers_,aPath_)
            # - Delete LCK file to Enable Another Analysis - #
            delFile(aPath_+"/"+inpFiles_[k]+".lck")
            # - Run Analysis - #
            runABQfile(inpFiles_,aPath_)
            # - Make Sure Analysis is not Bypassed - #
            while path.isfile(aPath_+"/"+inpFiles_[k]+".lck"):
                sleep(0.1)
        return k

    results = zip(*pool.map(parRunABQfiles, range(0, 4, 1)))

`runABQfile` is just a `subprocess.call` to a shell script that runs Abaqus:

     def runABQfile(inpFile, path):
         import subprocess

         prcStr1 = 'sbatch ' + path + '/runJob.sh'
         subprocess.call(prcStr1, stdin=None, stdout=None, stderr=None, shell=True)

         return
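For what it's worth, `sbatch` returns as soon as the job is queued, not when it finishes, which is why the `.lck` polling loop above exists. Slurm's `--wait` flag would make the call itself block until the job completes. A hypothetical blocking variant (the command-building helper is my own, not from the script above):

```python
import subprocess

def build_sbatch_cmd(path, wait=True):
    # Build the Slurm submission command; with --wait, sbatch does not
    # return until the submitted job terminates.
    cmd = ["sbatch"]
    if wait:
        cmd.append("--wait")
    cmd.append(path + "/runJob.sh")
    return cmd

def runABQfileBlocking(inpFile, path):
    # Like runABQfile, but blocks until the Slurm job finishes
    return subprocess.call(build_sbatch_cmd(path))
```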

No errors show up, so I am not sure why it never gets into the function. I know it doesn't because `writeABQfile` never writes the input file. The question again is:

How can I parallelize the outer loop, so that it is distributed across 4 simultaneous runs, and wait for all 4 runs to complete before moving on with the rest of the script?

Use the `concurrent.futures` module if multiprocessing is what you want.

from concurrent.futures import ProcessPoolExecutor

def each(r):
    for k in range( r*nAnalysis//4, (r+1)*nAnalysis//4 ):
        writeABQfile(ppos,props,totalTime[k],recInt[k],inpFiles[k],i,lineNum[k],aPath[k])
        delFile(aPath[k]+"/"+inpFiles[k]+".lck")
        runABQfile(inpFiles[k],aPath[k])

with ProcessPoolExecutor(max_workers=4) as executor:
    output = executor.map(each, range(4)) # returns an iterable

Leaving the `with` block waits for all four workers to finish, which gives you the "wait for all runs to complete before moving on" behavior you asked for.

If you just want to "do" stuff rather than "produce" results, check out the `as_completed` function from the same module. There are direct examples in the docs.
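For instance, a minimal sketch of the `as_completed` pattern, with a placeholder `run_job` standing in for your per-chunk Abaqus work:

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_job(r):
    # Placeholder for the real per-chunk work (write INP, run Abaqus, ...)
    return r * r

def run_all(n_workers=4):
    results = {}
    with ProcessPoolExecutor(max_workers=n_workers) as executor:
        # Submit each chunk individually so we can react as each finishes
        futures = {executor.submit(run_job, r): r for r in range(4)}
        for fut in as_completed(futures):
            r = futures[fut]
            results[r] = fut.result()  # re-raises any worker exception here
    return results

if __name__ == "__main__":
    print(run_all())
```

Unlike `executor.map`, which yields results in submission order, `as_completed` hands you each future the moment it finishes, which is handy for logging progress across long-running jobs.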
