
Calling mpirun from Python on an MPI-compiled Fortran executable

I have a Fortran simulation code that can only run in parallel and has to be compiled with MPI ( make mpi=yes ) for at least 4 cores. I can run the executable, let's name it "test", without problems when I call mpirun -n 4 ./test .

Now I can generate different input files and process the outputs from Python. Hence, I want to execute the above command from Python to run several simulations. The main problem seems to be that, no matter if I use os.system, subprocess.call, .run, .Popen, etc., only one process is available to MPI (which would actually make sense if Python started just one new subprocess).
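In essence, this is the kind of driver I have in mind (just a sketch; the case names and the way "test" picks up its input files are placeholders for my actual setup):

import subprocess

# Sketch of the intended driver loop: prepare an input file, run the
# MPI executable on 4 ranks, then post-process the results.
for case in ['caseA', 'caseB', 'caseC']:   # placeholder case names
    # ... write the input file for this case ...
    subprocess.run(['mpirun', '-n', '4', './test'], check=True)
    # ... read and process the output files for this case ...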

For example, when using os.system('mpirun -n 4 ./test') or subprocess.run(['mpirun', '-n', '4', './test']) , I get the following output:

starting MPI-3.1 code.
 using    1 nodes with total    1 processors and    1 threads.
 node    0: procs=   1 threads=   1 name=my-pcname

>>>>>>  General information  <<<<<<<
 -------------------------------------------------------------------
 Started              : 22-OCT-2020 18:38:55
 Name of host machine : 
 Current directory    : /home/test
 Compiled on          : Linux (Intel Fortran)
 Compiled             : without OpenMP
 Compiled             : with MPI
 Linked FFT package   : cvecfft_acc         
 Compiled for         :           4  MPI processors
                                  1  OMP threads
 Running on           :           1  MPI processors
                                  1  OMP threads
                                  1  OMP processors per node

>>>> some more information about simulation parameters...

par.f: Mismatch nproc in par.f and MPI nodes:
 Compiled for :            4  MPI processors
 Running on   :            1  MPI processors
 *** STOP *** at location (node            0 ):           3

Interestingly enough, I get this output 4 times, which confuses me even more...

Any ideas on how I could get this to work? And sorry if a similar question has already been asked somewhere; I searched for at least an hour and asked colleagues before deciding to post this question here...

I use the Intel Fortran compiler ifort 19.0.5.281 20190815 together with Open MPI 4.0.5.

This looks strange. All these messages relate to the execution environment. Are they produced by your code, or are they part of a custom mpirun script?

If I run a simple MPI-based case

program main

  use mpi

  implicit none
  integer :: error, id, p

  ! Initialize MPI, then query the communicator size and this rank's id
  call MPI_Init ( error )
  call MPI_Comm_size ( MPI_COMM_WORLD, p, error )
  call MPI_Comm_rank ( MPI_COMM_WORLD, id, error )

  write (*,*) 'Hello: ', id, '/', p

  call MPI_Finalize ( error )

end

compiled with a GNU-based toolchain

gnuchain/openmpi-4.0.2-gcc-9.2.0

in the following way: mpif90 -o hello hello.f90 , it works.

And then, if I use a simple Python script

import os

os.system('mpirun -np 4 ./hello')

it simply works as expected:

> python ./mpi_run.py
 Hello:            1 /           4
 Hello:            3 /           4
 Hello:            0 /           4
 Hello:            2 /           4

Do you, by any chance, run your Python code itself as an MPI code?
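If so, that could also explain why you see the banner four times: each of the four Python ranks would spawn its own inner mpirun, and each inner run would only see a single process. A quick check is to look for Open MPI's launcher variables inside the Python script; if OMPI_COMM_WORLD_RANK is set, the script itself was started by mpirun. In that case, one possible workaround (just a sketch, assuming Open MPI; other MPI stacks use different variable prefixes) is to launch the inner mpirun with those variables stripped from the environment:

import os
import subprocess

# Copy the environment, dropping the variables that Open MPI's launcher
# exports to its children; a nested mpirun can misread them.
# The prefixes below are an assumption and differ between MPI stacks.
clean_env = {k: v for k, v in os.environ.items()
             if not k.startswith(('OMPI_', 'PMIX_', 'PMI_'))}

subprocess.run(['mpirun', '-n', '4', './test'], env=clean_env, check=True)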
