
Error when trying to use MPI with emcee on a slurm cluster

Hi and thanks for your help already!

I am trying to get emcee to run with MPI on a Slurm cluster, but when I launch my code it fails after a few minutes with the long traceback shown below, which seems to revolve around an 'Invalid communicator' error.

Do you have any idea what I might be doing wrong?

I'm using Anaconda, so I tried reinstalling the environment, changing the packages used, and removing everything that might not be necessary, but the error is always the same.

Here is the script I submit via sbatch:

#!/bin/bash
#SBATCH --partition=largemem
#SBATCH --ntasks=40
#SBATCH --ntasks-per-node=40
#SBATCH --mem-per-cpu=4000
#SBATCH --mail-user=(my email)
#SBATCH --mail-type=ALL
#SBATCH --output=results/LastOpti.out
#SBATCH --error=results/LastOpti.err
#SBATCH --job-name=gal

source ~/anaconda3/etc/profile.d/conda.sh
conda activate EmceeMPI

cd ~/GalarioFitting

srun -n $SLURM_NTASKS python3 OptimizationGalarioMPI.py --nwalkers 560 --iterations 3000 --suffix _lasttest

conda deactivate

In my Python code I use schwimmbad's MPIPool.
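For reference, the pool is wired into the sampler roughly like this (a minimal sketch; the log-probability and the numbers here are placeholders, not my actual code):

import sys
import numpy as np
import emcee
from schwimmbad import MPIPool

def log_prob(theta):
    # placeholder log-probability, just for illustration
    return -0.5 * np.sum(theta ** 2)

with MPIPool() as pool:
    # only the master process runs the sampler; the workers wait for tasks
    if not pool.is_master():
        pool.wait()
        sys.exit(0)

    ndim, nwalkers = 5, 560
    pos = np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, pool=pool)
    pos, prob, state = sampler.run_mcmc(pos, 3000, progress=True)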

The error is this big chunk:

Traceback (most recent call last):
  File "OptimizationGalarioMPI.py", line 303, in <module>
    pos, prob, state = sampler.run_mcmc(pos, iterations, progress=True)
  File "/home/mbenisty/anaconda3/envs/EmceeMPI/lib/python3.7/site-packages/emcee-3.0rc2-py3.7.egg/emcee/ensemble.py", line 346, in run_mcmc
  File "/home/mbenisty/anaconda3/envs/EmceeMPI/lib/python3.7/site-packages/emcee-3.0rc2-py3.7.egg/emcee/ensemble.py", line 305, in sample
  File "/home/mbenisty/anaconda3/envs/EmceeMPI/lib/python3.7/site-packages/emcee-3.0rc2-py3.7.egg/emcee/moves/red_blue.py", line 92, in propose
  File "/home/mbenisty/anaconda3/envs/EmceeMPI/lib/python3.7/site-packages/emcee-3.0rc2-py3.7.egg/emcee/ensemble.py", line 389, in compute_log_prob
  File "/home/mbenisty/anaconda3/envs/EmceeMPI/lib/python3.7/site-packages/schwimmbad/mpi.py", line 168, in map
    status=status)
  File "mpi4py/MPI/Comm.pyx", line 1173, in mpi4py.MPI.Comm.recv
  File "mpi4py/MPI/msgpickle.pxi", line 302, in mpi4py.MPI.PyMPI_recv
  File "mpi4py/MPI/msgpickle.pxi", line 261, in mpi4py.MPI.PyMPI_recv_match
mpi4py.MPI.Exception: Invalid communicator, error stack:
PMPI_Mprobe(120):  MPI_Mprobe(source=-2, tag=-1, comm=MPI_COMM_WORLD, message=0x7ffed877b790, status=0x7ffed877b7a0)
PMPI_Mprobe(85).: Invalid communicator

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "OptimizationGalarioMPI.py", line 303, in <module>
    pos, prob, state = sampler.run_mcmc(pos, iterations, progress=True)
  File "/home/mbenisty/anaconda3/envs/EmceeMPI/lib/python3.7/site-packages/schwimmbad/pool.py", line 46, in __exit__
    self.close()
  File "/home/mbenisty/anaconda3/envs/EmceeMPI/lib/python3.7/site-packages/schwimmbad/mpi.py", line 188, in close
    self.comm.send(None, worker, 0)
  File "mpi4py/MPI/Comm.pyx", line 1156, in mpi4py.MPI.Comm.send
  File "mpi4py/MPI/msgpickle.pxi", line 174, in mpi4py.MPI.PyMPI_send

It could be that the MPI implementation shipped with conda does not include Slurm support. If so, try starting your program with mpirun rather than srun. But this error often indicates that several MPI implementations are active simultaneously. Make sure that no environment module is loaded when you submit your job, and that no MPI-related OS package is installed.
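As a sketch (assuming the same environment and paths as the script above, and a cluster that uses environment modules), you can check which MPI library mpi4py is actually linked against and then switch the launch line to mpirun:

# With the EmceeMPI environment active, print the MPI implementation
# that the conda-installed mpi4py was built against:
python3 -c "from mpi4py import MPI; print(MPI.Get_library_version())"

# Check that no system MPI module or launcher shadows the conda one:
module list
which mpirun srun

# In the sbatch script, replace the srun line with mpirun from the same environment:
mpirun -n $SLURM_NTASKS python3 OptimizationGalarioMPI.py --nwalkers 560 --iterations 3000 --suffix _lasttest

If the two launchers report different MPI implementations, that mismatch is the most likely source of the 'Invalid communicator' error.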
