Invalid pointer and segmentation fault when using MPI_Gather in Fortran
I have a simple program, which is supposed to gather a number of small arrays into one big one using MPI.
PROGRAM main
    include 'mpif.h'
    integer ierr, i, myrank, thefile, n_procs
    integer, parameter :: BUFSIZE = 3
    complex*16, allocatable :: loc_arr(:), glob_arr(:)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, n_procs, ierr)

    allocate(loc_arr(BUFSIZE))
    loc_arr = 0.7 * myrank - cmplx(0.3, 0, kind=8)

    allocate(glob_arr(n_procs * BUFSIZE))
    write (*,*) myrank, shape(glob_arr)

    call MPI_Gather(loc_arr, BUFSIZE, MPI_DOUBLE_COMPLEX, &
                    glob_arr, n_procs * BUFSIZE, MPI_DOUBLE_COMPLEX, &
                    0, MPI_COMM_WORLD, ierr)

    write (*,*) myrank, "Errorcode:", ierr
    call MPI_FINALIZE(ierr)
END PROGRAM main
I have some experience with MPI in C, but for Fortran 90 nothing seems to work. Here is how I compile it (I use ifort) and run it:
mpif90 test.f90 -check all && mpirun -np 4 ./a.out
1 12
3 12
3 Errorcode: 0
1 Errorcode: 0
0 12
2 12
2 Errorcode: 0
0 Errorcode: 0
*** Error in `./a.out': free(): invalid pointer: 0x0000000000a25790 ***
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 10889 RUNNING AT LenovoX1kabel
= EXIT CODE: 6
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
What am I doing wrong? Sometimes I get this invalid-pointer error, sometimes a segmentation fault, but none of the ifort checks seem to complain.
All the error codes are 0, so I'm not sure where I'm going wrong.
You should never specify the number of processes in the counts you pass to MPI collectives; that is a simple rule of thumb. Therefore the receive count

n_procs * BUFSIZE

is clearly wrong.
And indeed the manual states:

recvcount
    Number of elements for any single receive (integer, significant only at root).
You should just use BUFSIZE. This is the same in C and Fortran: the receive count describes the number of elements received from each rank, not the total size of the receive buffer.
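A minimal sketch of the corrected call (only recvcount changes; the receive buffer is still allocated with n_procs * BUFSIZE elements, and MPI fills it with one BUFSIZE-sized chunk per rank):

    ! recvcount = BUFSIZE: elements received from EACH rank,
    ! not the total size of glob_arr. MPI computes the total
    ! itself as recvcount * number_of_ranks.
    call MPI_Gather(loc_arr, BUFSIZE, MPI_DOUBLE_COMPLEX, &
                    glob_arr, BUFSIZE, MPI_DOUBLE_COMPLEX, &
                    0, MPI_COMM_WORLD, ierr)

With the original n_procs * BUFSIZE, the root expected n_procs * BUFSIZE elements from every rank and wrote past the end of glob_arr, which is why the crash appeared as heap corruption (invalid pointer in free()) rather than an MPI error code.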