
OpenMPI Segmentation fault: address not mapped

During the development of my OpenMPI-based program I sometimes encounter a segmentation fault:

[11655] *** Process received signal ***
[11655] Signal: Segmentation fault (11)
[11655] Signal code: Address not mapped (1)
[11655] Failing at address: 0x10
[11655] [ 0] /usr/lib/libpthread.so.0(+0x11940)[0x7fe42b159940]
[11655] [ 1] /usr/lib/openmpi/openmpi/mca_btl_vader.so(mca_btl_vader_alloc+0xde)[0x7fe41e94717e]
[11655] [ 2] /usr/lib/openmpi/openmpi/mca_btl_vader.so(mca_btl_vader_sendi+0x22d)[0x7fe41e949c5d]
[11655] [ 3] /usr/lib/openmpi/openmpi/mca_pml_ob1.so(+0x806f)[0x7fe41e30806f]
[11655] [ 4] /usr/lib/openmpi/openmpi/mca_pml_ob1.so(mca_pml_ob1_send+0x3d9)[0x7fe41e308f29]
[11655] [ 5] /usr/lib/openmpi/libmpi.so.12(MPI_Send+0x11c)[0x7fe42b3df1cc]
[11655] [ 6] project[0x400e41]
[11655] [ 7] project[0x401429]
[11655] [ 8] project[0x400cdc]
[11655] [ 9] /usr/lib/libc.so.6(__libc_start_main+0xea)[0x7fe42adc343a]
[11655] [10] project[0x400b3a]
[11655] *** End of error message ***
[11670] *** Process received signal ***
[11670] Signal: Segmentation fault (11)
[11670] Signal code: Address not mapped (1)
[11670] Failing at address: 0x1ede1f0
[11670] [ 0] /usr/lib/libpthread.so.0(+0x11940)[0x7fc5f8c13940]
[11670] [ 1] /usr/lib/openmpi/openmpi/mca_btl_vader.so(mca_btl_vader_poll_handle_frag+0x14c)[0x7fc5ec458aac]
[11670] [ 2] /usr/lib/openmpi/openmpi/mca_btl_vader.so(+0x3c9e)[0x7fc5ec458c9e]
[11670] [ 3] /usr/lib/openmpi/libopen-pal.so.13(opal_progress+0x4a)[0x7fc5f836814a]
[11670] [ 4] /usr/lib/openmpi/openmpi/mca_pml_ob1.so(mca_pml_ob1_recv+0x255)[0x7fc5ebe171c5]
[11670] [ 5] /usr/lib/openmpi/libmpi.so.12(MPI_Recv+0x190)[0x7fc5f8e917d0]
[11670] [ 6] project[0x400d94]
[11670] [ 7] project[0x400e8a]
[11670] [ 8] /usr/lib/libpthread.so.0(+0x7297)[0x7fc5f8c09297]
[11670] [ 9] /usr/lib/libc.so.6(clone+0x3f)[0x7fc5f894a25f]

From these messages, I suppose there is some error in my usage of MPI_Send and the (corresponding?) MPI_Recv. I use wrappers like these:

void mpi_send(int *buf, int to, int tag) {
    int msg[2];
    msg[0] = l_clock++;   /* l_clock: global logical clock, defined elsewhere in the full code */
    msg[1] = *buf;        /* payload value */
    MPI_Send(msg, 2, MPI_INT, to, tag, MPI_COMM_WORLD);
}

int mpi_rcv(int *buf, int source, int tag, MPI_Status *status) {
    int msg[2];
    MPI_Recv(msg, 2, MPI_INT, source, tag, MPI_COMM_WORLD, status);
    int r_clock = msg[0];  /* sender's logical clock */
    *buf = msg[1];         /* payload value */

    if (r_clock > l_clock) {
        l_clock = r_clock + 1;
        return 1;
    }
    if (r_clock == l_clock) {
        /* tie on clocks: break it using the ranks (rank is a global set elsewhere) */
        return rank < status->MPI_SOURCE;
    }
    return 0;
}

Full code is hosted here.

I can't see the mistake I'm making here. Any help would be much appreciated.

EDIT: I've now noticed that the segfault sometimes mentions MPI_Barrier. This makes absolutely no sense to me. Does this mean that my OpenMPI implementation is at fault? I am using Manjaro Linux with openmpi installed from the arm extra repository.

There is a second thread in the stack trace, which hints at using MPI in a threaded program. A quick look at your full code confirms it. In order for MPI to be used in such scenarios, it has to be initialised properly by calling MPI_Init_thread() instead of MPI_Init(). If you'd like to make multiple MPI calls simultaneously from different threads, the threading level passed to MPI_Init_thread() should be MPI_THREAD_MULTIPLE:

int provided;
MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
if (provided < MPI_THREAD_MULTIPLE) {
   // Error - MPI does not provide needed threading level
}

Any threading level (as returned in provided) lower than MPI_THREAD_MULTIPLE won't work in your case.
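
For illustration, here is a minimal, self-contained sketch (not your actual code) of the pattern that requires this: the main thread calls MPI_Send while a worker thread sits in MPI_Recv. The rank arithmetic and tags are arbitrary; the point is the initialisation and the check on provided. Compile with something like mpicc -pthread:

/* Sketch: concurrent MPI calls from two threads, legal only under MPI_THREAD_MULTIPLE */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

static void *receiver(void *arg) {
    (void)arg;
    int msg[2];
    MPI_Status status;
    /* May run at the same time as MPI_Send in the main thread */
    MPI_Recv(msg, 2, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
    printf("received clock=%d value=%d from rank %d\n",
           msg[0], msg[1], status.MPI_SOURCE);
    return NULL;
}

int main(int argc, char **argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    pthread_t thread;
    pthread_create(&thread, NULL, receiver, NULL);

    /* Main thread sends to the next rank while the worker thread receives */
    int msg[2] = { 0, 42 };
    MPI_Send(msg, 2, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);

    pthread_join(thread, NULL);
    MPI_Finalize();
    return 0;
}

With a lower threading level (or plain MPI_Init), two threads entering the library at once is exactly the kind of thing that corrupts internal state and produces segfaults deep inside the vader BTL, as in your traces.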

Support for MPI_THREAD_MULTIPLE is a build-time option in Open MPI. Check that the Manjaro package was compiled accordingly. The one in Arch Linux is not:

$ ompi_info
...
     Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support: yes,
                            ^^^^^^^^^^^^^^^^^^^^^^^
                     OMPI progress: no, ORTE progress: yes, Event lib:
                     yes)
...

You might need to build Open MPI from source and enable support for MPI_THREAD_MULTIPLE.
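
For reference, on Open MPI releases of that era the feature is switched on at configure time; the build would look roughly like this (prefix and parallelism are illustrative, and newer releases enable the support by default):

$ ./configure --prefix=$HOME/opt/openmpi --enable-mpi-thread-multiple
$ make -j4 && make install

Afterwards, re-run ompi_info and check that the "Thread support" line now reports MPI_THREAD_MULTIPLE: yes.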
