
Linear search with MPI (stopping the loop in the other processes)

I'm trying to write a simple multi-process program to find a value in an array.

#include <mpi.h>
#include <stdio.h>

int* create_array(int num_items) {
    int* tmp = new int[num_items];
    for(int i = 0; i < num_items; i++)
        tmp[i] = i;

    return tmp;
}

int main() {

    int num_items = 1000;
    int item = 999;

    MPI_Init(NULL, NULL);
    int world_rank, world_size, num_items_per_proc;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    MPI_Request* inReq;

    int* array;
    if(world_rank == 0) {
        array = create_array(num_items);
        num_items_per_proc = (num_items / world_size) + 1;
    }

    int* sub_array = new int[num_items_per_proc];
    MPI_Scatter(array, num_items_per_proc, MPI_INT, sub_array,
                num_items_per_proc, MPI_INT, 0, MPI_COMM_WORLD);

    bool found = false;
    MPI_Irecv(&found, 1, MPI::BOOL, MPI_ANY_SOURCE, MPI_ANY_TAG,
              MPI_COMM_WORLD, inReq);

    for(int i = 0; i < num_items_per_proc && !found; i++) {
        if (sub_array[i] == item) {
            found = true;
            printf("Elemento %d trovato in posizione: %d\n", item, i);
            for(int j = 0; j < world_size; j++)
                if(j != world_rank)
                    MPI_Send(&found, 1, MPI::BOOL, j, j, MPI_COMM_WORLD);
        }
    }

    if(world_rank == 0) delete[] array;
    delete[] sub_array;

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();

    return 0;
}

I'm trying to stop all the loops when one of the processes finds the value in its portion of the array, but I get a segmentation fault from Irecv. How can I solve this?

The reason your code doesn't work is that you must supply an actual MPI_Request to MPI_Irecv, not just an uninitialized pointer:

MPI_Request inReq;
MPI_Irecv(&found, 1, MPI_CXX_BOOL, MPI_ANY_SOURCE, MPI_ANY_TAG,
          MPI_COMM_WORLD, &inReq);

The way you handle found is also wrong. You must not modify a variable that has been handed to an asynchronous request, and you cannot assume it is updated in the background: non-blocking messages are not one-sided remote memory operations. Instead, you have to call MPI_Test, and if the flag indicates a completed receive you can abort the loop. Make sure every request is eventually completed, including in the rank that found the result.
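A sketch of what that search loop could look like (untested fragment; sub_array, num_items_per_proc, item, world_rank, and world_size are as in the question, and the cleanup at the end is one possible way to complete an unmatched request):

```cpp
bool found = false;        // set locally when this rank finds the item
bool other_found = false;  // owned by the Irecv until the request completes
MPI_Request inReq;
MPI_Irecv(&other_found, 1, MPI_CXX_BOOL, MPI_ANY_SOURCE, MPI_ANY_TAG,
          MPI_COMM_WORLD, &inReq);

for (int i = 0; i < num_items_per_proc && !found; i++) {
    // Poll the request instead of reading the receive buffer directly.
    int flag = 0;
    MPI_Test(&inReq, &flag, MPI_STATUS_IGNORE);
    if (flag && other_found)
        break;  // another rank found the item
    if (sub_array[i] == item) {
        found = true;
        printf("Element %d found at position: %d\n", item, i);
        for (int j = 0; j < world_size; j++)
            if (j != world_rank)
                MPI_Send(&found, 1, MPI_CXX_BOOL, j, 0, MPI_COMM_WORLD);
    }
}

// Complete the request on every rank: if no message ever arrived
// (e.g. this rank was the one that found the item), cancel it.
int flag = 0;
MPI_Test(&inReq, &flag, MPI_STATUS_IGNORE);
if (!flag) {
    MPI_Cancel(&inReq);
    MPI_Wait(&inReq, MPI_STATUS_IGNORE);
}
```

Note that MPI_Test after the loop is safe even if the request already completed inside the loop, because a completed request is set to MPI_REQUEST_NULL and testing it simply returns flag = true.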

Further, num_items_per_proc must be valid on all ranks, both for allocating the memory and for specifying recvcount in MPI_Scatter. In your code it is only assigned on rank 0, so every other rank allocates sub_array with an uninitialized size. Since it depends only on num_items and world_size, which every rank knows, you can simply compute it outside the if (world_rank == 0) block (or broadcast it from rank 0).

The barrier before MPI_Finalize is redundant. Finally, the C++ bindings of MPI were removed in MPI-3, so use MPI_CXX_BOOL instead of MPI::BOOL.

You can find more sophisticated approaches to your problem in the answers to this question.
