
MPI Point to Point Communication to Collective Communication

I am learning MPI and I am trying to convert my MPI program from point-to-point communication to MPI collectives.

Below is a fragment of my code for matrix multiplication using MPI point-to-point communication:

    int i;
    if(rank == 0) {
        for(i = 1; i < size; i++){
            MPI_Send(&rows, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
            MPI_Send(&columns, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
        }
    } else {
        MPI_Recv(&rows, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Recv(&columns, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    }   

    int local_block_size = rows / size;
    int process, column_pivot;

    if(rank == 0) {
        for(i = 1; i < size; i++){
            MPI_Send((matrix_1D_mapped + (i * (local_block_size * rows))), (local_block_size * rows), MPI_DOUBLE, i, 0, MPI_COMM_WORLD);
            MPI_Send((rhs + (i * local_block_size)), local_block_size, MPI_DOUBLE, i, 0, MPI_COMM_WORLD);
        }
        for(i = 0; i < local_block_size * rows; i++){
            matrix_local_block[i] = matrix_1D_mapped[i];
        }
        for(i = 0; i < local_block_size; i++){
            rhs_local_block[i] = rhs[i];
        }
    } else {
        MPI_Recv(matrix_local_block, local_block_size * rows, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Recv(rhs_local_block, local_block_size, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
    }

I am thinking about replacing MPI_Send with MPI_Bcast. Would that be the correct approach?

For the first communication, the data sent to all receivers is in fact identical, so MPI_Bcast is the correct approach. The second communication distributes different chunks of a larger array to the recipients; this is done as a collective with MPI_Scatter. Note that the scatter includes the root rank in the communication, so you can omit the manual local copy.
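
A minimal sketch of what the collective version could look like, reusing the variable names from the question and assuming, as the original fragment does, that the buffers are already allocated on every rank and that rows is divisible by size:

    /* First communication: every rank needs the same two integers,
       so a broadcast from rank 0 replaces the send/receive loop. */
    MPI_Bcast(&rows, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Bcast(&columns, 1, MPI_INT, 0, MPI_COMM_WORLD);

    int local_block_size = rows / size;

    /* Second communication: each rank gets a different contiguous chunk,
       so a scatter from rank 0 replaces the sends, the receives and the
       manual copy loops on the root. */
    MPI_Scatter(matrix_1D_mapped, local_block_size * rows, MPI_DOUBLE,
                matrix_local_block, local_block_size * rows, MPI_DOUBLE,
                0, MPI_COMM_WORLD);
    MPI_Scatter(rhs, local_block_size, MPI_DOUBLE,
                rhs_local_block, local_block_size, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

Every rank calls MPI_Bcast and MPI_Scatter with the same arguments; the send buffers (matrix_1D_mapped and rhs) are only significant on rank 0 and may be left unallocated on the other ranks. If rows were not an exact multiple of size, MPI_Scatterv would be needed instead to handle the uneven block sizes.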
