
Matrix not received properly with MPI_Send and MPI_Recv

I am new to programming with MPI and I have an exercise where I have to multiply 2 matrices using MPI_Send and MPI_Recv, sending both matrices to my processes and sending the result back to the root process (both matrices are square, and N is equal to the number of processes).

This is the code I have written:

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char *argv[]){
srand(time(NULL));

int rank, nproc;
MPI_Status status;

MPI_Init(&argc, &argv);

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &nproc);

int **matrice = (int **)malloc(nproc * sizeof(int *));
for ( int i=0; i<nproc; i++)
    matrice[i] = (int *)malloc(nproc * sizeof(int));

int **matrice1 = (int **)malloc(nproc * sizeof(int *));
for (int i=0; i<nproc; i++)
    matrice1[i] = (int *)malloc(nproc * sizeof(int));

int **result = (int **)malloc(nproc * sizeof(int *));
for (int i=0; i<nproc; i++)
    result[i] = (int *)malloc(nproc * sizeof(int));

if(rank == 0){
    for(int i = 0; i < nproc; i++){
        for(int j = 0; j < nproc; j++){
            matrice[i][j] = (rand() % 20) + 1;
            matrice1[i][j] = (rand() % 20) + 1;
        }
    }
    
    for(int i = 1; i < nproc; i++){
        MPI_Send(&(matrice[0][0]), nproc*nproc, MPI_INT, i, 1, MPI_COMM_WORLD);
        MPI_Send(&(matrice1[0][0]), nproc*nproc, MPI_INT, i, 2, MPI_COMM_WORLD);
    }
    
}else{
    MPI_Recv(&(matrice[0][0]), nproc*nproc, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
    MPI_Recv(&(matrice1[0][0]), nproc*nproc, MPI_INT, 0, 2, MPI_COMM_WORLD, &status);
}
    
for(int i = 0; i < nproc; i++){
    result[rank][i] = 0;
    for(int j = 0; j < nproc; j++){
        result[rank][i] += matrice[rank][j] * matrice1[j][i];
    }
}

if(rank != 0){
    MPI_Send(&result[rank][0], nproc, MPI_INT, 0, 'p', MPI_COMM_WORLD);
}


if(rank == 0){
    for(int i = 1; i < nproc; i++){
        MPI_Recv(&result[i][0], nproc, MPI_INT, i, 'p', MPI_COMM_WORLD, &status);
    }
}

MPI_Finalize();

}

I am having problems with MPI_Send or MPI_Recv because only the first row of the matrix I receive is correct; the second row is filled with 0 and the others are random.

I don't understand what is causing this problem.

I am having problems with MPI_Send or MPI_Recv because only the first row of the matrix I receive is correct; the second row is filled with 0 and the others are random.

You are calling MPI_Send as follows:

MPI_Send(&(matrice[0][0]), nproc*nproc, MPI_INT, i, 1, MPI_COMM_WORLD);

so you are telling MPI that you will send nproc*nproc elements starting at the address &(matrice[0][0]). MPI_Send expects those nproc*nproc elements to be contiguous in memory. Therefore, your matrices should be allocated contiguously. You can think of the memory layout of such a matrix as:

| ------------ data used in the MPI_Send -----------|
|     row1          row2         ...      rowN      |
|[0, 1, 2, 3, N][0, 1, 2, 3, N]  ... [0, 1, 2, 3, N]|
\---------------------------------------------------/

From the last element of one row to the first element of the next row there is no gap.

Unfortunately, you have allocated your matrix as:

int **matrice = (int **)malloc(nproc * sizeof(int *));
for ( int i=0; i<nproc; i++)
    matrice[i] = (int *)malloc(nproc * sizeof(int));

which does not allocate the matrix contiguously in memory, but rather allocates an array of pointers whose row allocations are not forced to be contiguous. You can think of that matrix as having the following memory layout:

| ------------ data used in the MPI_Send ----------|
| row1 [0, 1, 2, 3, N] ... (some "random" stuff)   |
\--------------------------------------------------/
  row2 [0, 1, 2, 3, N] ... (some "random" stuff)
  ...
  rowN [0, 1, 2, 3, N] ... (some "random" stuff)

From the last element of one row to the first element of the next row there may be a memory gap, which makes it impossible for MPI_Send to know where the next row starts. That is why you receive the first row correctly, but not the remaining rows.
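A quick way to convince yourself of this (a hypothetical diagnostic, not part of the fix) is to print the byte distance between consecutive row allocations; with one malloc per row it is generally not nproc * sizeof(int). The snippet assumes the matrice allocation from your code plus an extra #include <stdint.h>:

/* Diagnostic sketch: measure how far apart consecutive rows really are.
 * If the matrix were contiguous, each gap would be exactly
 * nproc * sizeof(int) bytes. */
for (int i = 0; i + 1 < nproc; i++) {
    long long cur  = (long long)(uintptr_t)(void *)matrice[i];
    long long next = (long long)(uintptr_t)(void *)matrice[i + 1];
    printf("row %d -> row %d: %lld bytes apart (contiguous would be %lld)\n",
           i, i + 1, next - cur, (long long)nproc * (long long)sizeof(int));
}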

Among others, you can use the following approaches to solve that issue:

  1. allocate the matrix contiguously in memory;
  2. send the matrix row by row (a sketch of this follows below).
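For the second approach, note that each individually malloc'ed row is contiguous on its own, so you can keep your allocation and transfer the matrices one row at a time. A minimal sketch of the distribution step under that assumption (reusing tags 1 and 2 per row is safe because MPI delivers messages from the same sender with the same tag in order):

/* Approach 2 (sketch): send each row separately; every malloc'ed row
 * is contiguous even though the rows together are not. */
if (rank == 0) {
    for (int dest = 1; dest < nproc; dest++) {
        for (int r = 0; r < nproc; r++) {
            MPI_Send(matrice[r],  nproc, MPI_INT, dest, 1, MPI_COMM_WORLD);
            MPI_Send(matrice1[r], nproc, MPI_INT, dest, 2, MPI_COMM_WORLD);
        }
    }
} else {
    for (int r = 0; r < nproc; r++) {
        MPI_Recv(matrice[r],  nproc, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
        MPI_Recv(matrice1[r], nproc, MPI_INT, 0, 2, MPI_COMM_WORLD, &status);
    }
}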

The simplest (and performance-wise better) solution is the first approach; check this SO Thread to see how to dynamically allocate a contiguous block of memory for a 2D array.
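A minimal sketch of the first approach, using a hypothetical helper alloc_matrix that allocates one contiguous data block plus a table of row pointers into it, so the matrice[i][j] indexing in your code keeps working (error checking omitted for brevity):

/* Approach 1 (sketch): one contiguous block for all elements, plus an
 * array of row pointers into that block. */
int **alloc_matrix(int n) {
    int  *data = malloc((size_t)n * n * sizeof(int)); /* contiguous payload */
    int **rows = malloc((size_t)n * sizeof(int *));   /* row pointer table  */
    for (int i = 0; i < n; i++)
        rows[i] = data + (size_t)i * n;               /* row i starts at offset i*n */
    return rows;
}

/* With int **matrice = alloc_matrix(nproc);, the address
 * &(matrice[0][0]) now points at nproc*nproc consecutive ints, so the
 * original MPI_Send/MPI_Recv calls work unchanged. The same applies to
 * matrice1 and result. Free with free(matrice[0]); free(matrice);. */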
