
Send and receive an array in MPI

I am new to MPI and I am writing a simple MPI program to compute the product of a matrix and a vector, A * b = c. However, my code does not work. The source code is shown below.

If I replace the declarations of A, b, c, and buffer with

double A[16], b[4], c[4], buffer[8];

and comment out the lines related to allocation and deallocation, my code works and the result is correct. In that case, I suspect the problem has something to do with the pointers, but I don't know how to fix it.

One more thing: in my code, buffer only needs 4 elements, but the buffer size must be larger than 8, otherwise it does not work.

#include<mpi.h>
#include<iostream>
#include<stdlib.h>

using namespace std;

int nx = 4, ny = 4, nxny;
int ix, iy;
double *A = nullptr, *b = nullptr, *c = nullptr, *buffer = nullptr;
double ans;

// info MPI
int myGlobalID, root = 0, numProc;
int numSent;
MPI_Status status;

// functions
void get_ixiy(int);

int main(){

  MPI_Init(NULL, NULL);
  MPI_Comm_size(MPI_COMM_WORLD, &numProc);
  MPI_Comm_rank(MPI_COMM_WORLD, &myGlobalID);

  nxny = nx * ny;

  A = new double(nxny);
  b = new double(ny);
  c = new double(nx);
  buffer = new double(ny);

  if(myGlobalID == root){
    // init A, b
    for(int k = 0; k < nxny; ++k){
      get_ixiy(k);
      b[iy] = 1;
      A[k] = k;
    }
    numSent = 0;

    // send b to each worker processor
    MPI_Bcast(&b, ny, MPI_DOUBLE, root, MPI_COMM_WORLD);

    // send a row of A to each worker processor, tag with row number
    for(ix = 0; ix < min(numProc - 1, nx); ++ix){
      for(iy = 0; iy < ny; ++iy){
        buffer[iy] = A[iy + ix * ny];
      }
      MPI_Send(&buffer, ny, MPI_DOUBLE, ix+1, ix+1, MPI_COMM_WORLD);
      numSent += 1;
    }

    for(ix = 0; ix < nx; ++ix){
      MPI_Recv(&ans, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
      int sender = status.MPI_SOURCE;
      int ansType = status.MPI_TAG;
      c[ansType] = ans;

      // send another row to worker process
      if(numSent < nx){
        for(iy = 0; iy < ny; ++iy){
          buffer[iy] = A[iy + numSent * ny];
        }
        MPI_Send(&buffer, ny, MPI_DOUBLE, sender, numSent+1, 
        MPI_COMM_WORLD);
        numSent += 1;
      }
      else
        MPI_Send(MPI_BOTTOM, 0, MPI_DOUBLE, sender, 0, MPI_COMM_WORLD);
    }

    for(ix = 0; ix < nx; ++ix){
      std::cout << c[ix] << " ";
    }
    std::cout << std::endl;

    delete [] A;
    delete [] b;
    delete [] c;
    delete [] buffer;
  }
  else{
    MPI_Bcast(&b, ny, MPI_DOUBLE, root, MPI_COMM_WORLD);
      if(myGlobalID <= nx){
        while(1){
          MPI_Recv(&buffer, ny, MPI_DOUBLE, root, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
          if(status.MPI_TAG == 0) break;
          int row = status.MPI_TAG - 1;
          ans = 0.0;

          for(iy = 0; iy < ny; ++iy) ans += buffer[iy] * b[iy];

          MPI_Send(&ans, 1, MPI_DOUBLE, root, row, MPI_COMM_WORLD);
      }
    }
  }

  MPI_Finalize();
  return 0;
} // main

void get_ixiy(int k){
  ix = k / ny;
  iy = k % ny;
}

The error message is shown below.

=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 7455 RUNNING AT ***
=   EXIT CODE: 11
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES

YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault: 
11 (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions

There are several problems in your code that you have to fix first.

First, you are trying to access a non-existent element of b[] in this for loop:

for(int k = 0; k < nxny; ++k){
  get_ixiy(k);
  b[k] = 1;     // WARNING: this is an error
  A[k] = k;
}

Second, you delete the allocated memory only in the root process. This causes a memory leak:

if(myGlobalID == root){
  // ...
  delete [] A;
  delete [] b;
  delete [] c;
  delete [] buffer;
}

You must delete the allocated memory in all processes.

Third, you have a useless function void get_ixiy(int) that changes the global variables ix and iy. It is useless because, after calling it, you never use ix and iy before overwriting them manually. See here:

for(ix = 0; ix < min(numProc - 1, nx); ++ix){
    for(iy = 0; iy < ny; ++iy){
        // ...
    }
}

Fourth, you are using MPI_Send() and MPI_Recv() in a completely wrong way. You are lucky you did not get more errors.
