
Getting values in spawn process

I am trying to get values in the spawned processes using collective MPI functions.

In this case, I have an N*N matrix and I want to pass each row to a separate process, get the values in each process, and sum them.

I am using this example:

MPI_Scatter of 2D array and malloc

Main

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 3   // matrix dimension; N children are spawned (see UPDATE below)

int main(int argc, char *argv[]){
  int *n, range, i, j, dato, resultado;
  int *matriz;
  char *nombre_esclave = "esclavo";

  // MPI section
  int rank, size;
  MPI_Comm hijos;
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  matriz = createMatrix(N, N);
  printArray(matriz, N * N);

  // Child processes
  MPI_Comm_spawn("slave", MPI_ARGV_NULL, N, MPI_INFO_NULL, 0, MPI_COMM_SELF, &hijos, MPI_ERRCODES_IGNORE);

  // received row will contain N integers
  int *procRow = malloc(sizeof(int) * N);

  MPI_Scatter(matriz, N, MPI_INT, // send one row, which contains N integers
              procRow, N, MPI_INT, // receive one row, which contains N integers
              MPI_ROOT, hijos);

  MPI_Finalize();
  return 0;
}
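The question does not show createMatrix or printArray. For MPI_Scatter to slice the matrix row by row, it must live in one contiguous block (the point of the linked example above). A minimal sketch of the two helpers, placed above main and reusing its includes; the row-major layout and the 1..9 fill (which matches the figure and the sums quoted in the answer) are assumptions:

int *createMatrix(int rows, int cols) {
    // One contiguous block, so row i starts at m[i * cols]
    int *m = malloc(sizeof(int) * rows * cols);
    for (int i = 0; i < rows * cols; i++)
        m[i] = i + 1;   // fills 1..9 for a 3x3 matrix, as in the figure
    return m;
}

void printArray(int *a, int len) {
    for (int i = 0; i < len; i++)
        printf("%d ", a[i]);
    printf("\n");
}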

And the slave:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 3

int main(int argc, char *argv[]) {
    int pid, size;
    int resultado_global;
    MPI_Comm parent;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &pid);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Comm_get_parent(&parent);

    if (parent != MPI_COMM_NULL) {
        printf("This is a child process\n");
    }

    // number of processes in the remote group of comm (integer)
    MPI_Comm_remote_size(parent, &size);

    int *procRow = malloc(sizeof(int) * N);

    // UNABLE TO GET VALUES FROM THE PARENT
    // I need to sum all the values in every portion of the matrix
    // passed to every child process
    MPI_Reduce(procRow, &resultado_global, N, MPI_INT, MPI_SUM, 0, parent);

    MPI_Finalize();
    return 0;
}
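Note that this child code never posts the matching receive for the master's scatter, which is why no values arrive. Each child needs its own MPI_Scatter call on the parent intercommunicator, passing the root's rank in the parent group (0) rather than MPI_ROOT; this is essentially what UPDATE 2 below arrives at. A minimal sketch, reusing procRow, N and parent from the listing above:

// Matching call for the parent's MPI_Scatter: the send arguments are
// ignored on the receiving (child) side of an intercommunicator.
MPI_Scatter(NULL, 0, MPI_INT,
            procRow, N, MPI_INT,  // each child receives one row of N integers
            0, parent);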

UPDATE

[Figure: a 3x3 matrix scattered row by row to three child processes, with the per-row sums 6, 15, 24.]

Using MPI_Comm_spawn I create 3 children. In each child I want to get one row of the matrix (in the master I use scatter). Later I sum each row in the children with MPI_Reduce (that is why I say getting values).

UPDATE 2

On the slave I modified the code, and I now get the row in each process.

if (parent != MPI_COMM_NULL) {

    // number of processes in the remote group of comm (integer)
    MPI_Comm_remote_size(parent, &size_remote);

    int *matrix = malloc(sizeof(int) * size);
    int *procRow = malloc(sizeof(int) * size);

    MPI_Scatter(matrix, N, MPI_INT, procRow, N, MPI_INT, 0, parent);

    // procRow correctly receives the values of each row of the matrix
    if (procRow != NULL) {
        printf("Process %d; %d %d %d \n", pid, procRow[0], procRow[1], procRow[2]);
    }

    // Unable to sum each row
    MPI_Reduce(procRow, &resultado_global, size, MPI_INT, MPI_SUM, ROOT, parent);
    //MPI_Reduce(procRow, &resultado_global, size, MPI_INT, MPI_SUM, ROOT, MPI_COMM_WORLD);
}

UPDATE 3 (SOLVED)

On the slave:

if (parent != MPI_COMM_NULL) {

    // number of processes in the remote group of comm (integer)
    MPI_Comm_remote_size(parent, &size_remote);

    int *matrix = malloc(sizeof(int) * size);
    int *procRow = malloc(sizeof(int) * size);

    MPI_Scatter(matrix, N, MPI_INT, procRow, N, MPI_INT, 0, parent);

    if (procRow != NULL) {
        printf("Process %d; %d %d %d \n", pid, procRow[0], procRow[1], procRow[2]);
        sumaParcial = 0;
        for (int i = 0; i < N; i++)
            sumaParcial = sumaParcial + procRow[i];
    }

    MPI_Reduce(&sumaParcial, &resultado_global, 1, MPI_INT, MPI_SUM, ROOT, parent);
}

Master:

  // received row will contain N integers
  int *procRow = malloc(sizeof(int) * N); 

  MPI_Scatter(matriz, N, MPI_INT, // send one row, which contains N integers
              procRow, N, MPI_INT, // receive one row, which contains N integers
              MPI_ROOT, hijos);


  MPI_Reduce(&sumaParcial, &resultado_global, 1, MPI_INT, MPI_SUM, MPI_ROOT, hijos);

  printf("\n GLOBAL RESULT :%d\n",resultado_global);

Any ideas? Thanks.

From your edit, I take it that the scattering works correctly.

Your main confusion seems to be about MPI_Reduce. It does not perform any local reduction. According to your figure, you want the slaves on ranks 0, 1, 2 to end up with the values 6, 15, 24 for their rows. That is done entirely without MPI, just by looping over the local row.

An MPI_Reduce across the rows instead leaves the root with [12, 15, 18], the element-wise column sums. If you just want the total sum 45, you should first add up the values locally and then use MPI_Reduce to combine the single value from each rank into a single global value.
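A minimal sketch of the difference, written as a standalone program on MPI_COMM_WORLD rather than the parent intercommunicator (run with mpirun -np 3):

#include <stdio.h>
#include <mpi.h>

#define N 3

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Rank r holds row r of the 3x3 matrix filled with 1..9.
    int row[N];
    for (int i = 0; i < N; i++)
        row[i] = rank * N + i + 1;

    // Element-wise reduce over the rows: root gets the column sums [12, 15, 18].
    int cols[N];
    MPI_Reduce(row, cols, N, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    // Local sum first (6, 15 or 24 depending on the rank), then a scalar
    // reduce: root gets the single global total 45.
    int local = 0, total = 0;
    for (int i = 0; i < N; i++)
        local += row[i];
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("columns: %d %d %d, total: %d\n", cols[0], cols[1], cols[2], total);

    MPI_Finalize();
    return 0;
}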

