
MPI_Send or MPI_Recv is giving segmentation fault

I'm trying to calculate pi using the MPI C library on a hypercube topology, but the execution doesn't proceed past the MPI_Send and MPI_Recv part.

I'm using 4 processors!

It seems like none of the processors are receiving any data.

Here's the code, the output, and the error I'm getting.

Any help would be appreciated! Thanks!

Code: after the initializations and calculating the local mypi at each processor.

    mypi = h * sum;
    printf("Processor %d has local pi = %f", myid, mypi);
    //Logic for send and receive!
    int k;
    for(k = 0; k < log10(numprocs) / log10(2.0); k++){
      printf("entering dimension %d \n", dimension);
      dimension = k;
      if(decimalRank[k] == 1 && k < e){
        //if it is a processor that need to send then
        int destination = 0;
        //find destination processor and send
        destination = myid ^ (int)pow(2,dimension);
        printf("Processor %d sending to %d in dimension %d the value %f\n", myid, destination, dimension, mypi);

        MPI_SEND(&mypi, 1, MPI_DOUBLE, destination, MPI_ANY_TAG, MPI_COMM_WORLD);
        printf("Processor %d done sending to %d in dimension %d the value %f\n", myid, destination, dimension, mypi);
      }
      else{
        //Else this processor is supposed to be receiving
        pi += mypi;
        printf("Processor %d ready to receive in dimension %d\n", myid, dimension);
        MPI_RECV(&mypi, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD);
        printf("Processor %d received value %d in dimension %d\n", myid, pi, dimension);
        pi += mypi;
      }
    }

    done = 1;
  }

Error:

mpiexec: Warning: tasks 0-3 died with signal 11 (Segmentation fault).

Output:

bcast complete
Processor 0 has local pi = 0.785473
Processor 0 ready to receive in dimension 0
Processor 1 has local pi = 0.785423
Processor 1 sending to 0 in dimension 0 the value 0.785423
Processor 3 has local pi = 0.785323
Processor 3 sending to 2 in dimension 0 the value 0.785323
Processor 2 has local pi = 0.785373
Processor 2 ready to receive in dimension 0

MPI_ANY_TAG is not a valid tag value in send operations. It can only be used as a wildcard tag in receive operations, in order to receive messages no matter what their tag value is. The sender must specify a valid tag value - 0 suffices in most cases.
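As a minimal sketch of that fix (the tag name PI_TAG and its value 0 are my own choice, not from the question), the pair of calls could look like this; note that MPI_Recv also takes a status argument as its seventh parameter, and MPI_STATUS_IGNORE can be passed when the status is never inspected:

    /* Sketch only: PI_TAG (value 0) is an assumed tag name/value. */
    #define PI_TAG 0

    /* Sender: a concrete tag instead of MPI_ANY_TAG */
    MPI_Send(&mypi, 1, MPI_DOUBLE, destination, PI_TAG, MPI_COMM_WORLD);

    /* Receiver: wildcards are allowed here; MPI_STATUS_IGNORE because
       the status is not used afterwards */
    MPI_Recv(&mypi, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);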

This:

for(k = 0; k < log10(numprocs) / log10(2.0); k++) ...

and this:

... pow(2,dimension);

are bad: you must use integer logic only. Sooner or later one of those floating-point expressions will evaluate to something like "2.999999", get truncated to "2", and break your algorithm.

I'd try something like:

for(k = 0, k2 = 1; k2 < numprocs; k++, k2 <<= 1) ...
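Putting both points together, a sketch of the whole reduction loop could look like the following. It assumes numprocs is a power of two, as in the question's hypercube; the bit test myid & k2 replaces the question's decimalRank check and is an assumption about the intended logic, and PI_TAG, partner and recvd are names introduced for the sketch:

    int k, k2;
    for (k = 0, k2 = 1; k2 < numprocs; k++, k2 <<= 1) {
        int partner = myid ^ k2;          /* neighbour along dimension k */
        if (myid & k2) {
            /* upper half of the pair: send the partial sum, then drop out */
            MPI_Send(&mypi, 1, MPI_DOUBLE, partner, PI_TAG, MPI_COMM_WORLD);
            break;
        } else {
            /* lower half: receive the partner's partial sum and accumulate */
            double recvd;
            MPI_Recv(&recvd, 1, MPI_DOUBLE, partner, PI_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            mypi += recvd;
        }
    }
    /* after the loop, rank 0 holds the complete sum in mypi */

Every loop bound and destination mask stays in integer arithmetic, so there is no rounding left to go wrong.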
