
Limits with MPI_Send or MPI_Recv?

Do we have any limits on message size with MPI_Send or MPI_Recv, or limits imposed by the computer? When I try to send large data, it does not complete. This is my code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <math.h>
#include <string.h>

void AllGather_ring(void* data, int count, MPI_Datatype datatype,MPI_Comm communicator)
{
  int me;
  MPI_Comm_rank(communicator, &me);
  int world_size;
  MPI_Comm_size(communicator, &world_size);
  int next=me+1;
  if(next>=world_size)
      next=0;
  int prev=me-1;
  if(prev<0)
      prev=world_size-1;
  int i,curi=me;
  for(i=0;i<world_size-1;i++)
  {
     MPI_Send(data+curi*sizeof(int)*count, count, datatype, next, 0, communicator);
     curi=curi-1;
     if(curi<0)
         curi=world_size-1;
     MPI_Recv(data+curi*sizeof(int)*count, count, datatype, prev, 0, communicator, MPI_STATUS_IGNORE);
  }
}


void test(void* buff,int world_size,int count)
{
    MPI_Barrier(MPI_COMM_WORLD);
    AllGather_ring(buff,count,MPI_INT,MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
}
void main(int argc, char* argv[]) {
    int count = 20000;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc,&argv);
    int world_rank,world_size,namelen;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int* buff=(int*) malloc(world_size*sizeof(int)*count);
      int i;
      for (i = 0; i < world_size; i++) {
          buff[i]=world_rank;
      }
    test(buff,world_size,count);
    MPI_Finalize();
}

It stops when I try to run it with a buffer of about 80000 bytes (40000 integers), i.e. count = 20000 with 4 processes.

Your code is incorrect. You are posting the receives only after the respective sends have completed. MPI_Send is only guaranteed to complete after a corresponding MPI_Recv has been posted, so you run into a classic deadlock.

It happens to work for small messages, because they are handled differently (using an unexpected-message buffer as a performance optimization). In that case MPI_Send is allowed to complete before the MPI_Recv is posted.
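A simple way to check whether a program only runs because of this buffering is to temporarily replace MPI_Send with MPI_Ssend (a synchronous send with the same argument list): a synchronous send never completes before the matching receive has been posted, so code that relies on the eager path then deadlocks even for small counts. A minimal sketch of that substitution in the loop above:

/* Debugging aid: MPI_Ssend only completes once the matching receive has
   started, so it exposes the deadlock even for small message sizes. */
MPI_Ssend(data+curi*sizeof(int)*count, count, datatype, next, 0, communicator);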

To avoid the deadlock, you can:

  • Post nonblocking sends and receives (MPI_Isend, MPI_Irecv).
  • Use MPI_Sendrecv (see the sketch after this list).
  • Use MPI_Allgather.
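For reference, a minimal sketch of the loop from the question rewritten with MPI_Sendrecv, keeping the original variable names and buffer layout; the combined call lets MPI progress the send and the receive together, so the ring cannot deadlock:

  int i, curi = me, recvi;
  for (i = 0; i < world_size - 1; i++)
  {
     recvi = curi - 1;
     if (recvi < 0)
         recvi = world_size - 1;
     /* Forward my current block to the next rank while receiving the
        previous rank's block into the slot where it belongs. */
     MPI_Sendrecv(data+curi*sizeof(int)*count, count, datatype, next, 0,
                  data+recvi*sizeof(int)*count, count, datatype, prev, 0,
                  communicator, MPI_STATUS_IGNORE);
     curi = recvi;
  }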

I recommend the last option, MPI_Allgather.
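With the collective, the whole hand-written ring collapses into a single call. A minimal sketch, assuming (as the ring version does) that each rank's own block already sits at offset world_rank*count in buff:

/* Gather every rank's block of `count` ints on all ranks. MPI_IN_PLACE
   tells MPI that this rank's contribution is already stored in its slot
   of buff, so no separate send buffer is needed. */
MPI_Allgather(MPI_IN_PLACE, count, MPI_INT,
              buff, count, MPI_INT, MPI_COMM_WORLD);

Besides removing the deadlock, the library implementation typically selects an algorithm (ring, recursive doubling, Bruck, ...) tuned for the message size and the network.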
