Other processes hang after MPI_Sendrecv
I think using MPI_Sendrecv:

MPI_Sendrecv(&ballPos, 2, MPI_INT, FIELD, NEW_BALL_POS_TAG, &ballPos, 2, MPI_INT, winner, NEW_BALL_POS_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
But I noticed that only the root (the receiving party?) continues to run. Placing cout before and after the MPI_Sendrecv produces:
0 b4 sendrecv
2 b4 sendrecv
4 b4 sendrecv
1 b4 sendrecv
3 b4 sendrecv
5 b4 sendrecv
0 after sendrecv
All processes are OK before the sendrecv, but only root unblocks afterwards.
Full source: see line 147
UPDATE
The result should be something similar to the following:
if (rank == winner) {
    ballPos[0] = rand() % 128;
    ballPos[1] = rand() % 64;
    cout << "new ball pos: " << ballPos[0] << " " << ballPos[1] << endl;
    MPI_Send(&ballPos, 2, MPI_INT, FIELD, NEW_BALL_POS_TAG, MPI_COMM_WORLD);
} else if (rank == FIELD) {
    MPI_Recv(&ballPos, 2, MPI_INT, winner, NEW_BALL_POS_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
The number of sends posted should be equal to the number of receives posted. In your case all ranks are sending to rank FIELD and receiving from rank winner, including FIELD and winner themselves:
Rank        Sends to   Receives from
-------------------------------------
0 (FIELD)   FIELD      winner
1           FIELD      winner
2           FIELD      winner
...         ...        ...
winner      FIELD      winner
...         ...        ...
numprocs-1  FIELD      winner
(such tables can be very useful sometimes)
Hence FIELD should receive numprocs messages, but it only executes MPI_Sendrecv once, and therefore numprocs-1 of the calls to MPI_Sendrecv will not be able to complete their sends. The same goes for winner: it should send numprocs messages, but as it only executes MPI_Sendrecv once, only one message is sent, and hence numprocs-1 calls to MPI_Sendrecv will not be able to complete their receives.
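To make the counting rule concrete, here is a hypothetical sketch (not necessarily your intended logic; it reuses your names rank, numprocs, FIELD, NEW_BALL_POS_TAG and ballPos) of what would be needed if every rank really were supposed to send to FIELD: FIELD must post one receive per sender, or the unmatched senders block forever.

```cpp
// Hypothetical sketch: every non-FIELD rank posts exactly one send,
// so FIELD must post numprocs-1 matching receives.
if (rank == FIELD) {
    for (int i = 0; i < numprocs - 1; ++i)
        MPI_Recv(ballPos, 2, MPI_INT, MPI_ANY_SOURCE, NEW_BALL_POS_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
} else {
    MPI_Send(ballPos, 2, MPI_INT, FIELD, NEW_BALL_POS_TAG, MPI_COMM_WORLD);
}
```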
There is also another error. The MPI standard requires that the send and the receive buffers be disjoint (i.e. they should not overlap), which is not the case with your code: your send and receive buffers not only overlap, they are one and the same buffer. If you want to perform the swap in the same buffer, MPI provides the MPI_Sendrecv_replace operation.
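As a minimal self-contained sketch of MPI_Sendrecv_replace (the data values and the assumption of exactly two ranks are mine, chosen just for illustration), each rank sends its buffer to its partner and the same buffer is overwritten with the partner's data:

```cpp
#include <mpi.h>
#include <iostream>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int ballPos[2] = { rank * 10, rank * 10 + 1 }; // distinct data per rank
    int partner = 1 - rank;                        // assumes exactly 2 ranks

    // Send ballPos to partner and overwrite it in place with the
    // partner's copy; a single buffer is legal here.
    MPI_Sendrecv_replace(ballPos, 2, MPI_INT, partner, 0, partner, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    std::cout << "rank " << rank << " now has "
              << ballPos[0] << " " << ballPos[1] << std::endl;
    MPI_Finalize();
    return 0;
}
```

Run with mpirun -np 2 to see the two ranks swap their arrays.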
I am not sure what you are trying to achieve with this MPI_Sendrecv statement, but I strongly suspect that you need to put it inside an if statement.
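If you really do want a two-way exchange between FIELD and winner (rather than the one-way Send/Recv pair from the update), a guarded MPI_Sendrecv might look like the sketch below. It reuses the question's names (rank, FIELD, winner, NEW_BALL_POS_TAG, ballPos) and fixes both problems: only the two ranks involved participate, and the receive goes into a separate buffer so the send and receive buffers are disjoint.

```cpp
if (rank == FIELD || rank == winner) {
    int other = (rank == FIELD) ? winner : FIELD;
    int newPos[2]; // disjoint receive buffer, as the standard requires

    MPI_Sendrecv(ballPos, 2, MPI_INT, other, NEW_BALL_POS_TAG,
                 newPos, 2, MPI_INT, other, NEW_BALL_POS_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    ballPos[0] = newPos[0];
    ballPos[1] = newPos[1];
}
```

Note that this also behaves correctly when FIELD == winner, since MPI_Sendrecv matches the self-send with the self-receive.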