
Checking that all ranks are ready without using mpi4py gather and scatter

I am trying to communicate between processes so that each process is notified once all of the other processes are ready. The snippet below does this. Is there a more elegant way to do it?

def get_all_ready_status(ready_batch):
    all_ready = all(ready_batch)
    return [all_ready for _ in ready_batch]

ready_batch = comm.gather(ready_agent, root=0)
all_ready_batch = None  # scatter only reads this on the root, but the name must exist on every rank
if rank == 0:
    all_ready_batch = get_all_ready_status(ready_batch)
all_ready_flag = comm.scatter(all_ready_batch, root=0)

If all processes need to know which other processes are ready, then you can use the comm.Allgather routine:

from mpi4py import MPI
import numpy


comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendBuffer = numpy.ones(1, dtype=bool)
recvBuffer = numpy.zeros(size, dtype=bool)

print("Before Allgather => Process %s | sendBuffer %s | recvBuffer %s" % (rank, sendBuffer, recvBuffer))
comm.Allgather([sendBuffer, MPI.BOOL], [recvBuffer, MPI.BOOL])
print("After Allgather  => Process %s | sendBuffer %s | recvBuffer %s" % (rank, sendBuffer, recvBuffer))

Output (run with 2 processes):

Before Allgather => Process 0 | sendBuffer [ True] | recvBuffer [False False]
Before Allgather => Process 1 | sendBuffer [ True] | recvBuffer [False False]
After Allgather  => Process 0 | sendBuffer [ True] | recvBuffer [ True  True]
After Allgather  => Process 1 | sendBuffer [ True] | recvBuffer [ True  True]

As @Gilles Gouaillardet pointed out in the comments:

If all processes just need to know whether all of them are ready, then MPI_Allreduce() is more appropriate.

The idea is that, in theory, Allreduce should be faster than Allgather, because the former can use a tree-based communication pattern, and because it needs to allocate and communicate less memory.
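To make the scaling argument concrete, here is a back-of-the-envelope sketch (pure Python, no MPI) under a deliberately simple cost model: a naive gather-to-root receives one message per non-root rank, while a binary-tree reduction only needs one round per tree level. Real MPI implementations pick algorithms dynamically, so treat these counts as an intuition aid, not a performance guarantee.

```python
import math

def gather_rounds(p):
    # Naive gather-to-root: the root receives one message from each other rank.
    return p - 1

def tree_reduce_rounds(p):
    # Binary-tree reduction: one communication round per level of the tree.
    return math.ceil(math.log2(p)) if p > 1 else 0

for p in (2, 8, 64, 1024):
    print("P=%4d | gather rounds: %4d | tree-reduce rounds: %2d"
          % (p, gather_rounds(p), tree_reduce_rounds(p)))
```

At 1024 ranks the linear scheme takes 1023 steps at the root versus about 10 levels for the tree, which is where the theoretical advantage comes from.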

In your case, you would use MPI.LAND (i.e., logical AND) as the Allreduce operation operator.

An example:

from mpi4py import MPI
import numpy


comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendBuffer = numpy.ones(1, dtype=bool) if rank % 2 == 0 else numpy.zeros(1, dtype=bool)
recvBuffer = numpy.zeros(1, dtype=bool)

print("Before Allreduce => Process %s | sendBuffer %s | recvBuffer %s" % (rank, sendBuffer, recvBuffer))
comm.Allreduce([sendBuffer, MPI.BOOL], [recvBuffer, MPI.BOOL], MPI.LAND)
print("After Allreduce  => Process %s | sendBuffer %s | recvBuffer %s" % (rank, sendBuffer, recvBuffer))

comm.Barrier()
if rank == 0:
   print("Second RUN")
comm.Barrier()

sendBuffer = numpy.ones(1, dtype=bool)
recvBuffer = numpy.zeros(1, dtype=bool)

print("Before Allreduce => Process %s | sendBuffer %s | recvBuffer %s" % (rank, sendBuffer, recvBuffer))
comm.Allreduce([sendBuffer, MPI.BOOL], [recvBuffer, MPI.BOOL], MPI.LAND)
print("After Allreduce  => Process %s | sendBuffer %s | recvBuffer %s" % (rank, sendBuffer, recvBuffer))

Output (run with 2 processes):

Before Allreduce => Process 1 | sendBuffer [False] | recvBuffer [False]
Before Allreduce => Process 0 | sendBuffer [ True] | recvBuffer [False]
After Allreduce  => Process 1 | sendBuffer [False] | recvBuffer [False]
After Allreduce  => Process 0 | sendBuffer [ True] | recvBuffer [False]
Second RUN
Before Allreduce => Process 0 | sendBuffer [ True] | recvBuffer [False]
Before Allreduce => Process 1 | sendBuffer [ True] | recvBuffer [False]
After Allreduce  => Process 0 | sendBuffer [ True] | recvBuffer [ True]
After Allreduce  => Process 1 | sendBuffer [ True] | recvBuffer [ True]

In the first part of the output (before "Second RUN"), the result is False because the processes with odd ranks were not ready (i.e., False), while the even-ranked processes were ready. Hence, True & False => False. In the second part, the result is True because all processes are ready.
