
MPI (Summation)

I am writing a program that calculates the sum of the numbers up to 1000, e.g. 1 + 2 + 3 + 4 + 5 + ... + 1000. First, I assign the summation jobs to 10 processors: processor 0 gets 1-100, processor 1 gets 101-200, and so on. The sums are stored in an array.

After all the partial sums have been computed in parallel, the processes send their values to processor 0 (processor 0 receives them with nonblocking send/receive), and processor 0 adds up all the values and displays the result.

Here is the code:

#include <mpi.h>
#include <iostream>

using namespace std;

int summation(int, int);

int main(int argc, char ** argv)
{
    int * array;
    int total_proc;
    int curr_proc;
    int limit = 0;
    int partial_sum = 0;
    int upperlimit = 0, lowerlimit = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &total_proc);
    MPI_Comm_rank(MPI_COMM_WORLD, &curr_proc);
    MPI_Request send_request, recv_request;

    /* checking if 1000 is divisible by number of procs, else quit */
    if(1000 % total_proc != 0)
    {
        MPI_Finalize();
        if(curr_proc == 0)
            cout << "**** 1000 is not divisible by " << total_proc << " ...quitting..."<< endl;
        return 0;
    }

    /* number of partial summations */
    limit = 1000/total_proc;

    array = new int [total_proc];

    /* assigning jobs to processors */
    for(int i = 0; i < total_proc; i++)
    {
        if(curr_proc == i)
        {
            upperlimit = upperlimit + limit;
            lowerlimit = (upperlimit - limit) + 1;
            partial_sum = summation(upperlimit, lowerlimit);
            array[i] = partial_sum;
        }
        else
        {
            upperlimit = upperlimit + limit;
            lowerlimit = (upperlimit - limit) + 1;
        }
    }

    cout << "** Partial Sum From Process " << curr_proc << " is " << array[curr_proc] << endl;

    /* send and receive - non blocking */
    for(int i = 1; i < total_proc; i++)
    {
        if(curr_proc == i) /* (i = current processor) */
        {
            MPI_Isend(&array[i], 1, MPI_INT, 0, i, MPI_COMM_WORLD, &send_request);
            cout << "-> Process " << i << " sent " << array[i] << " to Process 0" << endl;

            MPI_Irecv(&array[i], 1, MPI_INT, i, i, MPI_COMM_WORLD, &recv_request);
            //cout << "<- Process 0 received " << array[i] << " from Process " << i << endl;
        }
    }

    MPI_Finalize();

    if(curr_proc == 0)
    {
        for(int i = 1; i < total_proc; i++)
            array[0] = array[0] + array[i];
        cout << "Sum is " << array[0] << endl;
    }

    return 0;
}

int summation(int u, int l)
{
    int result = 0; 
    for(int i = l; i <= u; i++)
        result = result + i;
    return result;
}

Output:

** Partial Sum From Process 0 is 5050
** Partial Sum From Process 3 is 35050
-> Process 3 sent 35050 to Process 0
<- Process 0 received 35050 from Process 3
** Partial Sum From Process 4 is 45050
-> Process 4 sent 45050 to Process 0
<- Process 0 received 45050 from Process 4
** Partial Sum From Process 5 is 55050
-> Process 5 sent 55050 to Process 0
<- Process 0 received 55050 from Process 5
** Partial Sum From Process 6 is 65050
** Partial Sum From Process 8 is 85050
-> Process 8 sent 85050 to Process 0
<- Process 0 received 85050 from Process 8
-> Process 6 sent 65050 to Process 0
** Partial Sum From Process 1 is 15050
** Partial Sum From Process 2 is 25050
-> Process 2 sent 25050 to Process 0
<- Process 0 received 25050 from Process 2
<- Process 0 received 65050 from Process 6
** Partial Sum From Process 7 is 75050
-> Process 1 sent 15050 to Process 0
<- Process 0 received 15050 from Process 1
-> Process 7 sent 75050 to Process 0
<- Process 0 received 75050 from Process 7
** Partial Sum From Process 9 is 95050
-> Process 9 sent 95050 to Process 0
<- Process 0 received 95050 from Process 9
Sum is -1544080023

Printing the contents of the array:

5050
536870912
-1579286148
-268433415
501219332
32666
501222192
32666
1
0

I would like to know what is causing this.

If I print the array before calling MPI_Finalize, it works fine.

The most important flaw in the program is how the work is divided. In MPI, every process executes the main function, so if you want all the processes to collaborate on building the result, you have to make sure all of them execute your summation function.

You don't need the for loop. Every process executes the body on its own; they simply have different curr_proc values, and from that value you can compute which part of the work each one has to perform:

/* assigning jobs to processors */
int chunk_size = 1000 / total_proc;
lowerlimit = curr_proc * chunk_size + 1;      /* +1 because summation(u, l) sums l..u inclusive */
upperlimit = (curr_proc + 1) * chunk_size;
partial_sum = summation(upperlimit, lowerlimit);

Then, the way the master process receives the partial sums of all the other processes is not correct:

  • MPI rank values (curr_proc) start at 0 and go up to one less than the value returned by MPI_Comm_size (total_proc - 1).
  • In your send/receive loop only the process with rank i executes anything: each rank i sends its value to process 0 and then posts a receive from itself, while process 0 never posts a matching receive.
  • You are using the immediate (nonblocking) versions of send and receive, MPI_Isend and MPI_Irecv, but you never wait until those requests have completed. You should use MPI_Wait/MPI_Waitall for that, as in the sketch after this list.
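
For illustration only, here is a minimal sketch of how the nonblocking exchange could be completed with MPI_Wait/MPI_Waitall, reusing array, partial_sum, total_proc and curr_proc from the question (the requests vector is an addition for this sketch and needs #include <vector>):

if( curr_proc == 0 ) {
    // root posts one nonblocking receive per worker, then waits for all of them
    std::vector<MPI_Request> requests(total_proc - 1);
    for( int i = 1; i < total_proc; i++ )
        MPI_Irecv( &array[i], 1, MPI_INT, i, 0, MPI_COMM_WORLD, &requests[i - 1] );
    MPI_Waitall( total_proc - 1, requests.data(), MPI_STATUSES_IGNORE );
} else {
    // every other rank sends its partial sum and waits until the send has completed
    MPI_Request send_request;
    MPI_Isend( &partial_sum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &send_request );
    MPI_Wait( &send_request, MPI_STATUS_IGNORE );
}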

A correct blocking version would look like this:

if( curr_proc == 0 ) {
   // master process receives one partial sum from every other rank
   for( int i = 1; i < total_proc; i++ )
      MPI_Recv( &array[i], 1, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
} else {
   // every other process sends its partial sum to the master (tag 0 matches the receive)
   MPI_Send( &partial_sum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD );
}

This many-to-one communication pattern is known as a gather. MPI already provides a function that performs it: MPI_Gather.
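
A minimal sketch of how MPI_Gather could replace the manual loop above, assuming partial_sum has already been computed on every rank and array can hold total_proc integers:

// every rank contributes its partial_sum; rank 0 receives them in rank order
MPI_Gather( &partial_sum, 1, MPI_INT,   /* what each rank sends             */
            array,        1, MPI_INT,   /* where rank 0 stores one per rank */
            0, MPI_COMM_WORLD );

if( curr_proc == 0 ) {
    int total = 0;
    for( int i = 0; i < total_proc; i++ )
        total += array[i];
    cout << "Sum is " << total << endl;
}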

Finally, what you are actually trying to do is called a reduction: take a set of numeric values and produce a single output value by successively applying a single operation (a sum, in your case). MPI has a function for that as well: MPI_Reduce.
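
A minimal sketch of the same step using MPI_Reduce, again assuming partial_sum has been computed on every rank; the intermediate array is no longer needed:

// combine every rank's partial_sum with MPI_SUM; only rank 0 receives the result in total
int total = 0;
MPI_Reduce( &partial_sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD );

if( curr_proc == 0 )
    cout << "Sum is " << total << endl;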

I strongly recommend working through some basic guided exercises before trying to do this on your own. MPI is hard to understand at the beginning, and building a solid foundation is vital so you can add complexity later on. A hands-on tutorial is also a good way of getting started with MPI.

EDIT: forgot to mention that you are not required to enforce an even division of the problem size (1000 in this case) by the number of resources (total_proc). Depending on the case, you can either assign the remainder to a single process:

chunk_size = 1000 / total_proc;
if( curr_proc == 0 )
    chunk_size += 1000 % total_proc;   /* rank 0 absorbs the leftover elements; the other ranks' limits must be shifted accordingly */

or balance it as much as possible:

int remainder = 1000 % total_proc;               /* elements left over after the even split   */
int extra     = (curr_proc < remainder) ? 1 : 0; /* this rank takes one extra element or not  */
lowerlimit = curr_proc * chunk_size                             /* as usual                   */
           + (curr_proc < remainder ? curr_proc : remainder)    /* extras taken by lower ranks */
           + 1;                                                 /* inclusive lower bound, as above */
upperlimit = lowerlimit + chunk_size + extra - 1;               /* this rank's last element   */

In the second case the load imbalance is at most 1, while in the first case it can reach total_proc - 1 in the worst case. For instance, with 1000 elements and 7 processes (chunk_size = 142, remainder = 6), the first scheme gives rank 0 a chunk of 148 elements while every other rank gets 142, whereas the balanced scheme gives ranks 0-5 143 elements each and rank 6 the remaining 142.

You only initialize array[i], the element corresponding to your curr_proc id. The other elements in that array are left uninitialized, so they contain garbage values. In the send/receive/print loop you only ever access the initialized elements.
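
A minimal sketch of one way to avoid reading those garbage values is to value-initialize the allocation so every element starts at zero (the communication fixes above are still needed to actually fill the array):

// the trailing () value-initializes the buffer: unwritten elements stay 0 instead of garbage
array = new int [total_proc]();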

I'm not very familiar with MPI, so I'm guessing, but you may need to allocate array before calling MPI_Init. Or call the receive (MPI_Recv) on process 0 rather than on each individual process.

