
How to find the sum of given numbers using MPI in C?

I am trying to find the sum of all the numbers in an array. I have to split the array into equal-sized pieces, send a piece to each process, and compute a partial sum on each one. Later, each process sends its partial sum back to the root process for the final answer. I know I can use MPI_Scatter . But my problem is: what if my list has an odd number of elements? For example, say I have an array with 13 elements and 3 processes. By default, MPI_Scatter will divide the array by 3 and leave out the last element, so basically it calculates the sum of only 12 elements. My output when I just use MPI_Scatter :

myid= 0 total= 6
myid= 1 total= 22
myid= 2 total= 38
results from all processors_= 66 
size= 13 

So I plan to use MPI_Scatter together with MPI_Send : I can take the leftover last element, send it through MPI_Send to one of the processes, have that process add it to its sum, and then gather everything back in the root process. But I am running into a problem. My code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>

/*  globals */
int numnodes, myid, mpi_err;
int last_core;
int n;
int last_elements[];

#define mpi_root 0
/* end globals  */

void init_it(int  *argc, char ***argv);

void init_it(int  *argc, char ***argv) {
    mpi_err = MPI_Init(argc, argv);
    mpi_err = MPI_Comm_size( MPI_COMM_WORLD, &numnodes );
    mpi_err = MPI_Comm_rank(MPI_COMM_WORLD, &myid);
}

int main(int argc, char *argv[]) {
    int *myray, *send_ray, *back_ray;
    int count;
    int size, mysize, i, k, j, total;

    MPI_Status status;

    init_it(&argc, &argv);

    /* each processor will get count elements from the root */
    count = 4;
    myray = (int*)malloc(count * sizeof(int));
    size = (count * numnodes) + 1;
    send_ray = (int*)malloc(size * sizeof(int));
    back_ray = (int*)malloc(numnodes * sizeof(int));
    last_core = numnodes - 1;

    /* create the data to be sent on the root */
    if(myid == mpi_root){
        for(i = 0; i < size; i++)
        {
            send_ray[i] = i;
        }
    }

    /* send different data to each processor */
    mpi_err = MPI_Scatter( send_ray, count, MPI_INT,
                           myray, count, MPI_INT,
                           mpi_root, MPI_COMM_WORLD);

    if(myid == mpi_root) {
        n = 1;
        memcpy(last_elements, &send_ray[size-n], n * sizeof(int));

        //Send the last numbers to the last core through send command
        MPI_Send(last_elements, n, MPI_INT, last_core, 99, MPI_COMM_WORLD);
    }

    /* each processor does a local sum */
    total = 0;
    for(i = 0; i < count; i++)
        total = total + myray[i];
        //total = total + send_ray[size-1];
    printf("myid= %d total= %d\n", myid, total);

    if(myid == last_core)
    {
        printf("Last core\n");
        MPI_Recv(last_elements, n, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
    }

    /* send the local sums back to the root */
    mpi_err = MPI_Gather(&total, 1, MPI_INT,
                        back_ray, 1, MPI_INT,
                        mpi_root, MPI_COMM_WORLD);

    /* the root prints the global sum */
    if(myid == mpi_root){
        total=0;
        for(i = 0; i < numnodes; i++)
            total = total + back_ray[i];
        printf("results from all processors_= %d \n", total);
        printf("size= %d \n ", size);
    }

    mpi_err = MPI_Finalize();
}

The output:

myid= 0 total= 6
myid= 1 total= 22
myid= 2 total= 38
Last core
[ubuntu:11884] *** An error occurred in MPI_Recv
[ubuntu:11884] *** on communicator MPI_COMM_WORLD
[ubuntu:11884] *** MPI_ERR_TRUNCATE: message truncated
[ubuntu:11884] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
mpiexec has exited due to process rank 2 with PID 11884 on
node ubuntu exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).

I know I am doing something wrong. I would appreciate it if you could point me in the right direction.

Your last_elements array does not have a size specified: int last_elements[]; is an incomplete array type, so MPI_Recv errors out because there is no space to put the items it is being sent. Your code is missing a malloc for last_elements. Note also that the global n is only set to 1 inside the root's branch; on the last rank it is still 0 (globals are zero-initialized), so the receive is posted with a count of 0 and the incoming 1-element message triggers MPI_ERR_TRUNCATE.
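A minimal sketch of the fix, keeping the variable names from the question: make n known on every rank, and give last_elements real storage before the send/receive pair.

/*  globals */
int n = 1;            /* number of leftover elements, known on every rank */
int *last_elements;   /* was: int last_elements[]; */

/* in main(), before the MPI_Send / MPI_Recv pair */
last_elements = (int*)malloc(n * sizeof(int));

The last rank then still has to add last_elements[0] to its total before the MPI_Gather; with the 13 elements 0..12, the global sum should come out to 78 instead of 66.

More generally, the idiomatic way to scatter an array whose length is not a multiple of the process count is MPI_Scatterv, which takes a per-rank count and displacement, so no separate send of the leftovers is needed. A self-contained sketch under the question's assumptions (13 elements, any number of processes), using MPI_Reduce to sum the partial totals:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int numnodes, myid, i, total = 0, grand_total = 0;
    int size = 13;                      /* total number of elements */

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numnodes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* the first (size % numnodes) ranks get one extra element,
       so all elements are covered even when the split is uneven */
    int *counts = (int*)malloc(numnodes * sizeof(int));
    int *displs = (int*)malloc(numnodes * sizeof(int));
    for (i = 0; i < numnodes; i++) {
        counts[i] = size / numnodes + (i < size % numnodes ? 1 : 0);
        displs[i] = (i == 0) ? 0 : displs[i-1] + counts[i-1];
    }

    /* only the root needs the full array */
    int *send_ray = NULL;
    if (myid == 0) {
        send_ray = (int*)malloc(size * sizeof(int));
        for (i = 0; i < size; i++)
            send_ray[i] = i;
    }

    int *myray = (int*)malloc(counts[myid] * sizeof(int));
    MPI_Scatterv(send_ray, counts, displs, MPI_INT,
                 myray, counts[myid], MPI_INT, 0, MPI_COMM_WORLD);

    for (i = 0; i < counts[myid]; i++)
        total += myray[i];
    printf("myid= %d total= %d\n", myid, total);

    /* sum the partial totals directly on the root */
    MPI_Reduce(&total, &grand_total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0)
        printf("results from all processors_= %d \n", grand_total); /* 78 */

    MPI_Finalize();
    return 0;
}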

I may be answering very late, but maybe others can get help from this.

Please check out the following code:

# include <cstdlib>
# include <iostream>
# include <iomanip>
# include <ctime>
# include <mpi.h>

using namespace std;

int main ( int argc, char *argv[] );
void timestamp ( );

//****************************************************************************80

int main ( int argc, char *argv[] )

//****************************************************************************80

{
  int *a;
  int dest;
  float factor;
  int global;
  int i;
  int id;
  int ierr;
  int n;
  int npart;
  int p;
  int source;
  int start;
  MPI_Status status;
  int tag;
  int tag_target = 1;
  int tag_size = 2;
  int tag_data = 3;
  int tag_found = 4;
  int tag_done = 5;
  int target;
  int workers_done;
  int x;
//
//  Initialize MPI.
//
  ierr = MPI_Init ( &argc, &argv );
//
//  Get this processes's rank.
//
  ierr = MPI_Comm_rank ( MPI_COMM_WORLD, &id );
//
//  Find out how many processes are available.
//
  ierr = MPI_Comm_size ( MPI_COMM_WORLD, &p );

  if ( id == 0 )
  {
    timestamp ( );
    cout << "\n";
    cout << "SEARCH - Master process:\n";
    cout << "  C++ version\n";
    cout << "  An example MPI program to search an array.\n";
    cout << "\n";
    cout << "  Compiled on " << __DATE__ << " at " << __TIME__ << ".\n";
    cout << "\n";
    cout << "  The number of processes is " << p << "\n";
  }

  cout << "\n";
  cout << "Process " << id << " is active.\n";
//
//  Have the master process generate the target and data.  In a more 
//  realistic application, the data might be in a file which the master 
//  process would read.  Here, the master process decides.
//
  if ( id == 0 )
  {
//
//  Pick the number of data items per process, and set the total.
//
    factor = ( float ) rand ( ) / ( float ) RAND_MAX;
    npart = 50 + ( int ) ( factor * 100.0E+00 );
    n = npart * p;

    cout << "\n";
    cout << "SEARCH - Master process:\n";
    cout << "  The number of data items per process is " << npart << "\n";
    cout << "  The total number of data items is       " << n << ".\n";
//
//  Now allocate the master copy of A, fill it with values, and pick 
//  a value for the target.
//
    a = new int[n];

    factor = ( float ) n / 10.0E+00 ;

    for ( i = 0; i < n; i++ ) 
    {
      a[i] = ( int ) ( factor * ( float ) rand ( ) / ( float ) RAND_MAX );
    }
    target = a[n/2];

    cout << "  The target value is " << target << ".\n";
//
//  The worker processes need to have the target value, the number of data items,
//  and their individual chunk of the data vector.
//
    for ( i = 1; i <= p-1; i++ )
    {
      dest = i;
      tag = tag_target;

      ierr = MPI_Send ( &target, 1, MPI_INT, dest, tag, MPI_COMM_WORLD );

      tag = tag_size;

      ierr = MPI_Send ( &npart, 1, MPI_INT, dest, tag, MPI_COMM_WORLD );

      start = ( i - 1 ) * npart;
      tag = tag_data;

      ierr = MPI_Send ( a+start, npart, MPI_INT, dest, tag, MPI_COMM_WORLD );
    }
//
//  Now the master process simply waits for each worker process to report that 
//  it is done.
//
    workers_done = 0;

    while ( workers_done < p-1 )
    {
      ierr = MPI_Recv ( &x, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
        MPI_COMM_WORLD, &status );

      source = status.MPI_SOURCE;
      tag = status.MPI_TAG;

      if ( tag == tag_done )
      {
        workers_done = workers_done + 1;
      }
      else if ( tag == tag_found )
      {
        cout << "P" << source << "  " << x << "  " << a[x] << "\n";
      }
      else
      {
        cout << "  Master process received message with unknown tag = "
             << tag << ".\n";
      }

    }
//
//  The master process can throw away A now.
//
    delete [] a;
  }
//
//  Each worker process expects to receive the target value, the number of data
//  items, and the data vector.
//
  else 
  {
    source = 0;
    tag = tag_target;

    ierr = MPI_Recv ( &target, 1, MPI_INT, source, tag, MPI_COMM_WORLD,
      &status );

    source = 0;
    tag = tag_size;

    ierr = MPI_Recv ( &npart, 1, MPI_INT, source, tag, MPI_COMM_WORLD, 
      &status );

    a = new int[npart];

    source = 0;
    tag = tag_data;

    ierr = MPI_Recv ( a, npart, MPI_INT, source, tag, MPI_COMM_WORLD,
      &status );
//
//  The worker simply checks each entry to see if it is equal to the target
//  value.
//
    for ( i = 0; i < npart; i++ )
    {
      if ( a[i] == target )
      {
        global = ( id - 1 ) * npart + i;
        dest = 0;
        tag = tag_found;

        ierr = MPI_Send ( &global, 1, MPI_INT, dest, tag, MPI_COMM_WORLD );
      }
    }
//
//  When the worker is finished with the loop, it sends a dummy data value with
//  the tag "TAG_DONE" indicating that it is done.
//
    dest = 0;
    tag = tag_done;

    ierr = MPI_Send ( &target, 1, MPI_INT, dest, tag, MPI_COMM_WORLD );

    delete [] ( a );
  }
//
//  Terminate MPI.
//
  MPI_Finalize ( );
//
//  Terminate.
//
  if ( id == 0 )
  {
    cout << "\n";
    cout << "SEARCH - Master process:\n";
    cout << "  Normal end of execution.\n";
    cout << "\n";
    timestamp ( );
  } 
  return 0;
}
//****************************************************************************80

void timestamp ( )

//****************************************************************************80

{
# define TIME_SIZE 40

  static char time_buffer[TIME_SIZE];
  const struct std::tm *tm_ptr;
  size_t len;
  std::time_t now;

  now = std::time ( NULL );
  tm_ptr = std::localtime ( &now );

  len = std::strftime ( time_buffer, TIME_SIZE, "%d %B %Y %I:%M:%S %p", tm_ptr );

  std::cout << time_buffer << "\n";

  return;
# undef TIME_SIZE
}

and the output is:

SEARCH - Master process:
A program using MPI, to search an array.
Compiled on jan  14 2018 at 11:21:45.

The number of processes is 4

Process 0 is active.

SEARCH - Master process:
The number of data items per process is 101
The total number of data items is       404.
The target value is 14.
P3  202  14
P2  145  14
P2  178  14
P2  180  14
P3  211  14
P3  240  14
P3  266  14
P3  295  14
P1  12  14
P1  23  14
P1  36  14
P1  71  14

SEARCH - Master process:
  Normal end of execution.

Process 1 is active.

Process 2 is active.

Process 3 is active.
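For reference, assuming a standard MPI installation such as Open MPI or MPICH, the example above can be built with the C++ compiler wrapper and launched with something like mpic++ search.cpp -o search followed by mpiexec -n 4 ./search. The interleaving of the "Process N is active." lines varies from run to run, because each rank writes to stdout independently.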
