
Why are my MPI and OpenMP merge sorts slower than my sequential code?

I have written a merge sort in C in three versions: sequential, with OpenMP, and with MPI. I used an array of 100 random elements. The sequential code is the following:


#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mpi.h>

void merge(int arr[], int indexA, int indexB, int end, int arrOut[]);
void mergeSort(int arr[], int inf, int sup, int arrOut[]);

int main(int argc, char **argv){
    int N = 100;
    int my_array[N];
    int outputArray[N];
    int length = sizeof(my_array) / sizeof(my_array[0]);
    double start_time, end_time;
    srand(time(NULL));
    int i;
    for (i=0; i<N; i++){
        my_array[i]=rand()%100 + 1;
    }
    //print the array
    for (i=0; i<N; i++){
        printf("%d ", my_array[i]);
    }

    printf("\n--------------\n");
    MPI_Init(&argc, &argv);   // MPI_Wtime() may only be called after MPI_Init
    start_time = MPI_Wtime();
    mergeSort(my_array, 0, length-1, outputArray);
    end_time = MPI_Wtime();
    for(i=0; i<N; i++){
        printf("%d ", my_array[i]);
    }
    printf("\n");
    printf("\nTime taken: %f\n", (end_time - start_time));
    MPI_Finalize();
    return 0;
}


void merge(int arr[], int indexA, int indexB, int end, int arrOut[]){
    int i=indexA, j=indexB, k=indexA;
    while(i<=indexB-1 && j<=end){
        if(arr[i]<arr[j]){
            //i=i+1;
            arrOut[k]=arr[i++];
        }
        else{
            //j=j+1;
            arrOut[k]=arr[j++];
        }
        k++;
    }
    while(i<=indexB-1){
        //i++;
        arrOut[k]=arr[i++];
        k++;
    }
    while(j<=end){
        //j++;
        arrOut[k]=arr[j++];
        k++;
    }
    for(i=indexA; i<=end; i++)
        arr[i]=arrOut[i];
}

void mergeSort(int arr[], int inf, int sup, int arrOut[]){
    int medium;
    if(inf<sup){
        medium=(inf+sup)/2;
        mergeSort(arr, inf, medium, arrOut);
        mergeSort(arr, medium+1, sup, arrOut);
        merge(arr, inf, medium+1, sup, arrOut);
    }
}

Then, the MPI implementation is the following (it starts just after the creation of the random array):

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &n_ranks);
    start_time = MPI_Wtime();

    size=N/n_ranks;
    sub_array=malloc(size*sizeof(int));
    temp=malloc(size*sizeof(int));
    MPI_Scatter(my_array, size, MPI_INT, sub_array, size, MPI_INT, 0, MPI_COMM_WORLD);
    mergeSort(sub_array, 0, size-1, temp);  // size-1, not length-1: sub_array holds only N/n_ranks elements
    MPI_Gather(sub_array, size, MPI_INT, outputArray, size, MPI_INT, 0, MPI_COMM_WORLD);

    if(rank==0){
        int *temp_array=malloc(N*sizeof(int));
        mergeSort(outputArray, 0, length-1, temp_array);
        for(i=0; i<N; i++){
            printf("%d ", temp_array[i]);
        }
        free(temp_array);
    }

    //free(&my_array);
    free(sub_array);
    free(temp);

    //MPI_Barrier(MPI_COMM_WORLD);
    end_time = MPI_Wtime();

EDITED CODE OPENMP: And finally, this is the OpenMP version (the main is the same):

void parallelMergeSort(int arr[], int inf, int sup, int arrOut[], int level){
    if (level==0){
        #pragma omp parallel
        #pragma omp single
        parallelMergeSort(arr, inf, sup, arrOut, 1);
    }
    else if(level<8){
        #pragma omp task shared(arr, arrOut)
        {
            parallelMergeSort(arr, inf, (inf+sup)/2, arrOut, level+1);
        }
        #pragma omp task shared(arr, arrOut)
        {
            parallelMergeSort(arr, (inf+sup)/2 + 1, sup, arrOut, level+1);
        }
    }
    #pragma omp taskwait
    {
        mergeSort(arr, inf, sup, arrOut);
    }   
}

If I apply these codes to an array of 100 elements, the execution time is higher for the MPI and OpenMP versions:

Time sequential: 0.000044

Time OpenMP: 0.00949953

Time MPI: 0.003077

Edit: If I try with 10^6 random elements, results are these:

Time sequential: 0.899016

Time OpenMP: segmentation fault

Time MPI: 25.625195

How can I improve these results?

I do not know MPI, so I only answer the OpenMP part of the question. Without changing the algorithm, the OpenMP version of your mergeSort function should look something like this:

void parallelMergeSort(int arr[], int inf, int sup, int arrOut[], int level){
    if(inf<sup){
        int medium=(inf+sup)/2;
        #pragma omp task shared(arr, arrOut) if(level>0)
        parallelMergeSort(arr, inf, medium, arrOut, level-1);
        parallelMergeSort(arr, medium+1, sup, arrOut, level-1);
        #pragma omp taskwait
        merge(arr, inf, medium+1, sup, arrOut);
    }
}

I have used the if(level>0) clause to avoid spawning too many tasks. On my computer, level=4 gives the shortest runtimes, but of course it depends on the number of cores available and on the size of the array. Note that I did not put a second #pragma omp task before the second parallelMergeSort call, because the code runs faster this way: the current thread sorts the right half itself instead of idling at the taskwait. You should call this function using:

#pragma omp parallel
#pragma omp single
parallelMergeSort(my_array, 0, length-1, outputArray,4); 

If you wish to change the algorithm for better parallelization, please read the documents I have linked in the comments.
