
Why are MPI and OpenMP merge sort slower than my sequential code?

I have written a merge sort in C and implemented it sequentially, with OpenMP, and with MPI, using an array of 100 random elements. The sequential code is the following:


#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mpi.h>   /* MPI_Wtime() is used here only as a timer */

void merge(int arr[], int indexA, int indexB, int end, int arrOut[]);
void mergeSort(int arr[], int inf, int sup, int arrOut[]);

int main(){
    int N = 100;
    int my_array[N];
    int outputArray[N];
    int length = sizeof(my_array) / sizeof(my_array[0]);
    double start_time, end_time;
    srand(time(NULL));
    int i;
    for (i=0; i<N; i++){
        my_array[i]=rand()%100 + 1;
    }
    //print the unsorted array
    for (i=0; i<N; i++){
        printf("%d ", my_array[i]);
    }

    printf("\n--------------\n");
    start_time = MPI_Wtime();
    mergeSort(my_array, 0, length-1, outputArray);
    end_time = MPI_Wtime();
    //print the sorted array
    for(i=0; i<N; i++){
        printf("%d ", my_array[i]);
    }
    printf("\n");
    printf("\nElapsed time: %f\n", (end_time - start_time));
    return 0;
}


/* Merge the two sorted runs arr[indexA..indexB-1] and arr[indexB..end]
   into arrOut, then copy the result back into arr. */
void merge(int arr[], int indexA, int indexB, int end, int arrOut[]){
    int i=indexA, j=indexB, k=indexA;
    while(i<=indexB-1 && j<=end){
        if(arr[i]<arr[j]){
            arrOut[k]=arr[i++];
        }
        else{
            arrOut[k]=arr[j++];
        }
        k++;
    }
    /* copy any leftovers of the left run */
    while(i<=indexB-1){
        arrOut[k]=arr[i++];
        k++;
    }
    /* copy any leftovers of the right run */
    while(j<=end){
        arrOut[k]=arr[j++];
        k++;
    }
    for(i=indexA; i<=end; i++)
        arr[i]=arrOut[i];
}

/* Sort arr[inf..sup] (inclusive bounds), using arrOut as scratch space. */
void mergeSort(int arr[], int inf, int sup, int arrOut[]){
    int medium;
    if(inf<sup){
        medium=(inf+sup)/2;
        mergeSort(arr, inf, medium, arrOut);
        mergeSort(arr, medium+1, sup, arrOut);
        merge(arr, inf, medium+1, sup, arrOut);
    }
}

The implementation with MPI is the following (it starts just after the creation of the random array):

    /* rank, n_ranks, size, sub_array and temp are declared in main(),
       which takes argc/argv so they can be passed to MPI_Init */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &n_ranks);
    start_time = MPI_Wtime();

    size=N/n_ranks;                      /* number of elements per rank */
    sub_array=malloc(size*sizeof(int));
    temp=malloc(size*sizeof(int));

    /* distribute equal chunks of my_array to all ranks */
    MPI_Scatter(my_array, size, MPI_INT, sub_array, size, MPI_INT, 0, MPI_COMM_WORLD);

    /* each rank sorts only its own chunk of 'size' elements
       (not 'length', which is the size of the whole array) */
    mergeSort(sub_array, 0, size-1, temp);

    /* collect the locally sorted chunks back on rank 0 */
    MPI_Gather(sub_array, size, MPI_INT, outputArray, size, MPI_INT, 0, MPI_COMM_WORLD);

    if(rank==0){
        /* the gathered array is only sorted chunk by chunk, so sort it once more */
        int *temp_array=malloc(N*sizeof(int));
        mergeSort(outputArray, 0, length-1, temp_array);
        for(i=0; i<N; i++){
            printf("%d ", temp_array[i]);
        }
        free(temp_array);
    }

    free(sub_array);
    free(temp);

    end_time = MPI_Wtime();
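
Timing note: each rank measures its own start_time and end_time, so the number reported depends on which rank prints it. A minimal sketch of collapsing the per-rank timings into a single figure (the maximum across ranks) could look like the following; local_elapsed and max_elapsed are illustrative additions, not variables in the code above, and this would run before MPI_Finalize():

    /* illustrative sketch: reduce per-rank elapsed times to their maximum
       so rank 0 can print one comparable figure */
    double local_elapsed = end_time - start_time;
    double max_elapsed;
    MPI_Reduce(&local_elapsed, &max_elapsed, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if(rank==0){
        printf("\nElapsed time (max over ranks): %f\n", max_elapsed);
    }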

EDITED CODE OPENMP: And finally, this is the OpenMP version (main is the same):

void parallelMergeSort(int arr[], int inf, int sup, int arrOut[], int level){
    if (level==0){
        #pragma omp parallel
        #pragma omp single
        parallelMergeSort(arr, inf, sup, arrOut, 1);
    }
    else if(level<8){
        #pragma omp task shared(arr, arrOut)
        {
            parallelMergeSort(arr, inf, (inf+sup)/2, arrOut, level+1);
        }
        #pragma omp task shared(arr, arrOut)
        {
            parallelMergeSort(arr, (inf+sup)/2 + 1, sup, arrOut, level+1);
        }
    }
    #pragma omp taskwait
    {
        mergeSort(arr, inf, sup, arrOut);
    }   
}

If I apply these codes to an array of 100 elements, the execution time is higher for the MPI and OpenMP versions:

Time sequential: 0.000044

Time OpenMP: 0.00949953

Time MPI: 0.003077

Edit: If I try with 10^6 random elements, the results are:

Time sequential: 0.899016

Time OpenMP: segmentation fault

Time MPI: 25.625195

How can I improve these results?

I do not know MPI, so I only answer the OpenMP part of the question. Without changing the algorithm, the OpenMP version of your mergeSort function should look something like this:

void parallelMergeSort(int arr[], int inf, int sup, int arrOut[], int level){
    if(inf<sup){
        int medium=(inf+sup)/2;
        #pragma omp task shared(arr, arrOut) if(level>0)
          parallelMergeSort(arr, inf, medium, arrOut, level-1);   
        parallelMergeSort(arr, medium+1, sup, arrOut, level-1);
        #pragma omp taskwait
         merge(arr, inf, medium+1, sup, arrOut);
    }
}

I have used the if(level>0) clause to avoid starting too many tasks. On my computer, level=4 gives the shortest runtimes, but of course it depends on the number of cores available and on the size of the array. Note that I did not use a second #pragma omp task line before the second parallelMergeSort call, because the code runs faster this way. You should call this function as follows:

#pragma omp parallel
#pragma omp single
parallelMergeSort(my_array, 0, length-1, outputArray,4); 
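
For reference, a minimal sketch of timing this call so it can be compared with the sequential and MPI figures above, using omp_get_wtime() (the t0/t1 variables are just for illustration):

double t0 = omp_get_wtime();   /* needs #include <omp.h> */
#pragma omp parallel
#pragma omp single
parallelMergeSort(my_array, 0, length-1, outputArray, 4);
double t1 = omp_get_wtime();
printf("Time OpenMP: %f\n", t1 - t0);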

If you wish to change the algorithm for better parallelization, please read the documents I have linked in the comments.
