
Optimising: why is OpenMP much slower than the sequential way?

I am a newbie in programming with OpenMP. I wrote a simple C program to multiply a matrix with a vector. Unfortunately, comparing execution times, I found that OpenMP is much slower than the sequential version.

Here is my code (the matrix is N*N int, the vector is N int, and the result is N long long):

#pragma omp parallel for private(i,j) shared(matrix,vector,result,m_size)
for(i=0;i<m_size;i++)
{  
  for(j=0;j<m_size;j++)
  {  
    result[i]+=matrix[i][j]*vector[j];
  }
}

And this is the code for sequential way:

for (i=0;i<m_size;i++)
        for(j=0;j<m_size;j++)
            result[i] += matrix[i][j] * vector[j];

When I tried these two implementations with a 999x999 matrix and a 999-element vector, the execution times were:

Sequential: 5439 ms
Parallel:   11120 ms

I really cannot understand why OpenMP is so much slower than the sequential algorithm (over 2 times slower!). Can anyone help me solve this problem?

Your code partially suffers from so-called false sharing, typical for all cache-coherent systems. In short, many elements of the result[] array fit in the same cache line. When thread i writes to result[i] as a result of the += operator, the cache line holding that part of result[] becomes dirty. The cache coherency protocol then invalidates all copies of that cache line in the other cores, and they have to refresh their copy from the upper-level cache or from main memory. Since result is an array of long long, one cache line (64 bytes on x86) holds 8 elements, so besides result[i] there are 7 other array elements in the same cache line. Therefore it is possible that two "neighbouring" threads will constantly fight for ownership of the cache line (assuming that each thread runs on a separate core).

To mitigate false sharing in your case, the easiest thing to do is to ensure that each thread gets an iteration block whose size is divisible by the number of elements in a cache line. For example, you can apply schedule(static,something*8), where something should be big enough that the iteration space is not fragmented into too many pieces, but at the same time small enough that each thread still gets a block. E.g., for m_size equal to 999 and 4 threads, you would apply the schedule(static,256) clause to the parallel for construct.
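
Written out on the loop from the question, this would look something like the following (a minimal sketch; the schedule clause is the only change):

#pragma omp parallel for private(i,j) shared(matrix,vector,result,m_size) schedule(static,256)
for(i=0;i<m_size;i++)
{
  for(j=0;j<m_size;j++)
  {
    result[i]+=matrix[i][j]*vector[j];
  }
}

With 4 threads and m_size = 999, threads 0-2 each receive one 256-iteration block and thread 3 receives the remaining 231 iterations, so the result[] ranges written by different threads are at least 256 elements (32 cache lines of 64 bytes) apart.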

Another partial reason for the code to run slower might be that, when OpenMP is enabled, the compiler becomes reluctant to apply some code optimisations when shared variables are being assigned to. OpenMP provides a so-called relaxed memory model, in which the local memory view of a shared variable in each thread is allowed to differ, and the flush construct is provided in order to synchronise the views. But compilers usually treat shared variables as implicitly volatile if they cannot prove that other threads would not need to access desynchronised shared variables. Your case is one of those, since result[i] is only assigned to and the value of result[i] is never used by other threads. In the serial case the compiler would most likely create a temporary variable to hold the result of the inner loop and would only assign to result[i] once the inner loop has finished. In the parallel case it might decide that this would create a temporary desynchronised view of result[i] in the other threads and hence decide not to apply the optimisation. Just for the record, GCC 4.7.1 with -O3 -ftree-vectorize does the temporary-variable trick with OpenMP both enabled and disabled.
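
To make this concrete, here is roughly that transformation written out by hand on the loop from the question (whether the compiler actually applies it on its own is its decision, as noted above):

for(i=0;i<m_size;i++)
{
  long long tmp = 0;               /* accumulator kept in a register   */
  for(j=0;j<m_size;j++)
    tmp += matrix[i][j]*vector[j];
  result[i] = tmp;                 /* single store to the shared array */
}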

Because when OpenMP distributes the work among threads, there is a lot of administration/synchronisation going on to ensure the values in your shared matrix and vector are not corrupted somehow, even though they are read-only: humans see that easily, but your compiler may not.

Things to try out for pedagogic reasons:

0) What happens if matrix and vector are not shared?

1) Parallelize the inner "j-loop" first, keep the outer "i-loop" serial. See what happens (a sketch of this variant follows the list).

2) Do not collect the sum in result[i], but in a variable temp, and assign its contents to result[i] only after the inner loop has finished, to avoid repeated index lookups. Don't forget to initialise temp to 0 before the inner loop starts.
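
For item 1, a minimal sketch of what the inner-loop variant could look like (the running sum needs a reduction clause; the parallel region is now entered once per row, and the extra overhead of doing so is part of the pedagogic point):

for(i=0;i<m_size;i++)
{
  long long sum = 0;
  #pragma omp parallel for reduction(+:sum)
  for(j=0;j<m_size;j++)
    sum += matrix[i][j]*vector[j];
  result[i] = sum;
}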

I did this in reference to Hristo's comment. I tried using schedule(static, 256). For me, changing the default chunk size does not help; maybe it even makes things worse. I printed out the thread number and its iteration index, with and without setting the schedule, and it's clear that OpenMP already places the iterations of different threads far from one another, so false sharing does not seem to be an issue here. For me this code already gives a good boost with OpenMP.

#include "stdio.h"
#include <omp.h>

// Row-major matrix-vector product (leading dimension ld), rows split across threads.
void loop_parallel(const int *matrix, const int ld, const int *vector, long long *result, const int m_size) {
    #pragma omp parallel for schedule(static, 250)
    //#pragma omp parallel for
    for (int i = 0; i < m_size; i++) {
        //printf("%d %d\n", omp_get_thread_num(), i);
        long long sum = 0;
        for (int j = 0; j < m_size; j++) {
            sum += matrix[i*ld + j] * vector[j];
        }
        result[i] = sum;
    }
}

// Sequential reference version of the same product.
void loop(const int *matrix, const int ld, const int *vector, long long *result, const int m_size) {
    for (int i = 0; i < m_size; i++) {
        long long sum = 0;
        for (int j = 0; j < m_size; j++) {
            sum += matrix[i*ld + j] * vector[j];
        }
        result[i] = sum;
    }
}

int main() {
    const int m_size = 1000;
    int *matrix = new int[m_size*m_size];
    int *vector = new int[m_size];
    long long *result = new long long[m_size];
    double dtime;

    // Initialise the inputs so the loops operate on defined values.
    for (int i = 0; i < m_size*m_size; i++) matrix[i] = 1;
    for (int i = 0; i < m_size; i++) vector[i] = 1;

    dtime = omp_get_wtime();
    loop(matrix, m_size, vector, result, m_size);
    dtime = omp_get_wtime() - dtime;
    printf("time %f\n", dtime);

    dtime = omp_get_wtime();
    loop_parallel(matrix, m_size, vector, result, m_size);
    dtime = omp_get_wtime() - dtime;
    printf("time %f\n", dtime);

    delete[] matrix;
    delete[] vector;
    delete[] result;
}
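
(The answer doesn't give a build command; assuming GCC, something along the lines of g++ -O3 -fopenmp matvec.cpp should work. The file name is only illustrative, and without -fopenmp the pragmas are ignored and omp_get_wtime is unavailable.)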
