
OpenMP parallel thread

I need to parallelize this loop. I thought that using OpenMP would be a good idea, but I have never studied it before.

#pragma omp parallel for
for (std::set<size_t>::const_iterator it = mesh->NEList[vid].begin();
     it != mesh->NEList[vid].end(); ++it) {
    worst_q = std::min(worst_q, mesh->element_quality(*it));
}

In this case the loop is not parallelized because it uses iterators and the compiler cannot work out how to split it.

Can you help me?

OpenMP requires that the controlling predicate in parallel for loops uses one of the following relational operators: <, <=, > or >=. Only random access iterators provide these operators, hence OpenMP parallel loops work only with containers that provide random access iterators. std::set provides only bidirectional iterators. You can overcome that limitation using explicit tasks. The reduction can be performed by first reducing partially into variables private to each thread, followed by a global reduction over the partial values.

#include <algorithm>  // std::min
#include <omp.h>      // omp_get_num_threads(), omp_get_thread_num()

double *t_worst_q;
// Cache line size on x86/x64 in number of t_worst_q[] elements
const int cb = 64 / sizeof(*t_worst_q);

#pragma omp parallel
{
   #pragma omp single
   {
      t_worst_q = new double[omp_get_num_threads() * cb];
      for (int i = 0; i < omp_get_num_threads(); i++)
         t_worst_q[i * cb] = worst_q;
   }

   // Perform partial min reduction using tasks
   #pragma omp single
   {
      for(std::set<size_t>::const_iterator it=mesh->NEList[vid].begin();
          it!=mesh->NEList[vid].end(); ++it) {
         size_t elem = *it;
         #pragma omp task
         {
            int tid = omp_get_thread_num();
            t_worst_q[tid * cb] = std::min(t_worst_q[tid * cb],
                                           mesh->element_quality(elem));
         }
      }
   }

   // Perform global reduction
   #pragma omp critical
   {
      int tid = omp_get_thread_num();
      worst_q = std::min(worst_q, t_worst_q[tid * cb]);
   }
}

delete [] t_worst_q;

(I assume that mesh->element_quality() returns a double.)

Some key points:

  • The loop is executed serially by one thread only, but each iteration creates a new task. These are most likely queued for execution by the idle threads.
  • Idle threads waiting at the implicit barrier of the single construct begin consuming tasks as soon as they are created.
  • The value pointed to by it is dereferenced before the task body. If it were dereferenced inside the task body, it would be firstprivate and a copy of the iterator would be created for each task (i.e. on each iteration). This is not what you want.
  • Each thread performs partial reduction in its private part of the t_worst_q[] .
  • In order to prevent performance degradation due to false sharing, the elements of t_worst_q[] that each thread accesses are spaced out so to end up in separate cache lines. On x86/x64 the cache line is 64 bytes, therefore the thread number is multiplied by cb = 64 / sizeof(double) .
  • The global min reduction is performed inside a critical construct to protect worst_q from being accessed by several threads at once. This is for illustrative purposes only, since the reduction could also be performed by a serial loop after the parallel region (see the sketch after this list).
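
As a sketch of that alternative, assuming the number of threads is saved inside the region, the critical construct can be dropped and the final reduction done serially:

int nthreads;

#pragma omp parallel
{
   #pragma omp single
   nthreads = omp_get_num_threads();

   // ... t_worst_q[] allocation and the task-based partial reduction as above ...
}

// Serial global reduction over the per-thread partial minima
for (int i = 0; i < nthreads; i++)
   worst_q = std::min(worst_q, t_worst_q[i * cb]);

delete [] t_worst_q;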

Note that explicit tasks require a compiler which supports OpenMP 3.0 or 3.1. This rules out all versions of Microsoft C/C++ Compiler (it only supports OpenMP 2.0).
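
If you are not sure which OpenMP version your compiler implements, one way to check at compile time is the standard _OPENMP version macro (200805 corresponds to OpenMP 3.0 and 201107 to OpenMP 3.1):

#if defined(_OPENMP) && _OPENMP >= 200805
// OpenMP 3.0 or later: explicit tasks are available
#endif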

Random-Access Container

The simplest solution is to just throw everything into a random-access container (like std::vector ) and use the index-based loops that are favoured by OpenMP:

// Copy the elements into a random-access container
std::vector<size_t> neListVector(mesh->NEList[vid].begin(), mesh->NEList[vid].end());

// Process in a standard OpenMP index-based for loop
#pragma omp parallel for reduction(min : worst_q)
for (size_t i = 0; i < neListVector.size(); i++) {
    worst_q = std::min(worst_q, mesh->element_quality(neListVector[i]));
}

Apart from being incredibly simple, in your situation (tiny elements of type size_t that can easily be copied) this is also the solution with the best performance and scalability. Note that the reduction(min : ...) clause itself requires OpenMP 3.1.

Avoiding copies

However, in a different situation than yours you may have elements that aren't copied as easily (larger elements) or cannot be copied at all. In this case you can just throw the corresponding pointers into a random-access container:

// Collect pointers to the elements instead of copying them
std::vector<const nonCopiableObjectType *> neListVector;
neListVector.reserve(mesh->NEList[vid].size()); // avoid reallocations
for (const auto &entry : mesh->NEList[vid]) {
    neListVector.push_back(&entry);
}

// Process in a standard OpenMP index-based for loop
#pragma omp parallel for reduction(min : worst_q)
for (int i = 0; i < neListVector.size(); i++) {
    worst_q = std::min(worst_q, mesh->element_quality(*neListVector[i]));
}

This is slightly more complex than the first solution, but it still has the same good performance on small elements and better performance on larger ones.

Tasks and Dynamic Scheduling

Since someone else brought up OpenMP tasks in their answer, I want to comment on that too. Tasks are a very powerful construct, but they have a huge overhead (which even increases with the number of threads) and in this case they just make things more complex.

For the min reduction the use of tasks is never justified, because the creation of a task in the main thread costs much more than just doing the std::min itself!

For the more complex operation mesh->element_quality you might think that the dynamic nature of tasks can help you with load-balancing problems, in case the execution time of mesh->element_quality varies greatly between iterations and you don't have enough iterations to even it out. But even in that case there is a simpler solution: simply use dynamic scheduling by adding the schedule(dynamic) clause to the parallel for line in one of my previous solutions. It achieves the same behaviour with far less overhead.
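
For example, the first solution with dynamic scheduling added would look like this:

#pragma omp parallel for reduction(min : worst_q) schedule(dynamic)
for (size_t i = 0; i < neListVector.size(); i++) {
    worst_q = std::min(worst_q, mesh->element_quality(neListVector[i]));
}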
