
OpenMP calls and directives allowed in firstprivate variable construction?

I have the following code, which works on the compilers I have available (xlC and gcc), but I don't know if it is fully compliant (I did not find anything in the OpenMP 3.0 spec that explicitly disallows it):

#include <iostream>
#include <vector>
#include <omp.h>

struct A {
  int tid;
  A() : tid(-1) { }                            // default-constructed: no thread id yet
  A(const A&) { tid = omp_get_thread_num(); }  // records the id of the thread that runs the copy
};

int main() {
  A a;

  std::vector<int> v(10);
  std::vector<int>::iterator it;
#pragma omp parallel for firstprivate(a)
  for (it=v.begin(); it<v.end(); ++it)
    *it += a.tid;

  for (it=v.begin(); it<v.end(); ++it)
    std::cout << *it << ' ';
  std::cout << std::endl;
  return 0;
}

My motivation is to find out the number of threads and each thread's id inside the omp parallel for section (without calling omp_get_thread_num() for every element being processed). Is there any chance that I'm causing undefined behavior?

I would just decouple the (start of the) parallel region from the loop, and use a private variable to keep tid:

std::vector<int>::iterator it;
int tid;
#pragma omp parallel private(tid)
{
    tid = omp_get_thread_num(); // queried once per thread, not once per element
    #pragma omp for
    for (it=v.begin(); it<v.end(); ++it)
        *it += tid; 
}
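
If you also need the number of threads, the same pattern extends naturally. Below is a minimal sketch (an addition, assuming v is the vector from the question); the nthreads variable is only illustrative:

std::vector<int>::iterator it;
int tid, nthreads;
#pragma omp parallel private(tid, nthreads)
{
    tid = omp_get_thread_num();       // this thread's id, queried once
    nthreads = omp_get_num_threads(); // team size, queried once per thread
    #pragma omp for
    for (it = v.begin(); it < v.end(); ++it)
        *it += tid;                   // nthreads is available here if needed
}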

Added: below are quotes from the OpenMP specification (Section 2.9.3.4) that make me think your code is conformant and so does not produce UB (however, see another addition below):

... the new list item is initialized from the original list item existing before the construct. The initialization of the new list item is done once for each task that references the list item in any statement in the construct. The initialization is done prior to the execution of the construct.

For a firstprivate clause on a parallel or task construct, the initial value of the new list item is the value of the original list item that exists immediately prior to the construct in the task region where the construct is encountered.

C/C++: ... For variables of class type, a copy constructor is invoked to perform the initialization. The order in which copy constructors for different variables of class type are called is unspecified.

C/C++: A variable of class type (or array thereof) that appears in a firstprivate clause requires an accessible, unambiguous copy constructor for the class type.

Added-2: However, it is not specified which thread executes the copy constructor for a firstprivate variable. So in theory, it could be done by the master thread of the region for all copies of the variable. In that case, the value of omp_get_thread_num() would be the same in all copies: either 0 or, in the case of nested parallel regions, the thread number in the outer region. So, while this is defined behavior from the OpenMP standpoint, it may result in a data race in your program.
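
To see what a given implementation actually does, here is a minimal, self-contained sketch (an illustration, not part of the original question) in which each thread prints the tid stored in its firstprivate copy; if all copies were made by the master thread, every a.tid will print as 0:

#include <cstdio>
#include <omp.h>

struct A {
  int tid;
  A() : tid(-1) { }
  A(const A&) { tid = omp_get_thread_num(); } // records the thread that makes the copy
};

int main() {
  A a;
#pragma omp parallel firstprivate(a)
  {
    // a.tid holds whatever omp_get_thread_num() returned during the copy
    std::printf("thread %d sees a.tid = %d\n", omp_get_thread_num(), a.tid);
  }
  return 0;
}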

When you iterate through the vector, you should be using it != v.end(), not it < v.end(). However, with !=, your parallel for loop is no longer valid, since OpenMP's canonical loop form only allows the relational operators <, <=, > and >= in the test expression. I would restructure that section of the code as follows:

  #pragma omp parallel for firstprivate(a)
  for (int i = 0 ; i < v.size() ; i++ )
     v[i] += a.tid;
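
One caveat worth adding: v.size() returns an unsigned type, so comparing it against an int index triggers a sign-comparison warning on most compilers. Since OpenMP 3.0 also accepts unsigned integer loop variables, a sketch of a warning-free variant (assuming <cstddef> is included for std::size_t):

  #pragma omp parallel for firstprivate(a)
  for (std::size_t i = 0 ; i < v.size() ; i++ )
     v[i] += a.tid;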
