C++: std::vector insert with OpenMP
I'm getting a segmentation fault in the following function, which creates a grid of points in parallel with OpenMP using vector insert.
    std::vector<n_point_t> fill_points(size_t Nn1, size_t Nn2) {
        std::vector<n_point_t> grid;
        grid.reserve(Nn1 * Nn2);
        #pragma omp parallel for
        for (size_t i = 0; i < Nn1; i++) {
            std::vector<n_point_t> subgrid = get_subgrid(Nn2);
            grid.insert(grid.begin() + i*Nn2, subgrid.begin(), subgrid.end());
        }
        return grid;
    }
n_point_t is defined as
    union n_point_t {
        double coords[6];
        struct {
            double n1x;
            double n1y;
            double n1z;
            double n2x;
            double n2y;
            double n2z;
        };
    };
and get_subgrid(size_t Nn2) creates a grid of n_point_t of size Nn2.
The insert is definitely responsible for the segmentation fault. I don't understand the problem here. Each thread should be inserting into a different part of grid because of the insert indexing.
I get a segmentation fault even if I protect the insert with #pragma omp critical.
Since you do call reserve() in advance, no reallocation will happen here. But you are passing a dangerous argument, grid.begin() + i*Nn2, to insert. reserve() only sets aside capacity; it does not change the vector's size, which is still 0 at that point, so grid.begin() + i*Nn2 is not guaranteed to be a valid iterator.
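A minimal illustration of the capacity/size distinction (this snippet is only for demonstration, not part of the original code):

    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v;
        v.reserve(10); // capacity is at least 10, but size is still 0
        std::printf("%zu %zu\n", v.size(), v.capacity()); // prints "0 10" (or more)
        // v.begin() + 5 points past v.end(): dereferencing it, or inserting
        // there, is undefined behavior no matter how the access is synchronized
    }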
What if the length of subgrid is less than Nn2? Will you get a non-contiguous vector? Please do not do so. It works in a single thread only because grid.begin() + i*Nn2 happens to be valid there: the iterations run in order, so every insert lands exactly at grid.end(). Under OpenMP the iterations complete out of order, so a thread can compute an iterator far past end(); that is undefined behavior regardless of how the insert itself is synchronized, which is why #pragma omp critical does not save you. In other words, do not try to touch the unused memory of a vector.
One suggested solution, if you must use multiple threads, is to resize() the vector in advance and then assign the elements.
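A minimal sketch of that approach, under the assumption that get_subgrid(Nn2) always returns exactly Nn2 points (it is that assumption that makes the disjoint indexing safe):

    #include <algorithm> // std::copy
    #include <vector>

    std::vector<n_point_t> fill_points(size_t Nn1, size_t Nn2) {
        std::vector<n_point_t> grid;
        grid.resize(Nn1 * Nn2); // size is now Nn1*Nn2, so every index below is in range
        #pragma omp parallel for
        for (size_t i = 0; i < Nn1; i++) {
            std::vector<n_point_t> subgrid = get_subgrid(Nn2);
            // each thread writes only to its own disjoint range [i*Nn2, (i+1)*Nn2),
            // and the vector's size never changes, so no synchronization is needed
            std::copy(subgrid.begin(), subgrid.end(), grid.begin() + i*Nn2);
        }
        return grid;
    }

Because resize() gives the vector its final size up front, every element written in the loop is real, owned memory, and std::copy replaces insert, so the container itself is never mutated inside the parallel region.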