
Segmentation fault on parallelization with OpenMP

I am executing the following function in parallel using OpenMP:

float myfunc ( Class1 *class1, int *feas, int numfeas, float z, long *k, double cost, long iter, float e )
{
    long i;
    long x;
    double sum;
    double change;   /* undeclared in the original listing */

    change = cost;
    while ( change/cost > 1.0*e ) {
        sum = 0.0;

        intshuffle ( feas, numfeas );

        #pragma omp parallel for private (i,x) firstprivate(z,k) reduction (+:sum)
        for ( i=0; i<iter; i++ ) {
            x = i%numfeas;
            sum += pgain ( feas[x], class1, z, k );
        }
        change = sum;
        cost -= change;
    }
    return ( cost );
}

This returns a "Segmentation fault (core dumped)" error.

When I change the pragma to

#pragma omp parallel for private (i,x) firstprivate(class1,feas,z,k) reduction (+:sum)

I get a "double free or corruption (out): 0xb6a00468" error.

From my limited knowledge of OpenMP, I gather this is caused by incorrect memory accesses through the pointers class1 and feas.

For reference, here is my Class1 code as well:

typedef struct {
    float w;
    float *c;
    long a;  
    float co;  
} Class1;

Please advise on the correct way to parallelize the function above.

Update:

The pgain function:

double pgain ( long x, Class1 *class1, double z, long int *numcenters )
{
    int i;
    int number_of_centers_to_close = 0;

    static double *work_mem;
    static double gl_cost_of_opening_x;
    static int gl_number_of_centers_to_close;

    int stride = *numcenters + 2;
    //make stride a multiple of CACHE_LINE
    int cl = CACHE_LINE/sizeof ( double );
    if ( stride % cl != 0 ) {
        stride = cl * ( stride / cl + 1 );
    }
    int K = stride - 2 ; // K==*numcenters

    //my own cost of opening x
    double cost_of_opening_x = 0;

    work_mem = ( double* ) malloc ( 2 * stride * sizeof ( double ) );
    gl_cost_of_opening_x = 0;
    gl_number_of_centers_to_close = 0;

    /*
     * For each center, we have a *lower* field that indicates
     * how much we will save by closing the center.
     */
    int count = 0;
    for ( int i = 0; i < class1->num; i++ ) {
        if ( is_center[i] ) {
            center_table[i] = count++;
        }
    }
    work_mem[0] = 0;

    //now we finish building the table. clear the working memory.
    memset ( switch_membership, 0, class1->num * sizeof ( bool ) );
    memset ( work_mem, 0, stride*sizeof ( double ) );
    memset ( work_mem+stride, 0, stride*sizeof ( double ) );

    //my *lower* fields
    double* lower = &work_mem[0];
    //global *lower* fields
    double* gl_lower = &work_mem[stride];

    #pragma omp parallel for 
    for ( i = 0; i < class1->num; i++ ) {
        float x_cost = dist ( class1->p[i], class1->p[x], class1->dim ) * class1->p[i].weight;
        float current_cost = class1->p[i].cost;

        if ( x_cost < current_cost ) {

            // point i would save cost just by switching to x
            // (note that i cannot be a median,
            // or else dist(p[i], p[x]) would be 0)

            switch_membership[i] = 1;
            cost_of_opening_x += x_cost - current_cost;

        } else {

            // cost of assigning i to x is at least current assignment cost of i

            // consider the savings that i's **current** median would realize
            // if we reassigned that median and all its members to x;
            // note we've already accounted for the fact that the median
            // would save z by closing; now we have to subtract from the savings
            // the extra cost of reassigning that median and its members
            int assign = class1->p[i].assign;
            lower[center_table[assign]] += current_cost - x_cost;
        }
    }

    // at this time, we can calculate the cost of opening a center
    // at x; if it is negative, we'll go through with opening it

    for ( int i = 0; i < class1->num; i++ ) {
        if ( is_center[i] ) {
            double low = z + work_mem[center_table[i]];
            gl_lower[center_table[i]] = low;
            if ( low > 0 ) {
                // i is a median, and
                // if we were to open x (which we still may not) we'd close i

                // note, we'll ignore the following quantity unless we do open x
                ++number_of_centers_to_close;
                cost_of_opening_x -= low;
            }
        }
    }
    //use the rest of working memory to store the following
    work_mem[K] = number_of_centers_to_close;
    work_mem[K+1] = cost_of_opening_x;

    gl_number_of_centers_to_close = ( int ) work_mem[K];
    gl_cost_of_opening_x = z + work_mem[K+1];

    // Now, check whether opening x would save cost; if so, do it, and
    // otherwise do nothing

    if ( gl_cost_of_opening_x < 0 ) {
        //  we'd save money by opening x; we'll do it
        #pragma omp parallel for
        for ( int i = 0; i < class1->num; i++ ) {
            bool close_center = gl_lower[center_table[class1->p[i].assign]] > 0 ;
            if ( switch_membership[i] || close_center ) {
                // Either i's median (which may be i itself) is closing,
                // or i is closer to x than to its current median
                #pragma omp critical
                {
                class1->p[i].cost = class1->p[i].weight * dist ( class1->p[i], class1->p[x], class1->dim );
                class1->p[i].assign = x;
                }
            }
        }
        for ( int i = 0; i < class1->num; i++ ) {
            if ( is_center[i] && gl_lower[center_table[i]] > 0 ) {
                is_center[i] = false;
            }
        }
        if ( x >= 0 && x < class1->num ) {
            is_center[x] = true;
        }

        *numcenters = *numcenters + 1 - gl_number_of_centers_to_close;
    } else {
        gl_cost_of_opening_x = 0;  // the value we'll return
    }

    free ( work_mem );

    return -gl_cost_of_opening_x;
}

Update 2

Fixed it. The cause was my version of pgain: I had #pragma directives inside pgain as well.

Here is the stack trace:

#0  0xb7d50750 in ?? () from /lib/i386-linux-gnu/libc.so.6
#1  0xb7eaa198 in ?? () from /usr/lib/i386-linux-gnu/libgomp.so.1
#2  0x080493a0 in pgain (x=2982, class1=0xbffff218, z=1461.919921875, 
    numce=0x804d128) at streamcluster-openmp.cpp:232
#3  0x0804aaf6 in myfunc(Class1*, int*, int, float, long*, double, long, float) [clone ._omp_fn.0] () at openmp.cpp:347
#4  0xb7ea9889 in ?? () from /usr/lib/i386-linux-gnu/libgomp.so.1
#5  0xb7cbcd4c in start_thread () from /lib/i386-linux-gnu/libpthread.so.0
#6  0xb7dc9bae in clone () from /lib/i386-linux-gnu/libc.so.6

The problem is likely a race condition in this part of pgain:

// Either i's median (which may be i itself) is closing,
// or i is closer to x than to its current median
class1->p[i].cost = class1->p[i].weight * dist ( class1->p[i], class1->p[x], class1->dim );
class1->p[i].assign = x;

Since class1 is a pointer, the clause firstprivate(class1) privatizes only the bare pointer, not the underlying resource it points to.

The solution depends heavily on the semantics of the program. If the updates to *class1 may be applied in any order and the resource really is meant to be shared, then the following modification:

// Either i's median (which may be i itself) is closing,
// or i is closer to x than to its current median
#pragma omp critical (LOCK_CLASS1)
{
  // Lock this part of the code for thread-safety
  class1->p[i].cost = class1->p[i].weight * dist ( class1->p[i], class1->p[x], class1->dim );
  class1->p[i].assign = x;
}

will fix the code above. Otherwise, a private copy of *class1 should be created for each thread before calling pgain.

Finally, note that the same reasoning applies to any shared resource. For instance, if the float *c pointer inside Class1 points to shared data and you do not synchronize the memory updates, it exhibits the same race.

