
Calling sum reduction kernel from another kernel

I'm trying to sum-reduce an array from within a kernel, without sending the data back to the CPU host, but I'm not getting the right results. Here is the sum kernel I use (slightly modified from the one provided by NVIDIA):

template <class T, unsigned int blockSize, bool nIsPow2>
__device__ void
reduce(T *g_idata, T *g_odata, unsigned int n)
{
    __shared__ T sdata[blockSize];

    // perform first level of reduction,
    // reading from global memory, writing to shared memory
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*blockSize*2 + threadIdx.x;
    unsigned int gridSize = blockSize*2*gridDim.x;

    T mySum = 0;

    // we reduce multiple elements per thread.  The number is determined by the 
    // number of active thread blocks (via gridDim).  More blocks will result
    // in a larger gridSize and therefore fewer elements per thread
    while (i < n)
    {         
        mySum += g_idata[i];
        // ensure we don't read out of bounds -- this is optimized away for powerOf2 sized arrays
        if (nIsPow2 || i + blockSize < n) 
            mySum += g_idata[i+blockSize];  
        i += gridSize;
    } 

    // each thread puts its local sum into shared memory 
    sdata[tid] = mySum;
    __syncthreads();


    // do reduction in shared mem
    if (blockSize >= 512) { if (tid < 256) { sdata[tid] = mySum = mySum + sdata[tid + 256]; } __syncthreads(); }
    if (blockSize >= 256) { if (tid < 128) { sdata[tid] = mySum = mySum + sdata[tid + 128]; } __syncthreads(); }
    if (blockSize >= 128) { if (tid <  64) { sdata[tid] = mySum = mySum + sdata[tid +  64]; } __syncthreads(); }

#ifndef __DEVICE_EMULATION__
    if (tid < 32)
#endif
    {
        // now that we are using warp-synchronous programming (below)
        // we need to declare our shared memory volatile so that the compiler
        // doesn't reorder stores to it and induce incorrect behavior.
        volatile T* smem = sdata;
        if (blockSize >=  64) { smem[tid] = mySum = mySum + smem[tid + 32]; EMUSYNC; }
        if (blockSize >=  32) { smem[tid] = mySum = mySum + smem[tid + 16]; EMUSYNC; }
        if (blockSize >=  16) { smem[tid] = mySum = mySum + smem[tid +  8]; EMUSYNC; }
        if (blockSize >=   8) { smem[tid] = mySum = mySum + smem[tid +  4]; EMUSYNC; }
        if (blockSize >=   4) { smem[tid] = mySum = mySum + smem[tid +  2]; EMUSYNC; }
        if (blockSize >=   2) { smem[tid] = mySum = mySum + smem[tid +  1]; EMUSYNC; }
    }

    // write result for this block to global mem 
    if (tid == 0) 
        g_odata[blockIdx.x] = sdata[0];
}

template <unsigned int blockSize>
__global__ void compute(   int *values, int *temp, int *temp2, int* results, unsigned int N, unsigned int M )
{   
    int tdx = threadIdx.x;
    int idx = blockIdx.x * blockDim.x + tdx;

    int val = 0;
    int cpt = 0;

    if( idx < N )
    {
        for( int i = 0; i < M; ++i )
        {

            for( int j = i+1; j < M; ++j )
            {

                val = values[i*N+idx];
                __syncthreads();

                reduce<int, blockSize, false>( temp, temp2, N );
                __syncthreads();

                if( tdx == 0 )
                {

                    val = 0;

                    for( int k=0; k < gridDim.x; ++k )
                    {
                        val += temp2[k];
                        temp2[k] = 0;
                    }


                    results[cpt] = val;
                }

                __syncthreads();
                ++cpt;
            }
        }

    }
}

Am I missing something? Thanks!

Keep in mind that you cannot synchronise blocks within a grid. Block 1 might already have executed the reduce function and written its value to temp2[1], while block 2 might still be waiting to run, so temp2[2] still contains garbage.

If you really want to, you can enforce block synchronisation, but it is hacky, cumbersome and not really efficient. Consider some alternatives instead:

  • You can assign one array to a single block to perform the reduction, and have different blocks perform independent reductions on independent arrays.
  • You can keep the reduction as a separate kernel call (as in the original CUDA examples), but you do not have to transfer the resulting data back to the host. Instead, launch another kernel that processes the output of the previous one; the contents of global memory are preserved between kernel calls (see the sketch after this list).
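
As a minimal sketch of that second alternative: the kernel names (reducePartial, consumeTotal), the launch parameters and the simple shared-memory tree reduction below are illustrative assumptions, not the asker's exact code. The point is that the per-block partial sums stay in global memory and are consumed by a follow-up launch, with no cudaMemcpy back to the host in between.

#include <cuda_runtime.h>

// Block-level partial reduction: each block writes one partial sum of its
// slice of g_in into g_partial[blockIdx.x].
template <unsigned int blockSize>
__global__ void reducePartial(const int *g_in, int *g_partial, unsigned int n)
{
    __shared__ int sdata[blockSize];
    unsigned int tid    = threadIdx.x;
    unsigned int i      = blockIdx.x * blockSize + tid;
    unsigned int stride = blockSize * gridDim.x;

    // grid-stride loop over the input
    int mySum = 0;
    while (i < n) { mySum += g_in[i]; i += stride; }

    sdata[tid] = mySum;
    __syncthreads();

    // tree reduction in shared memory
    for (unsigned int s = blockSize / 2; s > 0; s >>= 1) {
        if (tid < s) sdata[tid] += sdata[tid + s];
        __syncthreads();
    }
    if (tid == 0) g_partial[blockIdx.x] = sdata[0];
}

// Second kernel: consumes the partial sums left in global memory by the
// previous launch -- no copy back to the host in between.
__global__ void consumeTotal(const int *g_partial, int *g_result, unsigned int numBlocks)
{
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        int total = 0;
        for (unsigned int k = 0; k < numBlocks; ++k) total += g_partial[k];
        *g_result = total;   // use the total here, or feed it to further kernels
    }
}

// Host-side usage (illustrative): two back-to-back launches on the same stream.
void sumOnDevice(const int *d_in, unsigned int n)
{
    constexpr unsigned int threads = 256;
    const unsigned int blocks = 64;                  // illustrative grid size
    int *d_partial = nullptr, *d_result = nullptr;
    cudaMalloc(&d_partial, blocks * sizeof(int));
    cudaMalloc(&d_result, sizeof(int));

    reducePartial<threads><<<blocks, threads>>>(d_in, d_partial, n);
    consumeTotal<<<1, 1>>>(d_partial, d_result, blocks);  // global memory persists between launches

    // d_result now holds the total on the device; in real code it would be
    // consumed by further kernels (or copied back) before cleanup.
    cudaFree(d_partial);
    cudaFree(d_result);
}

Because both launches are issued on the same (default) stream, the second kernel only starts after the first has finished. That ordering provides the cross-block synchronisation point that is missing inside a single kernel.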
