
About the compact operation in CUDPP

The following kernel function is the compact operation from CUDPP, a CUDA library (http://gpgpu.org/developer/cudpp).

My question is: why does the developer repeat the writing part 8 times? And why does it improve performance?

And why does one thread process 8 elements? Why not have each thread process one element?

template <class T, bool isBackward>
__global__ void compactData(T                  *d_out, 
                        size_t             *d_numValidElements,
                        const unsigned int *d_indices, // Exclusive Sum-Scan Result
                        const unsigned int *d_isValid,
                        const T            *d_in,
                        unsigned int       numElements)
{
   if (threadIdx.x == 0)
   {
       if (isBackward)
           d_numValidElements[0] = d_isValid[0] + d_indices[0];
       else
           d_numValidElements[0] = d_isValid[numElements-1] + d_indices[numElements-1];
   }

   // The index of the first element (in a set of eight) that this
   // thread is going to process. We left-shift blockDim.x by 3
   // (multiply by 8) since each block of threads processes eight
   // times the number of threads in that block.
   unsigned int iGlobal = blockIdx.x * (blockDim.x << 3) + threadIdx.x;

   // Repeat the following 8 (SCAN_ELTS_PER_THREAD) times:
   // 1. Check whether the flag in d_isValid is set for this element
   // 2. If not, do nothing
   // 3. If so, copy the element from d_in to d_out at the
   //    position specified by d_indices
   if (iGlobal < numElements && d_isValid[iGlobal] > 0) {
       d_out[d_indices[iGlobal]] = d_in[iGlobal];
   }
   iGlobal += blockDim.x;  
   if (iGlobal < numElements && d_isValid[iGlobal] > 0) {
       d_out[d_indices[iGlobal]] = d_in[iGlobal];       
   }
   iGlobal += blockDim.x;
   if (iGlobal < numElements && d_isValid[iGlobal] > 0) {
       d_out[d_indices[iGlobal]] = d_in[iGlobal];
   }
   iGlobal += blockDim.x;
   if (iGlobal < numElements && d_isValid[iGlobal] > 0) {
       d_out[d_indices[iGlobal]] = d_in[iGlobal];
   }
   iGlobal += blockDim.x;
   if (iGlobal < numElements && d_isValid[iGlobal] > 0) {
       d_out[d_indices[iGlobal]] = d_in[iGlobal];
   }
   iGlobal += blockDim.x;
   if (iGlobal < numElements && d_isValid[iGlobal] > 0) {
       d_out[d_indices[iGlobal]] = d_in[iGlobal];
   }
   iGlobal += blockDim.x;
   if (iGlobal < numElements && d_isValid[iGlobal] > 0) {
       d_out[d_indices[iGlobal]] = d_in[iGlobal];
   }
   iGlobal += blockDim.x;
   if (iGlobal < numElements && d_isValid[iGlobal] > 0) {
       d_out[d_indices[iGlobal]] = d_in[iGlobal];
   }
}

My question is: why does the developer repeat the writing part 8 times? And why does it improve performance?

As @torrential_coding stated, loop unrolling can help performance, particularly in a case like this, where the loop is very tight (it has little logic in it). However, the coder should have used CUDA's support for automatic loop unrolling instead of doing it manually.
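
For reference, here is a minimal sketch (not CUDPP's actual source) of how the same write phase could be expressed with #pragma unroll, letting the compiler generate the eight copies. The helper name compactWrites and the SCAN_ELTS_PER_THREAD definition are assumptions for illustration:

#define SCAN_ELTS_PER_THREAD 8

template <class T>
__device__ void compactWrites(T                  *d_out,
                              const unsigned int *d_indices,
                              const unsigned int *d_isValid,
                              const T            *d_in,
                              unsigned int        numElements)
{
    // Index of the first of the eight elements this thread handles.
    unsigned int iGlobal = blockIdx.x * (blockDim.x << 3) + threadIdx.x;

    #pragma unroll
    for (int i = 0; i < SCAN_ELTS_PER_THREAD; ++i)
    {
        if (iGlobal < numElements && d_isValid[iGlobal] > 0)
            d_out[d_indices[iGlobal]] = d_in[iGlobal];
        iGlobal += blockDim.x;
    }
}

Because the trip count is a compile-time constant, the compiler can unroll this into essentially the same straight-line code as the hand-written version, without the maintenance cost of eight copy-pasted blocks.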

And why does one thread process 8 elements? Why not have each thread process one element?

There might be a small performance gain from computing the full iGlobal index and performing the threadIdx.x == 0 check only once per 8 elements instead of once per element, which is what would happen if each thread processed only one element.
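
To see what that amortization buys, compare a hypothetical one-element-per-thread variant (not part of CUDPP), which would have to be launched with eight times as many threads:

template <class T>
__global__ void compactDataOnePerThread(T                  *d_out,
                                        const unsigned int *d_indices,
                                        const unsigned int *d_isValid,
                                        const T            *d_in,
                                        unsigned int        numElements)
{
    // One index computation per element written, instead of one full
    // computation plus seven cheap additions per eight elements.
    unsigned int iGlobal = blockIdx.x * blockDim.x + threadIdx.x;

    if (iGlobal < numElements && d_isValid[iGlobal] > 0)
        d_out[d_indices[iGlobal]] = d_in[iGlobal];
}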
