Hey, I'm trying to write a kernel to essentially do the following in C
float sum = 0.0;
for(int i = 0; i < N; i++){
    sum += valueArray[i] * valueArray[i];
}
sum = sum / N;
At the moment I have this inside my kernel, but it is not giving correct values.
int i0 = blockIdx.x * blockDim.x + threadIdx.x;
for(int i = i0; i < N; i += blockDim.x * gridDim.x){
    *d_sum += d_valueArray[i] * d_valueArray[i];
}
*d_sum = __fdividef(*d_sum, N);
The code used to call the kernel is:
kernelName<<<64, 128>>>(N, d_valueArray, d_sum);
cudaMemcpy(&sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);
I think that each thread is calculating a partial sum, but the final divide statement doesn't take into account the accumulated values from all of the threads. Is every thread producing its own final value for d_sum?
Does anyone know how I could go about doing this efficiently? Maybe by using shared memory between threads? I'm very new to GPU programming. Cheers
You're updating d_sum from multiple threads at once, so the unsynchronized read-modify-write operations race with each other and most of the partial results are lost.
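If you just want correct results before worrying about performance, you can serialize those updates with an atomic operation. A minimal sketch, assuming a device of compute capability 2.0 or later (atomicAdd on float requires sm_20+), that d_sum is zeroed before launch, and with a kernel name of my own choosing:

__global__ void sumOfSquares(int N, const float *d_valueArray, float *d_sum)
{
    int i0 = blockIdx.x * blockDim.x + threadIdx.x;
    float partial = 0.0f;

    // Each thread accumulates its own partial sum in a register...
    for(int i = i0; i < N; i += blockDim.x * gridDim.x)
        partial += d_valueArray[i] * d_valueArray[i];

    // ...so only one global update per thread has to be serialized.
    // (Assumes sm_20+, where atomicAdd on float is available.)
    atomicAdd(d_sum, partial);
}

Do the divide by N once on the host after the cudaMemcpy, rather than in every thread.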
See the following SDK sample:
http://developer.download.nvidia.com/compute/cuda/sdk/website/samples.html
Here's the code from that sample. Note how it's a two-step process: each thread first accumulates a private partial sum into shared memory, then the block performs a tree reduction, calling __syncthreads between steps, before thread 0 writes out the final result.
#define IMUL(a, b) __mul24(a, b)  // 24-bit integer multiply, as defined in the SDK sample's headers
#define ACCUM_N 1024

__global__ void scalarProdGPU(
    float *d_C,
    float *d_A,
    float *d_B,
    int vectorN,
    int elementN
){
    // Accumulators cache
    __shared__ float accumResult[ACCUM_N];

    ////////////////////////////////////////////////////////////////////////////
    // Cycle through every pair of vectors,
    // taking into account that vector counts can be different
    // from the total number of thread blocks
    ////////////////////////////////////////////////////////////////////////////
    for(int vec = blockIdx.x; vec < vectorN; vec += gridDim.x){
        int vectorBase = IMUL(elementN, vec);
        int vectorEnd  = vectorBase + elementN;

        ////////////////////////////////////////////////////////////////////////
        // Each accumulator cycles through vectors with
        // stride equal to the total number of accumulators ACCUM_N.
        // At this stage ACCUM_N is only preferred to be a multiple of warp size
        // to meet memory coalescing alignment constraints.
        ////////////////////////////////////////////////////////////////////////
        for(int iAccum = threadIdx.x; iAccum < ACCUM_N; iAccum += blockDim.x){
            float sum = 0;
            for(int pos = vectorBase + iAccum; pos < vectorEnd; pos += ACCUM_N)
                sum += d_A[pos] * d_B[pos];
            accumResult[iAccum] = sum;
        }

        ////////////////////////////////////////////////////////////////////////
        // Perform tree-like reduction of accumulators' results.
        // ACCUM_N has to be a power of two at this stage
        ////////////////////////////////////////////////////////////////////////
        for(int stride = ACCUM_N / 2; stride > 0; stride >>= 1){
            __syncthreads();
            for(int iAccum = threadIdx.x; iAccum < stride; iAccum += blockDim.x)
                accumResult[iAccum] += accumResult[stride + iAccum];
        }
        if(threadIdx.x == 0) d_C[vec] = accumResult[0];
    }
}
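To adapt that pattern to your sum of squares, each block can reduce into shared memory and write one partial result per block; you then combine the gridDim.x partials on the host, with a second kernel, or with atomicAdd as above. A minimal sketch, assuming blockDim.x is a power of two and that d_partial holds gridDim.x floats; the names here are mine, not from the sample:

__global__ void sumOfSquaresBlocks(int N, const float *d_valueArray, float *d_partial)
{
    extern __shared__ float cache[];  // one float per thread, sized at launch

    int i0 = blockIdx.x * blockDim.x + threadIdx.x;
    float partial = 0.0f;

    // Grid-stride loop: each thread builds a private partial sum.
    for(int i = i0; i < N; i += blockDim.x * gridDim.x)
        partial += d_valueArray[i] * d_valueArray[i];
    cache[threadIdx.x] = partial;

    // Tree reduction within the block; blockDim.x must be a power of two.
    for(int stride = blockDim.x / 2; stride > 0; stride >>= 1){
        __syncthreads();
        if(threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
    }

    // One result per block; combine the partials afterwards and divide by N.
    if(threadIdx.x == 0)
        d_partial[blockIdx.x] = cache[0];
}

Launched as sumOfSquaresBlocks<<<64, 128, 128 * sizeof(float)>>>(N, d_valueArray, d_partial); you then copy the 64 partials back, sum them, and divide by N on the host.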