
CUDA kernel and printf strange behaviour

I wrote a simple kernel, trying to manipulate the elements of a one-dimensional array:

    #include "stdio.h"

__global__ void Loop(double *X, int CellsNum, int VarNum,const double constant1)
{

int idx = threadIdx.x+blockDim.x*blockIdx.x;
int i = (idx+1)*VarNum ;
double exp1,exp2,exp3,exp4 ;

if(idx<CellsNum-2) {

exp1=double(0.5)*(X[i+6+VarNum]+X[i+6])+X[i+10] ;
exp2=double(0.5)*(X[i+8+VarNum]+X[i+8]) ;

if(i==0) {
printf("%e %e",exp1,exp2) ;
}

exp3=X[i+11]-constant1*(exp1*exp2)/X[i+5] ;

exp4=constant1*(X[i+9]*exp1-X[i+9-VarNum]*exp2)/X[i+5] ;

X[i+12]=exp3+exp4;
}
}

extern "C" void cudacalc_(double *a, int* N1, int* N2, double* N3)
{
int Cells_Num = *N1;
int Var_Num = *N2;
double constant1 = *N3;

Loop<<<1,Cells_Num>>>(a,Cells_Num,Var_Num,constant1);

}

But it doesn't work if I comment out this piece of code:

    if (i == 0) {
        printf("%e %e", exp1, exp2);
    }

even though the variable i is always greater than zero. When I comment out those lines, the code produces NaN in the X array. I'm compiling with the -arch sm_20 flag and running on a Tesla GPU. Can somebody help me with this issue?

This kernel has the opportunity for a race condition: the kernel code both reads from X and writes to X with no synchronization or other protection, so whether a thread sees the old or the newly written value of an element depends on the order in which the threads happen to run.

The simplest way to fix this is probably to have the output statement write to a different array:

    Xo[i+12] = exp3 + exp4;
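
A minimal sketch of that approach, assuming `a` is a device pointer (as the original launch implies) and that the wrapper allocates a second array `Xo` of the same size; the size computation below is an assumption, since the question does not state how large the array actually is:

    // Sketch only: reads come from X, the result goes to a separate array Xo,
    // so no thread can overwrite data another thread still needs to read.
    __global__ void Loop(const double *X, double *Xo, int CellsNum, int VarNum,
                         const double constant1)
    {
        int idx = threadIdx.x + blockDim.x * blockIdx.x;
        int i = (idx + 1) * VarNum;

        if (idx < CellsNum - 2) {
            double exp1 = 0.5 * (X[i+6+VarNum] + X[i+6]) + X[i+10];
            double exp2 = 0.5 * (X[i+8+VarNum] + X[i+8]);
            double exp3 = X[i+11] - constant1 * (exp1 * exp2) / X[i+5];
            double exp4 = constant1 * (X[i+9] * exp1 - X[i+9-VarNum] * exp2) / X[i+5];

            Xo[i+12] = exp3 + exp4;   // the only write goes to Xo
        }
    }

    extern "C" void cudacalc_(double *a, int *N1, int *N2, double *N3)
    {
        int Cells_Num = *N1;
        int Var_Num = *N2;
        double constant1 = *N3;

        // Assumed total size of the array behind 'a'; adjust to the real allocation.
        size_t bytes = (size_t)(Cells_Num + 1) * Var_Num * sizeof(double);

        double *Xo;
        cudaMalloc(&Xo, bytes);
        cudaMemcpy(Xo, a, bytes, cudaMemcpyDeviceToDevice);   // start from the current state

        Loop<<<1, Cells_Num>>>(a, Xo, Cells_Num, Var_Num, constant1);

        cudaMemcpy(a, Xo, bytes, cudaMemcpyDeviceToDevice);   // publish the updated values
        cudaFree(Xo);
    }

With the reads confined to X and the writes confined to Xo, no thread can read an element that another thread has already updated within the same launch.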

cuda-memcheck can help check for race conditions within a kernel. Use cuda-memcheck --help to find the specific racecheck options.
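
For example (the application name ./myapp here is just a placeholder):

    cuda-memcheck --tool racecheck ./myapp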
