
CUDA - compilation error in kernel call

Hi, I want to port Stam's fluid code from the CPU to a GPU version. Understanding the whole code is not really necessary, so I will only show fragments. For anyone interested, everything (source code and description) can be found here: http://www.dgp.toronto.edu/people/stam/reality/Research/pub.html => "Real-Time Fluid Dynamics for Games".

This is probably an easy task, but I have not used C++ in a long time and I am only just learning CUDA, so it is hard for me. I have been trying for a long time with no result.

CPU version (works):

#define IX(i,j) ((i)+(N+2)*(j))

...

void lin_solve(int N, int b, float * x, float * x0, float a, float c)
{

    for (int k = 0; k<20; k++) 
    {
        for (int i = 1; i <= N; i++) 
        {
            for (int j = 1; j <= N; j++) 
            {
            x[IX(i, j)] = (x0[IX(i, j)] + a*(x[IX(i - 1, j)] + x[IX(i + 1, j)] + x[IX(i, j - 1)] + x[IX(i, j + 1)])) / c;
            }
        }


            set_bnd(N, b, x);
    }
}

My GPU version (does not compile):

#define IX(i,j) ((i)+(N+2)*(j))

__global__
void GPU_lin_solve(int *N, int *b, float * x, float * x0, float *a, float *c)
{
    int i = threadIdx.x * blockIdx.x + threadIdx.x;
    int j = threadIdx.y * blockIdx.y + threadIdx.y;

    if (i < N && j < N)
    x[IX(i, j)] = (x0[IX(i, j)] + a*(x[IX(i - 1, j)] + x[IX(i + 1, j)] + x[IX(i, j - 1)] + x[IX(i, j + 1)])) / c;
}

void lin_solve(int N, int b, float * x, float * x0, float a, float c)
{

    for (int k = 0; k<20; k++) 
    {

        int *d_N, *d_b;
        float **d_x, **d_x0;
        float *d_a, *d_c, *d_xx, *d_xx0;

        *d_xx = **d_x;
        *d_xx0 = **d_x0;

        cudaMalloc(&d_N, sizeof(int));
        cudaMalloc(&d_b, sizeof(int));
        cudaMalloc(&d_xx, sizeof(float));
        cudaMalloc(&d_xx0, sizeof(float));
        cudaMalloc(&d_a, sizeof(float));
        cudaMalloc(&d_c, sizeof(float));

        cudaMemcpy(d_N, &N, sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, &b, sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(d_xx, &*x, sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_xx0, &*x0, sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_a, &a, sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_c, &c, sizeof(float), cudaMemcpyHostToDevice);

        GPU_lin_solve << <1, 1 >> > (d_N, d_b, d_xx, d_xx0, d_a, d_c);

        // the compiler reports the problem on the line above:
        // Error 23 error : argument of type "int *" is incompatible with parameter of type "int"

        cudaMemcpy(&*x, d_xx, sizeof(float), cudaMemcpyDeviceToHost); 


        cudaFree(d_N);
        cudaFree(d_b);
        cudaFree(d_xx);
        cudaFree(d_xx0);
        cudaFree(d_a);
        cudaFree(d_c);


            set_bnd(N, b, x);
    }
}

The compiler reports the error:

Error 23 error : argument of type "int *" is incompatible with parameter of type "int"

at the kernel launch:

GPU_lin_solve << <1, 1 >> > (d_N, d_b, d_xx, d_xx0, d_a, d_c);

What am I doing wrong?

if (i < N && j < N)
    x[IX(i, j)] = (x0[IX(i, j)] + a*(x[IX(i - 1, j)] + x[IX(i + 1, j)] + x[IX(i, j - 1)] + x[IX(i, j + 1)])) / c;
}

The `N` in your condition and in your `IX` macro is a pointer, but you are treating it as an integer. Try dereferencing it.
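A minimal sketch of that fix, keeping the question's pointer parameters (dereferencing each scalar once at the top of the kernel; the local names `n`, `av`, `cv` are my own):

```cuda
#define IX(i, j) ((i) + (n + 2) * (j))   // uses the local int n, not the pointer N

__global__
void GPU_lin_solve(int *N, int *b, float *x, float *x0, float *a, float *c)
{
    int n = *N;                // dereference the scalars once
    float av = *a, cv = *c;

    // global thread index: blockIdx * blockDim + threadIdx
    // (+1 to skip the boundary cells, matching the CPU loops from 1..N)
    int i = blockIdx.x * blockDim.x + threadIdx.x + 1;
    int j = blockIdx.y * blockDim.y + threadIdx.y + 1;

    if (i <= n && j <= n)
        x[IX(i, j)] = (x0[IX(i, j)]
            + av * (x[IX(i - 1, j)] + x[IX(i + 1, j)]
                  + x[IX(i, j - 1)] + x[IX(i, j + 1)])) / cv;
}
```

A simpler alternative is to pass the scalars by value (`int N, int b, float a, float c`), which removes the need for `d_N`, `d_b`, `d_a`, `d_c` entirely. Note also that the host code as posted has further problems beyond this compile error: `*d_xx = **d_x;` dereferences uninitialized pointers, the `cudaMalloc`/`cudaMemcpy` calls move only `sizeof(float)` bytes instead of the full `(N+2)*(N+2)` grid, and a `<<<1, 1>>>` launch runs a single thread rather than covering the grid.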
