
CUDA, Qt Creator and Mac

I'm having a difficult time getting CUDA integrated into Qt Creator.

I'm sure the problem is that I don't have the right information in my .pro file. I've posted my current .pro file, the .cu file (DT_GPU.cu), and then the errors below.

I've tried many combinations of .pro files pulled from Linux and Windows examples, but nothing has worked. Also, I've never seen a Mac/CUDA .pro file, so this could be a useful resource for anyone in the future who wants to get these three tools working together.

Thanks in advance for any help.

The .pro file:

CUDA_SOURCES += ../../Source/DT_GPU/DT_GPU.cu

CUDA_DIR = "/Developer/NVIDIA/CUDA-7.5"


SYSTEM_TYPE = 64            # '32' or '64', depending on your system
CUDA_ARCH = sm_21           # Type of CUDA architecture, for example 'compute_10', 'compute_11', 'sm_10'
NVCC_OPTIONS = --use_fast_math


# include paths
INCLUDEPATH += $$CUDA_DIR/include

# library directories
QMAKE_LIBDIR += $$CUDA_DIR/lib/

CUDA_OBJECTS_DIR = ./


# Add the necessary libraries
CUDA_LIBS = -lcublas_device \
    -lcublas_static \
    -lcudadevrt \
    -lcudart_static \
    -lcufft_static \
    -lcufftw_static \
    -lculibos \
    -lcurand_static \
    -lcusolver_static \
    -lcusparse_static \
    -lnppc_static \
    -lnppi_static \
    -lnpps_static

# The following makes sure all path names (which often include spaces) are put between quotation marks
CUDA_INC = $$join(INCLUDEPATH,'" -I"','-I"','"')
LIBS += $$join(CUDA_LIBS,'.so ', '', '.so')
#LIBS += $$CUDA_LIBS

# Configuration of the Cuda compiler
CONFIG(debug, debug|release) {
    # Debug mode
    cuda_d.input = CUDA_SOURCES
    cuda_d.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
    cuda_d.commands = $$CUDA_DIR/bin/nvcc -D_DEBUG $$NVCC_OPTIONS $$CUDA_INC $$NVCC_LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
    cuda_d.dependency_type = TYPE_C
    QMAKE_EXTRA_COMPILERS += cuda_d
}
else {
    # Release mode
    cuda.input = CUDA_SOURCES
    cuda.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
    cuda.commands = $$CUDA_DIR/bin/nvcc $$NVCC_OPTIONS $$CUDA_INC $$NVCC_LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
    cuda.dependency_type = TYPE_C
    QMAKE_EXTRA_COMPILERS += cuda
}

DT_GPU.cu

#include <cuda.h>
#include <cuda_runtime.h>
#include <device_launch_parameters.h>

__global__ void zero_GPU(double *l_p_array_gpu)
{
    int i = threadIdx.x;
    printf("  %i: Hello World!\n", i);
    l_p_array_gpu[i] = 0.;
}

void zero(double *l_p_array, int a_numElements)
{
    double *l_p_array_gpu;

    int size = a_numElements * int(sizeof(double));

    cudaMalloc((void**) &l_p_array_gpu, size);

    cudaMemcpy(l_p_array_gpu, l_p_array, size, cudaMemcpyHostToDevice);

    zero_GPU<<<size,1>>>(l_p_array_gpu);

    cudaMemcpy(l_p_array, l_p_array_gpu, size, cudaMemcpyDeviceToHost);

    cudaFree(l_p_array_gpu);
}

Warnings:

Makefile:848: warning: overriding commands for target `DT_GPU_cuda.o'
Makefile:792: warning: ignoring old commands for target `DT_GPU_cuda.o'
Makefile:848: warning: overriding commands for target `DT_GPU_cuda.o'
Makefile:792: warning: ignoring old commands for target `DT_GPU_cuda.o'

Errors:

In file included from ../SimplexSphereSource.cpp:8:
../../../Source/DT_GPU/DT_GPU.cu:75:19: error: expected expression
        zero_GPU<<<size,1>>>(l_p_array_gpu);
                  ^
../../../Source/DT_GPU/DT_GPU.cu:75:28: error: expected expression
        zero_GPU<<<size,1>>>(l_p_array_gpu);
                           ^
2 errors generated.
make: *** [SimplexSphereSource.o] Error 1
16:47:18: The process "/usr/bin/make" exited with code 2.
Error while building/deploying project SimplexSphereSource (kit: Desktop Qt 5.4.0 clang 64bit)
When executing step "Make"

I managed to get your example running after a few small corrections to your .pro file. If you or anyone else is still interested in a larger C++/CUDA/Qt example for Mac and Linux, check out the answer I posted a few months ago. Your particular case (or at least what you've provided) doesn't need all the extra Qt frameworks and GUI setup, so the .pro file stays very simple.

If you haven't already, you should make sure you have the latest CUDA Mac drivers and check that some basic CUDA samples compile and run (see the quick sanity check after the version list below). I'm currently using:

  • OS X version 10.10.5
  • Qt 5.5.0
  • NVCC v7.5.17
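
If you want a quick way to rule out the toolkit itself before qmake gets involved, a minimal standalone kernel compiled directly with nvcc is usually enough. This is just my own sketch (the file name sanity.cu is made up), not something from the original answer:

// sanity.cu -- hypothetical standalone check; build and run with:
//   /Developer/NVIDIA/CUDA-7.5/bin/nvcc sanity.cu -o sanity && ./sanity
#include <cstdio>
#include <cuda_runtime.h>

__global__ void ping()
{
    // device-side printf needs sm_20 or newer, which any CUDA 7.5 setup has
    printf("device thread %d says hello\n", (int)threadIdx.x);
}

int main()
{
    ping<<<1, 4>>>();                            // one block of four threads
    cudaError_t err = cudaDeviceSynchronize();   // wait for the kernel and collect any error
    printf("cudaDeviceSynchronize: %s\n", cudaGetErrorString(err));
    return err == cudaSuccess ? 0 : 1;
}

If this prints the hello lines and "no error", the driver and toolkit are fine and any remaining problems are in the Qt/qmake setup.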

I added a main function to the DP_GPU.cu file you provided and successfully ran the program using your .pro file with a few changes:

#CUDA_SOURCES += ../../Source/DT_GPU/DT_GPU.cu
CUDA_SOURCES += DT_GPU.cu # <-- same dir for this small example

CUDA_DIR = "/Developer/NVIDIA/CUDA-7.5"


SYSTEM_TYPE = 64            # '32' or '64', depending on your system
CUDA_ARCH = sm_21           # (tested with sm_30 on my comp) Type of CUDA architecture, for example 'compute_10', 'compute_11', 'sm_10'
NVCC_OPTIONS = --use_fast_math


# include paths
INCLUDEPATH += $$CUDA_DIR/include

# library directories
QMAKE_LIBDIR += $$CUDA_DIR/lib/

CUDA_OBJECTS_DIR = ./


# Add the necessary libraries
CUDA_LIBS = -lcudart # <-- changed this

# The following makes sure all path names (which often include spaces) are put between quotation marks
CUDA_INC = $$join(INCLUDEPATH,'" -I"','-I"','"')
#LIBS += $$join(CUDA_LIBS,'.so ', '', '.so') <-- didn't need this
LIBS += $$CUDA_LIBS # <-- needed this


# SPECIFY THE R PATH FOR NVCC (this caused me a lot of trouble before)
QMAKE_LFLAGS += -Wl,-rpath,$$CUDA_DIR/lib # <-- added this
NVCCFLAGS = -Xlinker -rpath,$$CUDA_DIR/lib # <-- and this

# Configuration of the Cuda compiler
CONFIG(debug, debug|release) {
    # Debug mode
    cuda_d.input = CUDA_SOURCES
    cuda_d.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
    cuda_d.commands = $$CUDA_DIR/bin/nvcc -D_DEBUG $$NVCC_OPTIONS $$CUDA_INC $$NVCC_LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
    cuda_d.dependency_type = TYPE_C
    QMAKE_EXTRA_COMPILERS += cuda_d
}
else {
    # Release mode
    cuda.input = CUDA_SOURCES
    cuda.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
    cuda.commands = $$CUDA_DIR/bin/nvcc $$NVCC_OPTIONS $$CUDA_INC $$NVCC_LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
    cuda.dependency_type = TYPE_C
    QMAKE_EXTRA_COMPILERS += cuda
}
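
If you want to double-check at run time that the rpath setting really lets the binary locate libcudart, a tiny query like the following works. Again, this is my own addition rather than part of the original answer; call it from main() or build it as its own small .cu file:

// cuda_runtime_check.cu -- hypothetical check that the CUDA runtime is usable
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int runtimeVersion = 0, deviceCount = 0;
    cudaRuntimeGetVersion(&runtimeVersion);   // reports e.g. 7050 for CUDA 7.5
    cudaGetDeviceCount(&deviceCount);         // 0 devices usually points to a driver problem
    printf("CUDA runtime %d, %d device(s) found\n", runtimeVersion, deviceCount);
    return 0;
}

If the rpath is wrong, the program typically fails to start at all with a dyld "image not found" error for libcudart, so even reaching the printf is a good sign.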

The DP_GPU.cu file with a main function and some small changes:

#include <cuda.h>
#include <cuda_runtime.h>
#include <device_launch_parameters.h>
#include <stdio.h> // <-- added for 'printf'


__global__ void zero_GPU(double *l_p_array_gpu)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; // <-- in case you use more blocks
    printf("  %i: Hello World!\n", i);
    l_p_array_gpu[i] = 0.;
}


void zero(double *l_p_array, int a_numElements)
{
    double *l_p_array_gpu;

    int size = a_numElements * int(sizeof(double));

    cudaMalloc((void**) &l_p_array_gpu, size);

    cudaMemcpy(l_p_array_gpu, l_p_array, size, cudaMemcpyHostToDevice);

    // use one block with a_numElements threads
    zero_GPU<<<1, a_numElements>>>(l_p_array_gpu);

    cudaMemcpy(l_p_array, l_p_array_gpu, size, cudaMemcpyDeviceToHost);

    cudaFree(l_p_array_gpu);
}

// added a main function to run the program
int main(void)
{
    // host variables
    const int a_numElements = 5;
    double l_p_array[a_numElements];

    // run cuda function
    zero(l_p_array, a_numElements);

    // Print l_p_array
    printf("l_p_array: { ");
    for (int i = 0; i < a_numElements; ++i)
    {
        printf("%.2f ", l_p_array[i]);
    }
    printf("}\n");

    return 0;
}

Output:

  0: Hello World!
  1: Hello World!
  2: Hello World!
  3: Hello World!
  4: Hello World!
l_p_array: { 0.00 0.00 0.00 0.00 0.00 }

Once you get this working, be sure to take your time and check out some basic CUDA syntax and examples before going any deeper; otherwise debugging will be a real hassle. Since I'm here, though, I figured I'd also let you know that the CUDA kernel launch syntax is
kernel_function<<<block_size, thread_size>>>(args)
Your kernel call zero_GPU<<<size,1>>>(l_p_array_gpu) actually creates a large number of blocks with a single thread each, when you really want the opposite.
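
To make the difference concrete, here are the two launch configurations side by side (illustrative only, using the variables from the code above; note that size in your version is a byte count, so it also launches far more blocks than there are elements):

// launch configuration is <<<number_of_blocks, threads_per_block>>>
zero_GPU<<<size, 1>>>(l_p_array_gpu);            // original: 'size' blocks of 1 thread each
zero_GPU<<<1, a_numElements>>>(l_p_array_gpu);   // intended: 1 block of 'a_numElements' threads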

The following functions, taken from the CUDA samples, help determine the number of threads and blocks needed for a given number of elements:

typedef unsigned int uint;

inline uint iDivUp(uint a, uint b)
{
    return (a % b != 0) ? (a / b + 1) : (a / b);
}

// compute grid and thread block size for a given number of elements
inline void computeGridSize(uint n, uint blockSize, uint &numBlocks, uint &numThreads)
{
    numThreads = min(blockSize, n);
    numBlocks = iDivUp(n, numThreads);
}

You can add these at the top of your .cu file or in a helper header and use them to call your kernel functions correctly. If you want to use them in the DP_GPU.cu file, you would just add:

// desired thread count (may change if there aren't enough elements)
dim3 threads(64);
// default block count (will also change based on number of elements)
dim3 blocks(1);
computeGridSize(a_numElements, threads.x, blocks.x, threads.x);

// run kernel
zero_GPU<<<blocks, threads>>>(l_p_array_gpu);
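
One caveat worth adding (my note, not part of the original answer): because iDivUp rounds the block count up, the grid can contain more threads than there are elements, so the kernel should guard against out-of-range indices. That means passing the element count in and adding a bounds check, for example:

__global__ void zero_GPU(double *l_p_array_gpu, int a_numElements)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= a_numElements)   // extra threads from the rounded-up grid do nothing
        return;
    printf("  %i: Hello World!\n", i);
    l_p_array_gpu[i] = 0.;
}

// and the call becomes:
zero_GPU<<<blocks, threads>>>(l_p_array_gpu, a_numElements);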

Anyway, I got a bit sidetracked there, but I hope this helps! Cheers!
