
Compiling/adding cuda code to existing project (CMake)

I am trying to port parts of an existing project to GPUs via CUDA code. I understand CMake has options (find_cuda...) to deal with .cu files separately, yet I am still trying to figure out how this ecosystem can be used in the context of existing projects.

My question is the following. Let's say I have an existing C++ project with a CMake config file (CMakeLists). What is the current practice for elegantly (if possible) including CUDA kernels? Can the CMakeLists be constructed in such a way that .cu files are compiled only if a GPU is present?

My current idea is to create a separate folder where only CUDA-related code exists, and then compile this as a static library. Is that the way to do it?

Having the CUDA files in separate folders is my recommended way, but it is not required. The basic principle is that you collect all .cu files in a CMake variable (let's call it CUDA_SRC) and all .cpp files in a different variable (call it SRC). Then you compile both sets of files and link them together. The variable CUDA_FOUND provided by find_package(CUDA) can be used to determine whether CUDA is installed on your system. Note that this only checks for the CUDA toolkit on the build machine; whether a GPU is actually present can only be determined at run time (e.g. via cudaGetDeviceCount). Using a static library for the CUDA files is not required, but I'll show you both ways here.
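To let the .cpp sources adapt to whether CUDA was compiled in, one common pattern (a sketch; the USE_CUDA macro name is my own choice, not part of the answer) is to export a compile definition from the CUDA_FOUND branch:

```cmake
# Top-level CMakeLists.txt, after find_package(CUDA QUIET).
if(CUDA_FOUND)
    # Host code can now select the GPU path with '#ifdef USE_CUDA'
    # and fall back to a CPU implementation otherwise.
    add_definitions(-DUSE_CUDA)
endif()
```

In the host code, guard the calls into the CUDA parts with #ifdef USE_CUDA so the same sources still build on machines without the toolkit.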

In your top-level CMake file you want something like this to find CUDA and set some nvcc flags:

find_package(CUDA QUIET)
if(CUDA_FOUND)
    include_directories(${CUDA_INCLUDE_DIRS})
    SET(ALL_CUDA_LIBS ${CUDA_LIBRARIES} ${CUDA_cusparse_LIBRARY} ${CUDA_cublas_LIBRARY})
    SET(LIBS ${LIBS} ${ALL_CUDA_LIBS})
    message(STATUS "CUDA_LIBRARIES: ${CUDA_INCLUDE_DIRS} ${ALL_CUDA_LIBS}")
    set(CUDA_PROPAGATE_HOST_FLAGS ON)
    set(CUDA_SEPARABLE_COMPILATION OFF)
    list( APPEND CUDA_NVCC_FLAGS -gencode=arch=compute_30,code=compute_30 )
    list( APPEND CUDA_NVCC_FLAGS -gencode=arch=compute_52,code=sm_52 )
endif()
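The two -gencode entries above embed PTX for compute capability 3.0 and native binary code for compute capability 5.2. For a newer card you append further entries in the same way; for example (an illustration, not part of the original answer), for a compute capability 6.1 (Pascal) GPU:

```cmake
# Also embed native binary code for Pascal (sm_61) GPUs.
list( APPEND CUDA_NVCC_FLAGS -gencode=arch=compute_61,code=sm_61 )
```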

With a static CUDA library

if(CUDA_FOUND)
     #collect CUDA files
     FILE(GLOB_RECURSE CUDA_SRC  *.cu)
     #build static library
     CUDA_ADD_LIBRARY(my_cuda_lib ${CUDA_SRC} STATIC)
     SET(LIBS ${LIBS} my_cuda_lib) #my_cuda_lib is a target name, not a variable
endif()

#collect cpp files
FILE(GLOB_RECURSE SRC  *.cpp)

#compile .cpp files and link it to all libraries
add_executable(${PROG_NAME} ${SRC})
target_link_libraries(${PROG_NAME} ${LIBS} )

Without a static CUDA library

FILE(GLOB_RECURSE SRC  *.cpp)

if(CUDA_FOUND)
    #compile cuda files and add the compiled object files to your normal source files
    FILE(GLOB_RECURSE CUDA_SRC  *.cu)
    cuda_compile(cuda_objs ${CUDA_SRC})
    SET(SRC ${SRC} ${cuda_objs})
endif()

#compile .cpp files and link it to all libraries
add_executable(${PROG_NAME} ${SRC})
target_link_libraries(${PROG_NAME} ${LIBS} )
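For completeness: since CMake 3.8, CUDA is a first-class language, and the FindCUDA module used above has been deprecated since CMake 3.10. On a newer CMake, a rough sketch of the same conditional setup (assuming the same PROG_NAME, SRC and CUDA_SRC layout as above) could look like this:

```cmake
cmake_minimum_required(VERSION 3.8)
project(my_project LANGUAGES CXX)

# check_language() probes for a working CUDA compiler without
# failing the configure step when none is present.
include(CheckLanguage)
check_language(CUDA)
if(CMAKE_CUDA_COMPILER)
    enable_language(CUDA)
endif()

file(GLOB_RECURSE SRC *.cpp)
add_executable(${PROG_NAME} ${SRC})

if(CMAKE_CUDA_COMPILER)
    # Once the CUDA language is enabled, .cu files added to a target
    # are compiled by the CUDA compiler automatically; no
    # cuda_add_library()/cuda_compile() calls are needed.
    file(GLOB_RECURSE CUDA_SRC *.cu)
    target_sources(${PROG_NAME} PRIVATE ${CUDA_SRC})
endif()
```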
