
CUDA and C++ for host and device code

I started integrating CUDA into my C++ applications a few weeks ago and have been doing my own research on combining CUDA and C++. However, I still feel unsure about this topic.

Can somebody help me clarify a few questions, based on the latest Toolkit 3.2 or the 4.0 RC?

  1. The Fermi whitepaper says that Fermi fully supports C++. Does that mean C++ is supported in both host and device code, or just in host code?

  2. What kinds of C++ features can I use in kernel code? I know templates are supported. What about classes or structs?

  3. Can I pass a user-defined class instance (which holds some pointers to device memory) into a kernel and call its member functions in the kernel code? Do classes and structs make any difference here?

Any help is appreciated! Thanks!

  1. Your host code already supports C++, doesn't it? What's new is that the GeForce 400 series (codename Fermi) supports C++ in device code as well.

  2. Classes too, with some restrictions. See Appendix D of the CUDA C Programming Guide for details. (A minimal sketch follows this list.)

  3. You can pass a reference to the class. Check Section D.6.2 of the programming guide. (See the second sketch below.)
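
To illustrate point 2, here is a minimal sketch of a templated kernel and a struct with a __device__ member function. The names Scale and scale_kernel are made up for illustration, and the sketch assumes a Fermi card compiled with nvcc -arch=sm_20:

// Minimal sketch for point 2, assuming a Fermi GPU and nvcc -arch=sm_20.
// The names Scale and scale_kernel are made up for illustration.
#include <cstdio>
#include <cuda_runtime.h>

// A plain struct with a __device__ member function, usable inside kernels.
struct Scale {
    float factor;
    __device__ float apply(float x) const { return factor * x; }
};

// A templated kernel; nvcc instantiates the template at compile time.
template <typename T>
__global__ void scale_kernel(T* data, int n, Scale s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = s.apply(data[i]);   // member-function call in device code
}

int main() {
    const int n = 256;
    float h[n];
    for (int i = 0; i < n; ++i) h[i] = static_cast<float>(i);

    float* d = 0;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    Scale s = { 2.0f };                                       // built on the host
    scale_kernel<float><<<(n + 127) / 128, 128>>>(d, n, s);   // struct passed by value

    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("h[10] = %f\n", h[10]);                            // expect 20.0
    cudaFree(d);
    return 0;
}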

In general, Appendix D lists the supported C++ constructs together with code snippets. It's worth reading.
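
And a minimal sketch for point 3, under the same Fermi/sm_20 assumptions. DeviceArray and fill_kernel are hypothetical names, not CUDA API types. This sketch passes the object into the kernel by value (the embedded pointer still refers to device memory), which is the simplest variant of what is described above; it also avoids virtual functions, since kernel arguments are copied bit-wise:

// Hypothetical wrapper class that owns a device pointer and is passed to a
// kernel by value; only the pointer/size members are copied to the device.
#include <cstdio>
#include <cuda_runtime.h>

class DeviceArray {
public:
    // Host-side helpers: allocate/free device memory, expose the raw pointer.
    void allocate(int n) { size_ = n; cudaMalloc(&data_, n * sizeof(float)); }
    void release()       { cudaFree(data_); }
    float* device_ptr() const { return data_; }

    // Device-side accessors, callable from kernel code.
    __device__ float& at(int i)    { return data_[i]; }
    __device__ int    size() const { return size_; }

private:
    float* data_;   // points to device memory
    int    size_;
};

__global__ void fill_kernel(DeviceArray arr, float value) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < arr.size())
        arr.at(i) = value;            // member-function calls in device code
}

int main() {
    DeviceArray arr;
    arr.allocate(1024);

    fill_kernel<<<(1024 + 255) / 256, 256>>>(arr, 3.14f);

    float first = 0.0f;
    cudaMemcpy(&first, arr.device_ptr(), sizeof(float), cudaMemcpyDeviceToHost);
    printf("first element = %f\n", first);   // expect 3.14

    arr.release();
    return 0;
}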
