I started integrating CUDA into my C++ applications a few weeks ago and have been doing my own research on the topic, but I still feel uncomfortable with it.
Can somebody help me clarify some questions, based on the latest Toolkit 3.2 or the 4.0 RC?
Fermi's white paper says that Fermi fully supports C++. Does that mean it supports C++ in both host and device code, or just in host code?
What kinds of C++ features can I use in kernel code? I know templates are supported. What about classes and structs?
Can I pass a user-defined class instance (which holds some pointers to device memory) into a kernel and call its member functions in the kernel code? Do classes and structs make any difference?
Any help is appreciated! Thanks!
Your host compiler already supports C++, doesn't it? What's new is that the GeForce 400 series (codename Fermi) supports C++ code on the device.
Classes are supported too, with some restrictions. See Appendix D of the programming guide for details.
You can pass a reference to the class. Check Section D.6.2 of the programming guide.
In general, Appendix D lists the supported C++ constructs along with sample code. It's worth reading.
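To make this concrete, here is a minimal sketch of the pattern the question asks about: a user-defined struct (hypothetical, named `DeviceArray` here) that holds a device pointer, is passed to a kernel by value, and has `__device__`-qualified member functions that the kernel calls. The struct name, member names, and kernel are my own illustration, not from the programming guide.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical example: a small struct holding a pointer to device memory.
// The instance itself is passed to the kernel by value; only the pointer
// inside it must refer to device memory.
struct DeviceArray {
    float* data;  // points to memory allocated with cudaMalloc
    int    n;

    // __device__ qualifier lets these member functions run in kernel code
    __device__ float get(int i) const      { return data[i]; }
    __device__ void  set(int i, float v)   { data[i] = v; }
};

__global__ void scale(DeviceArray arr, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < arr.n)
        arr.set(i, arr.get(i) * factor);  // member calls inside the kernel
}

int main() {
    const int n = 8;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = float(i);

    DeviceArray arr;
    arr.n = n;
    cudaMalloc(&arr.data, n * sizeof(float));
    cudaMemcpy(arr.data, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // The struct is copied to the device as a kernel argument.
    scale<<<1, n>>>(arr, 2.0f);

    cudaMemcpy(host, arr.data, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%g ", host[i]);
    printf("\n");

    cudaFree(arr.data);
    return 0;
}
```

Note that the struct is copied by value into the kernel, so the member functions operate on the device-side copy; this works here because the only state that matters (the `data` pointer) already refers to device memory. Whether you declare it as `class` or `struct` makes no difference beyond the usual C++ default-access rules.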