
CUDA program kernel code in device memory space

Is there any way to find out how much device (GPU) memory the kernel code itself occupies? If I have 512 MB of device memory, how can I know how much is available for allocation? Could the Visual Profiler show such information?

Program code uses up very little memory. The rest of the CUDA context (local memory, constant memory, printf buffers, heap and stack) uses a lot more. The CUDA runtime API includes the cudaMemGetInfo call, which returns the amount of free memory available to your code. Note that because of fragmentation and page-size constraints, you won't be able to allocate every last free byte. The best strategy is to start with the reported maximum and repeatedly attempt successively smaller allocations until one succeeds.
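As a rough sketch of the strategy above (assuming the standard CUDA runtime API, compiled with nvcc; the 1/16 back-off step is an arbitrary choice for illustration), querying free memory and then probing for the largest single allocation might look like:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;

    // Query how much device memory is currently free vs. the card's total.
    cudaError_t err = cudaMemGetInfo(&freeBytes, &totalBytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Free: %zu MB of %zu MB total\n", freeBytes >> 20, totalBytes >> 20);

    // Fragmentation and allocation granularity mean the full free amount
    // usually cannot be obtained in one block. Start at the reported free
    // size and shrink the request until cudaMalloc succeeds.
    void *buf = nullptr;
    size_t request = freeBytes;
    while (request > 0 && cudaMalloc(&buf, request) != cudaSuccess) {
        cudaGetLastError();       // clear the allocation error before retrying
        request -= request / 16;  // back off and try a smaller request
    }
    if (buf != nullptr) {
        printf("Largest single allocation obtained: %zu MB\n", request >> 20);
        cudaFree(buf);
    }
    return 0;
}
```

The same back-off loop works with any step size; a smaller step finds a tighter bound at the cost of more failed cudaMalloc calls.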

You can see a fuller explanation of device memory consumption in my answer to an earlier question along similar lines.
