
Use RAM when GPU memory is not enough

Is there any way in CUDA to fall back to host RAM once GPU (NVIDIA) device memory is completely used up?

What I have thought of so far is:

  1. Find a way to check whether all the thread blocks are in use
  2. Move the work to RAM

But obviously this would need a lot of synchronization.

Thank you!

If the memory on the GPU is not enough, you can use host memory quite easily. What you are looking for is zero-copy memory, allocated with cudaHostAlloc. Here is the example from the Best Practices Guide:

float *a_h, *a_map;
...
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
if (!prop.canMapHostMemory)   // not all devices can map host memory
    exit(0);
cudaSetDeviceFlags(cudaDeviceMapHost);          // must be set before context creation
cudaHostAlloc(&a_h, nBytes, cudaHostAllocMapped);
cudaHostGetDevicePointer(&a_map, a_h, 0);       // device-side alias of the host buffer
kernel<<<gridSize, blockSize>>>(a_map);

However, performance will be limited by the PCIe bandwidth (on the order of 6 GB/s for a PCIe 2.0 x16 link), since every access from the kernel goes over the bus.

Here is the documentation in the best-practice guide: Zero-Copy
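To make the fragment above concrete, here is a minimal, self-contained sketch of the same zero-copy pattern. The scale kernel and the buffer size n are my own hypothetical additions, not part of the guide's example; it needs a CUDA-capable device and nvcc to build:

```cpp
// Minimal zero-copy sketch: the kernel works directly on pinned host RAM.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical kernel for illustration: doubles each element in place.
__global__ void scale(float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;                  // 1M floats, hypothetical size
    const size_t nBytes = n * sizeof(float);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    if (!prop.canMapHostMemory) {           // bail out if mapping is unsupported
        fprintf(stderr, "Device cannot map host memory\n");
        return EXIT_FAILURE;
    }

    cudaSetDeviceFlags(cudaDeviceMapHost);  // enable mapped pinned allocations

    float *a_h, *a_map;
    cudaHostAlloc(&a_h, nBytes, cudaHostAllocMapped);  // pinned host buffer
    cudaHostGetDevicePointer(&a_map, a_h, 0);          // device-visible alias

    for (int i = 0; i < n; ++i) a_h[i] = 1.0f;

    scale<<<(n + 255) / 256, 256>>>(a_map, n);
    cudaDeviceSynchronize();                // kernel reads/writes host RAM over PCIe

    printf("a_h[0] = %f\n", a_h[0]);        // the result is visible on the host
    cudaFreeHost(a_h);
    return 0;
}
```

Note that there is no cudaMemcpy anywhere: the kernel's loads and stores cross PCIe directly, which is exactly why throughput is capped by the bus rather than by device memory bandwidth.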
