
Can two processes share the same GPU memory? (CUDA)

In the CPU world one can do this via memory mapping. Can something similar be done for the GPU?

If two processes could share the same CUDA context, I think it would be trivial - just pass the GPU memory pointer around. Is it possible to share the same CUDA context between two processes?

Another possibility I could think of is to map device memory to memory-mapped host memory. Since it's memory mapped, it could be shared between two processes. Does this make sense / is it possible, and is there any overhead?

CUDA MPS effectively allows CUDA activity emanating from 2 or more processes to share the same context on the GPU. However, this won't provide what you are asking for:

Can two processes share the same GPU memory?

One method to achieve this is via the CUDA IPC (interprocess communication) API.

This will allow you to share an allocated device memory region (i.e., a memory region allocated via cudaMalloc) between multiple processes. This answer contains additional resources to learn about CUDA IPC.
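
Here is a minimal sketch of that flow, assuming Linux, with the `cudaIpcMemHandle_t` passed through a pipe created before `fork()` (any OS IPC channel would do; error handling is abbreviated, and the fork happens before either process touches CUDA so each gets its own context):

```c
// Sketch: export a cudaMalloc'd region from the parent process and
// import it in the child via CUDA IPC. Compile as a .cu file with nvcc.
#include <cuda_runtime.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define CHECK(call) do {                                   \
    cudaError_t err = (call);                              \
    if (err != cudaSuccess) {                              \
        fprintf(stderr, "%s\n", cudaGetErrorString(err));  \
        return 1;                                          \
    }                                                      \
} while (0)

int main(void) {
    int fd[2];
    if (pipe(fd) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                       /* child: importer */
        cudaIpcMemHandle_t handle;
        read(fd[0], &handle, sizeof(handle));

        void *d_ptr = NULL;
        CHECK(cudaIpcOpenMemHandle(&d_ptr, handle,
                                   cudaIpcMemLazyEnablePeerAccess));

        int value = 0;
        CHECK(cudaMemcpy(&value, d_ptr, sizeof(int),
                         cudaMemcpyDeviceToHost));
        printf("child read %d from shared device memory\n", value);

        CHECK(cudaIpcCloseMemHandle(d_ptr));
        return 0;
    }

    /* parent: exporter */
    int *d_buf = NULL;
    int value = 42;
    CHECK(cudaMalloc(&d_buf, sizeof(int)));
    CHECK(cudaMemcpy(d_buf, &value, sizeof(int), cudaMemcpyHostToDevice));

    cudaIpcMemHandle_t handle;
    CHECK(cudaIpcGetMemHandle(&handle, d_buf));
    write(fd[1], &handle, sizeof(handle));

    waitpid(pid, NULL, 0);   /* keep the allocation alive until the child is done */
    CHECK(cudaFree(d_buf));
    return 0;
}
```

The key calls are cudaIpcGetMemHandle on the exporting side and cudaIpcOpenMemHandle / cudaIpcCloseMemHandle on the importing side; the exporting process must keep the allocation alive for as long as the importer uses it.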

However, according to my testing, this does not enable sharing of host pinned memory regions (e.g., a region allocated via cudaHostAlloc) between multiple processes. The memory region itself can be shared using ordinary IPC mechanisms available for your particular OS, but it cannot be made to appear as "pinned" memory in 2 or more processes (according to my testing).
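
For completeness, here is a sketch of the "ordinary IPC mechanisms" case on Linux, using POSIX shared memory (the name /cuda_host_buf is arbitrary; per the testing above, the region behaves as ordinary pageable host memory in each process, not as pinned memory):

```c
// Sketch: share a plain host buffer between two processes via POSIX
// shared memory. Run the same code in both processes; each can then
// cudaMemcpy to/from the region as ordinary pageable memory.
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const size_t size = 1 << 20;

    /* create-or-open the named shared region */
    int fd = shm_open("/cuda_host_buf", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) != 0) { perror("ftruncate"); return 1; }

    void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* both processes now see the same bytes */
    strcpy((char *)buf, "visible to the other process");
    printf("%s\n", (char *)buf);

    munmap(buf, size);
    close(fd);
    return 0;
}
```

On older glibc versions you may need to link with -lrt for shm_open.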
