
Create mxnet.ndarray.NDArray from pycuda.driver.DeviceAllocation

I am trying to pass the output of some pycuda operation to the input of an mxnet computational graph. I am able to achieve this via numpy conversion with the following code:

import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
import mxnet as mx

batch_shape = (1, 1, 10, 10)
h_input = np.zeros(shape=batch_shape, dtype=np.float32)
# init output with ones to see if contents really changed
h_output = np.ones(shape=batch_shape, dtype=np.float32)
d_input = cuda.mem_alloc(h_input.nbytes)
stream = cuda.Stream()
cuda.memcpy_htod_async(d_input, h_input, stream)

# here some actions with d_input may be performed, e.g. kernel calls
# but for the sake of simplicity we'll just transfer it back to host
cuda.memcpy_dtoh_async(h_output, d_input, stream)
stream.synchronize()
mx_input = mx.nd.array(h_output, ctx=mx.gpu(0))

print('output after pycuda calls: ', h_output)
print('mx_input: ', mx_input)

However, I would like to avoid the overhead of device-to-host and host-to-device memory copies.

I couldn't find a way to construct mxnet.ndarray.NDArray directly from d_input. The closest thing I was able to find is construction of an NDArray from dlpack. But it is not clear how to work with a dlpack object from Python.

Is there a way to achieve NDArray <-> pycuda interoperability without copying memory via the host?

Unfortunately, this is not currently possible.
