
Convert PyTorch CUDA tensor to NumPy array

How do I convert a torch.Tensor (on GPU) to a numpy.ndarray (on CPU)?

I believe you also have to use .detach(). I had to convert my tensor to a NumPy array on Colab, which uses CUDA and a GPU. I did it like this:

embedding = learn.model.u_weight

embedding_list = list(range(0, 64382))

input = torch.cuda.LongTensor(embedding_list)
tensor_array = embedding(input)
# the line below returns a numpy array
tensor_array.cpu().detach().numpy()
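The snippet above depends on a fastai learner (learn.model.u_weight), so it won't run on its own. Here is a self-contained sketch of the same idea using a hypothetical nn.Embedding layer as a stand-in, falling back to CPU when CUDA is unavailable:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for learn.model.u_weight: a small embedding layer.
device = "cuda" if torch.cuda.is_available() else "cpu"
embedding = nn.Embedding(num_embeddings=10, embedding_dim=4).to(device)

# Index tensor on the same device (replaces torch.cuda.LongTensor(...)).
indices = torch.arange(10, device=device)
tensor_array = embedding(indices)  # tracked by autograd, possibly on GPU

# Detach from the graph, move to CPU, then convert to a NumPy array.
array = tensor_array.detach().cpu().numpy()
print(type(array), array.shape)
```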

If the tensor is on the GPU (CUDA), as you say:

You can use self.tensor.weight.data.cpu().numpy(). It will copy the tensor to the CPU and convert it to a NumPy array.

If the tensor is already on the CPU, you can do self.tensor.weight.data.numpy(), as you correctly figured out. But you can also call self.tensor.weight.data.cpu().numpy() in that case: since the tensor is already on the CPU, the .cpu() operation has no effect, so this works as a device-agnostic way to convert a tensor to a NumPy array.
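The no-op behavior of .cpu() on a CPU tensor can be verified directly; per the PyTorch docs, .cpu() returns the original object when the tensor is already in CPU memory, and .numpy() then shares that memory:

```python
import torch

cpu_tensor = torch.ones(3)

# On a CPU tensor, .cpu() returns the same tensor object (no copy),
# so chaining .cpu().numpy() is safe regardless of device.
assert cpu_tensor.cpu() is cpu_tensor

arr = cpu_tensor.cpu().numpy()

# The NumPy array shares memory with the CPU tensor:
arr[0] = 5.0
print(cpu_tensor[0].item())  # 5.0
```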

some_tensor.detach().cpu().numpy()

  • .detach() detaches the tensor from the computation graph, so it no longer requires grad (.numpy() refuses tensors that require grad).
  • .cpu() moves the data to CPU.
  • .numpy() converts the torch.Tensor to a np.ndarray .
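The order matters: calling .numpy() directly on a tensor that requires grad raises a RuntimeError, which is why .detach() comes first. A minimal demonstration:

```python
import torch

t = torch.ones(2, 2, requires_grad=True)

try:
    t.numpy()  # fails: tensor is still attached to the autograd graph
except RuntimeError as e:
    print("numpy() refused:", e)

# Detach first, then move to CPU (a no-op here), then convert.
arr = t.detach().cpu().numpy()
print(arr.sum())  # 4.0
```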
