
PyTorch in-place concatenation and conversion to tensor from NumPy

Suppose I have a list of tensors of the same size that could be concatenated along a dimension, say 0. Does any of the commands torch.cat or torch.stack, or any NumPy command, do the concatenation in place? Also, if I want to convert a NumPy ndarray to a tensor and do the following, do two copies exist in memory at any given time? I am dealing with a dataset so massive that only one copy of it can fit in memory at a time.

import torch

# initially data is a huge ndarray
data = torch.Tensor(data)
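(A side note on the conversion question: torch.Tensor(data) copies the ndarray's contents into a new float32 tensor, so both copies briefly coexist. torch.from_numpy instead wraps the ndarray's existing buffer. A minimal sketch; the array shape and dtype here are illustrative:)

import numpy as np
import torch

data = np.zeros((4, 3), dtype=np.float32)  # stand-in for the huge ndarray

# torch.from_numpy shares the existing buffer: no second copy is made,
# and writes to the tensor are visible in the ndarray (and vice versa).
t = torch.from_numpy(data)
t[0, 0] = 1.0
assert data[0, 0] == 1.0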

From your comment, assuming that:

  1. you want to do something in the spirit of: B = A + a + b + ... + z, where + represents concatenation along a compatible axis, B and A are huge, and a, b, etc. are comparatively small, and
  2. you can predict a reasonable upper bound for the size of B,

I would allocate a sufficiently large array for B beforehand using np.empty, and fill in this array directly with your data as needed, as in the sketch below.
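A minimal sketch of that approach, assuming float32 data and a hypothetical load_chunks generator that yields the pieces a, b, ..., z as 2-D arrays:

import numpy as np

n_max = 1_000_000                      # assumed upper bound on B's final row count
width = 64                             # assumed row width

# Allocate the whole buffer once; np.empty leaves the memory uninitialized.
B = np.empty((n_max, width), dtype=np.float32)

filled = 0
for chunk in load_chunks():            # hypothetical source of (k, width) arrays
    k = chunk.shape[0]
    B[filled:filled + k] = chunk       # write in place, no intermediate concatenation
    filled += k

B = B[:filled]                         # a view of the filled region, not a copy

If a tensor is ultimately needed, torch.from_numpy(B) then wraps this same buffer without another copy.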
