Suppose I have a list of tensors of the same size that could be concatenated along a dimension, say 0. Does torch.cat, torch.stack, or any NumPy command perform the concatenation in place? Also, if I want to convert a NumPy ndarray to a tensor and do the following, do two copies exist in memory at any given time? I am dealing with a dataset so massive that only one copy of it can fit in memory.
# initially data is a huge ndarray
data = torch.Tensor(data)
From your comment, assuming that

B = A + a + b + ... + z

where + represents concatenation along a compatible axis, B and A are huge, and a, b, etc., are comparatively small: I would allocate a huge array for B beforehand using np.empty, and fill that array in directly with your data as needed.
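A minimal sketch of this preallocate-and-fill approach (the array names A, a, b and the shapes are hypothetical, taken only for illustration):

```python
import numpy as np

# Hypothetical inputs: A is the huge array, the others are small chunks.
A = np.random.rand(1_000_000, 8)
chunks = [np.random.rand(1_000, 8) for _ in range(3)]

# Allocate B once with np.empty, then copy each piece into its slice.
# Unlike np.concatenate([A, *chunks]), this never builds an extra
# intermediate copy of the concatenated result.
total_rows = A.shape[0] + sum(c.shape[0] for c in chunks)
B = np.empty((total_rows, A.shape[1]), dtype=A.dtype)

offset = 0
for part in [A] + chunks:
    B[offset:offset + part.shape[0]] = part  # fill this slice in place
    offset += part.shape[0]
```

If A itself is only ever needed inside B, you can go further and write your data loaders straight into the corresponding slices of B, so the small chunks never exist as separate arrays at all.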