
PyTorch: Perform add/sub/mul operations using 1-D tensor and multi-channel (3-D) image tensor

Note: I am looking for the fastest/most optimal way of doing this, or of improving on it, as my constraint is time.

Using PyTorch, I have an image as a 3D tensor, say of dimension 64 x 400 x 400, where 64 refers to the channels and 400 x 400 are the image dimensions. Along with this, I have a 1D tensor of length 64, all with different values, where the intention is to use one value per channel. I want to apply each value of the 1D tensor to the entire 400 x 400 block of the corresponding channel. So, for example, when I add 3d_tensor + 1d_tensor, I want 1d_tensor[i] to be added to all 400 x 400 = 160,000 values in 3d_tensor[i], with i ranging from 0 to 63.

What I previously did:
I tried doing it directly, using only the operators:

output_add = 1d_tensor + 3d_tensor

This returned an error saying that the dimensions of 3d_tensor (400) and 1d_tensor (64) are incompatible.

So my current approach is to use a for loop:

for a, b in zip(3d_tensor, 1d_tensor):
    a += b  # in-place: adds the channel's scalar to the whole 400 x 400 slice

However, at one stage I have four different 1D tensors to use at once, in either addition, subtraction or multiplication, so is this for-loop method the most efficient? I'm also planning on doing it 20+ times per image, so speed is key. I also tried extending the 1D tensor to dimensions of 64 x 400 x 400 so it could be used directly, but I could not get this right using tensor.repeat().
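For reference, a minimal sketch of the tensor.repeat() expansion attempted above (the name weights is illustrative, not from the original post); repeat() copies the data into a full 64 x 400 x 400 tensor, while expand() produces the same shape as a non-copying view:

import torch

weights = torch.randn(64)  # the 64-element 1-D tensor

# repeat() tiles the data, allocating a full 64 x 400 x 400 tensor:
expanded = weights.view(64, 1, 1).repeat(1, 400, 400)
print(expanded.shape)  # torch.Size([64, 400, 400])

# expand() gives the same shape as a broadcast view, without copying:
expanded_view = weights.view(64, 1, 1).expand(64, 400, 400)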

You should add some dimensions to the 1D array: convert it from (64) to (64 x 1 x 1):

output_add = 1d_tensor[:, None, None] + 3d_tensor

With this None-type indexing you can add a dimension anywhere. The [:, None, None] will add two additional dimensions to the 1D array after the existing dimension.
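A minimal runnable sketch of this approach (img and weights are placeholder names for the 3D and 1D tensors above):

import torch

img = torch.randn(64, 400, 400)  # 3-D image tensor: channels x height x width
weights = torch.randn(64)        # 1-D tensor: one value per channel

# Indexing with None inserts size-1 dimensions, giving shape (64, 1, 1),
# which then broadcasts over the 400 x 400 spatial block of each channel:
output_add = weights[:, None, None] + img
output_sub = img - weights[:, None, None]
output_mul = img * weights[:, None, None]

print(output_add.shape)  # torch.Size([64, 400, 400])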

Or you can use view for the same result:

output_add = 1d_tensor.view(-1, 1, 1) + 3d_tensor

The reshaped 1D array now has shape (64, 1, 1), the same number of dimensions as the 3D array, so PyTorch can use broadcasting.
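As a quick sanity check (same placeholder names as in the sketch above), the broadcasted result matches the per-channel for loop from the question:

import torch

img = torch.randn(64, 400, 400)
weights = torch.randn(64)

looped = img.clone()
for a, b in zip(looped, weights):
    a += b  # in-place add of the channel scalar

broadcast = img + weights.view(-1, 1, 1)
print(torch.allclose(looped, broadcast))  # True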

Here is a good explanation of broadcasting: How does pytorch broadcasting work?
