Is there a better way to multiply & sum two PyTorch tensors along the first dimension?
I have two PyTorch tensors, a and b, of shape (S, M) and (S, M, H) respectively. M is my batch dimension. I want to multiply & sum the two tensors such that the output is of shape (M, H). That is, I want to compute the sum over s of a[s] * b[s].
For example, for S=2, M=2, H=3:
>>> import torch
>>> S, M, H = 2, 2, 3
>>> a = torch.arange(S*M).view((S,M))
>>> a
tensor([[0, 1],
        [2, 3]])
>>> b = torch.arange(S*M*H).view((S,M,H))
>>> b
tensor([[[ 0,  1,  2],
         [ 3,  4,  5]],

        [[ 6,  7,  8],
         [ 9, 10, 11]]])
'''
DESIRED OUTPUT:
= [[0*[0, 1, 2] + 2*[6, 7, 8]],
   [1*[3, 4, 5] + 3*[9, 10, 11]]]
= [[12, 14, 16],
   [30, 34, 38]]
note: shape is (2, 3) = (M, H)
'''
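For reference, the desired quantity can be spelled out as an explicit loop. This is a plain-Python sketch of the spec above (a readability aid, not an efficient implementation):

```python
# Naive reference: output[m][h] = sum over s of a[s][m] * b[s][m][h]
a = [[0, 1], [2, 3]]                        # shape (S, M) = (2, 2)
b = [[[0, 1, 2], [3, 4, 5]],
     [[6, 7, 8], [9, 10, 11]]]              # shape (S, M, H) = (2, 2, 3)
S, M, H = 2, 2, 3

out = [[sum(a[s][m] * b[s][m][h] for s in range(S)) for h in range(H)]
       for m in range(M)]
print(out)  # [[12, 14, 16], [30, 34, 38]]
```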
I've found one way that sort of works, using torch.tensordot:
>>> output = torch.tensordot(a, b, ([0], [0]))
>>> output
tensor([[[12, 14, 16],
         [18, 20, 22]],

        [[18, 22, 26],
         [30, 34, 38]]])
>>> output.shape
torch.Size([2, 2, 3])  # always (M, M, H)
>>> output = output[torch.arange(M), torch.arange(M), :]
>>> output
tensor([[12, 14, 16],
        [30, 34, 38]])
But as you can see, it performs a lot of unnecessary computation, and I then have to slice out the entries that are relevant to me.
Is there a better way to do this that avoids the unnecessary computation?
This should work:
>>> (torch.unsqueeze(a, 2) * b).sum(axis=0)
tensor([[12, 14, 16],
        [30, 34, 38]])
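An equivalent option (not from the original answer, just a common alternative) is torch.einsum, which states the contraction directly and also avoids materializing the (M, M, H) intermediate:

```python
import torch

S, M, H = 2, 2, 3
a = torch.arange(S * M).view(S, M)          # shape (S, M)
b = torch.arange(S * M * H).view(S, M, H)   # shape (S, M, H)

# 'sm,smh->mh': multiply elementwise over the shared s and m indices,
# then sum out s, leaving a (M, H) result.
out = torch.einsum('sm,smh->mh', a, b)
print(out)
# tensor([[12, 14, 16],
#         [30, 34, 38]])
```

The subscript string makes the intent (sum over s, keep m and h) explicit, which can be easier to audit than a chain of unsqueeze/multiply/sum calls.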