Numpy elementwise product of 3d array
I have two 3d arrays A and B with shape (N, 2, 2) that I would like to multiply element-wise along the N-axis, with a matrix product on each of the 2x2 matrices. With a loop implementation, it looks like
C[i] = dot(A[i], B[i])
Is there a way I could do this without using a loop? I've looked into tensordot, but haven't been able to get it to work. I think I might want something like tensordot(a, b, axes=([1,2], [2,1])), but that's giving me an NxN matrix.
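For concreteness, the loop version described in the question can be sketched like this (a minimal example with made-up data; N = 3 is arbitrary):

```python
import numpy as np

# Two stacks of N = 3 matrices of shape 2x2 (hypothetical example data)
N = 3
A = np.arange(N * 4, dtype=float).reshape(N, 2, 2)
B = np.ones((N, 2, 2))

# Loop implementation: one 2x2 matrix product per slice along the first axis
C = np.zeros((N, 2, 2))
for i in range(N):
    C[i] = np.dot(A[i], B[i])
```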
It seems you are doing matrix multiplications for each slice along the first axis. For that, you can use np.einsum like so -

np.einsum('ijk,ikl->ijl', A, B)
We can also use np.matmul -

np.matmul(A, B)
On Python 3.x, this matmul operation simplifies with the @ operator -

A @ B
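As a quick sanity check, all three vectorized forms can be compared against the explicit loop (a small sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 2, 2))
B = rng.random((5, 2, 2))

# Three equivalent vectorized batched matrix products
out_einsum = np.einsum('ijk,ikl->ijl', A, B)
out_matmul = np.matmul(A, B)
out_at = A @ B

# Reference: explicit per-slice loop
ref = np.stack([A[i].dot(B[i]) for i in range(A.shape[0])])

print(np.allclose(out_einsum, ref),
      np.allclose(out_matmul, ref),
      np.allclose(out_at, ref))  # True True True
```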
Approaches -
def einsum_based(A, B):
    return np.einsum('ijk,ikl->ijl', A, B)

def matmul_based(A, B):
    return np.matmul(A, B)

def forloop(A, B):
    N = A.shape[0]
    C = np.zeros((N, 2, 2))
    for i in range(N):
        C[i] = np.dot(A[i], B[i])
    return C
Timings -
In [44]: N = 10000
...: A = np.random.rand(N,2,2)
...: B = np.random.rand(N,2,2)
In [45]: %timeit einsum_based(A,B)
...: %timeit matmul_based(A,B)
...: %timeit forloop(A,B)
100 loops, best of 3: 3.08 ms per loop
100 loops, best of 3: 3.04 ms per loop
100 loops, best of 3: 10.9 ms per loop
You just need to perform the operation on the first dimension of your tensors, which is labeled by 0:

c = tensordot(a, b, axes=(0,0))
This will work as you wish. Also, you don't need a list of axes, because it's just along one dimension that you're performing the operation. With axes=([1,2],[2,1]) you're cross-multiplying the 2nd and 3rd dimensions. If you write it in index notation (Einstein summation convention), this corresponds to c[i,j] = a[i,k,l]*b[j,l,k], so you're contracting the indices you want to keep.
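To see why the axes=([1,2],[2,1]) attempt produces an NxN result, here is a minimal sketch (with an arbitrary N = 4):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.random((4, 2, 2))
b = rng.random((4, 2, 2))

# Contracting axes 1 and 2 of a against axes 2 and 1 of b sums over BOTH
# matrix dimensions, leaving only the two N-axes: an (N, N) result,
# c[i, j] = sum over k, l of a[i, k, l] * b[j, l, k]
c = np.tensordot(a, b, axes=([1, 2], [2, 1]))
print(c.shape)  # (4, 4)
```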
EDIT: Ok, the problem is that the tensor product of two 3d objects is a 6d object. Since contractions involve pairs of indices, there's no way you'll get a 3d object by a tensordot operation. The trick is to split your calculation in two: first you do the tensordot on the index to do the matrix operation, and then you take a tensor diagonal in order to reduce your 4d object to 3d. In one command:
d = np.diagonal(np.tensordot(a, b, axes=(2,1)), axis1=0, axis2=2)
In tensor notation, d[i,j,k] = c[i,j,i,k] = a[i,j,l]*b[i,l,k].
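A small sketch verifying this two-step trick against the direct batched product. One caveat worth noting: np.diagonal moves the diagonal axis to the end, so the result comes out with shape (2, 2, N) and needs a moveaxis to recover the (N, 2, 2) layout:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.random((4, 2, 2))
b = rng.random((4, 2, 2))

# Step 1: contract a's last axis with b's middle axis.
# c[i, j, m, k] = sum over l of a[i, j, l] * b[m, l, k]; shape (N, 2, N, 2)
c = np.tensordot(a, b, axes=(2, 1))

# Step 2: take the diagonal over the two N-axes (axes 0 and 2).
# np.diagonal appends the diagonal axis last, giving shape (2, 2, N).
d = np.diagonal(c, axis1=0, axis2=2)

# Restore the (N, 2, 2) layout to compare with the batched matrix product
d = np.moveaxis(d, -1, 0)
print(np.allclose(d, a @ b))  # True
```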