Multiplying 3d matrix and 3d matrix
I'm trying to multiply a 3D matrix by a 3D matrix; my matrix is as follows:
Z = np.array([
[[0,0,0.25],[0.25,0.5,0.75],[0,0,0.25],[0.75,1.0,1.0],[0.75,1.0,1.0]],
[[0,0,0.25],[0,0,0.25],[0.5,0.75,1.0],[0,0,0.25],[0,0,0.25]],
[[0,0,0.25],[0,0,0.25],[0,0,0.25],[0,0.25,0.5],[0,0,0.25]],
[[0,0,0.25],[0.25,0.5,0.75],[0,0,0.25],[0,0,0.25],[0,0,0.25]],
[[0,0,0.25],[0,0,0.25],[0,0,0.25],[0,0,0.25],[0,0,0.25]]
])
print(Z)
print(type(Z))
print("np.shape = ",np.shape(Z))
The shape is (5,5,3). I want to do the multiplication like np.dot(Z,Z), but that doesn't work with a 3D array.
I've seen suggestions to use np.tensordot(Z,Z,axes=?), but I don't know how to set the axes.
I suggest you take a look at the documentation of the tensordot() function to actually understand what it is doing with the matrices:
import numpy as np
Z = np.array([
[[0,0,0.25],[0.25,0.5,0.75],[0,0,0.25],[0.75,1.0,1.0],[0.75,1.0,1.0]],
[[0,0,0.25],[0,0,0.25],[0.5,0.75,1.0],[0,0,0.25],[0,0,0.25]],
[[0,0,0.25],[0,0,0.25],[0,0,0.25],[0,0.25,0.5],[0,0,0.25]],
[[0,0,0.25],[0.25,0.5,0.75],[0,0,0.25],[0,0,0.25],[0,0,0.25]],
[[0,0,0.25],[0,0,0.25],[0,0,0.25],[0,0,0.25],[0,0,0.25]]
])
B = np.tensordot(Z, Z, axes=[1, 0])
print(B)
Output:
[[[[0. 0. 0.4375]
[0.1875 0.375 0.8125]
[0.125 0.1875 0.625 ]
[0. 0. 0.4375]
[0. 0. 0.4375]]
[[0. 0. 0.625 ]
[0.25 0.5 1.125 ]
[0.25 0.375 1. ]
[0. 0. 0.625 ]
[0. 0. 0.625 ]]
[[0. 0. 0.8125]
[0.3125 0.625 1.4375]
[0.375 0.5625 1.375 ]
[0.1875 0.3125 1.0625]
[0.1875 0.25 1. ]]]
[[[0. 0. 0.125 ]
[0. 0. 0.125 ]
[0. 0. 0.125 ]
[0. 0.125 0.25 ]
[0. 0. 0.125 ]]
[[0. 0. 0.1875]
[0. 0. 0.1875]
[0. 0. 0.1875]
[0. 0.1875 0.375 ]
[0. 0. 0.1875]]
[[0. 0. 0.5 ]
[0.125 0.25 0.75 ]
[0.125 0.1875 0.6875]
[0.1875 0.5 0.9375]
[0.1875 0.25 0.6875]]]
[[[0. 0. 0. ]
[0. 0. 0. ]
[0. 0. 0. ]
[0. 0. 0. ]
[0. 0. 0. ]]
[[0. 0. 0.0625]
[0.0625 0.125 0.1875]
[0. 0. 0.0625]
[0. 0. 0.0625]
[0. 0. 0.0625]]
[[0. 0. 0.375 ]
[0.1875 0.375 0.75 ]
[0.125 0.1875 0.5625]
[0.1875 0.3125 0.625 ]
[0.1875 0.25 0.5625]]]
[[[0. 0. 0.0625]
[0. 0. 0.0625]
[0.125 0.1875 0.25 ]
[0. 0. 0.0625]
[0. 0. 0.0625]]
[[0. 0. 0.125 ]
[0. 0. 0.125 ]
[0.25 0.375 0.5 ]
[0. 0. 0.125 ]
[0. 0. 0.125 ]]
[[0. 0. 0.4375]
[0.125 0.25 0.6875]
[0.375 0.5625 1. ]
[0.1875 0.3125 0.6875]
[0.1875 0.25 0.625 ]]]
[[[0. 0. 0. ]
[0. 0. 0. ]
[0. 0. 0. ]
[0. 0. 0. ]
[0. 0. 0. ]]
[[0. 0. 0. ]
[0. 0. 0. ]
[0. 0. 0. ]
[0. 0. 0. ]
[0. 0. 0. ]]
[[0. 0. 0.3125]
[0.125 0.25 0.5625]
[0.125 0.1875 0.5 ]
[0.1875 0.3125 0.5625]
[0.1875 0.25 0.5 ]]]]
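For reference, axes=[1, 0] tells tensordot to contract axis 1 of the first array with axis 0 of the second. A quick sanity check of what that computes, written against an equivalent einsum (a sketch using random data, not the original Z):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.random((5, 5, 3))

# axes=[1, 0] contracts axis 1 of the first array with axis 0 of the second,
# i.e. B[i, k, l, m] = sum_j Z[i, j, k] * Z[j, l, m]
B = np.tensordot(Z, Z, axes=[1, 0])
C = np.einsum('ijk,jlm->iklm', Z, Z)

print(B.shape)            # (5, 3, 5, 3)
print(np.allclose(B, C))  # True
```

The result keeps the uncontracted axes of the first array (5, 3) followed by those of the second (5, 3), which is why the printed output above nests four levels deep.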
With a 3d array, there are various ways of doing a matrix product:
In [365]: Z.shape
Out[365]: (5, 5, 3)
You say np.dot isn't valid, but don't show why:
In [366]: np.dot(Z,Z).shape
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [366], in <cell line: 1>()
----> 1 np.dot(Z,Z).shape
File <__array_function__ internals>:5, in dot(*args, **kwargs)
ValueError: shapes (5,5,3) and (5,5,3) not aligned: 3 (dim 2) != 5 (dim 1)
If you take time to read the np.dot docs, you see that it tries to do the sum-of-products on the last axis of A and the 2nd-to-last axis of B, hence the mismatch between 3 and 5.
You'd get the same error if Z were (5,3) shape.
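That alignment rule is easy to verify with shapes that do line up (a sketch with made-up shapes, not the original data):

```python
import numpy as np

A = np.ones((5, 5, 3))
B = np.ones((5, 3, 4))

# dot sums over the last axis of A and the 2nd-to-last axis of B (both length 3),
# then stacks all the remaining axes: (5, 5) + (5,) + (4,)
print(np.dot(A, B).shape)  # (5, 5, 5, 4)
```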
One way around this is to change the 2nd Z to (5,3,5) shape:
In [367]: np.dot(Z,Z.transpose(0,2,1)).shape
Out[367]: (5, 5, 5, 5)
But dot does a kind of outer product on the leading dimensions.
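That outer-product behaviour can be checked directly: every leading index of the first array is paired with every leading index of the second (a sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.random((5, 5, 3))

D = np.dot(Z, Z.transpose(0, 2, 1))  # shape (5, 5, 5, 5)
# D[i, :, j, :] is the ordinary 2D product Z[i] @ Z[j].T
print(np.allclose(D[2, :, 4, :], Z[2] @ Z[4].T))  # True
```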
I think the tensordot in the other answer does this as well. tensordot just reduces the calculation down to a dot call, with some reshapes and transposes.
matmul/@ treats the leading dimensions as 'batch' and applies normal broadcasting rules:
In [368]: np.matmul(Z,Z.transpose(0,2,1)).shape
Out[368]: (5, 5, 5)
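The 'batch' behaviour means matmul only pairs matching leading indices, so the result is one (5,5) product per batch entry rather than all 25 cross pairings (a sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.random((5, 5, 3))

# matmul treats axis 0 as a batch: 5 independent (5,3) @ (3,5) products
M = np.matmul(Z, Z.transpose(0, 2, 1))
print(M.shape)                           # (5, 5, 5)
print(np.allclose(M[3], Z[3] @ Z[3].T))  # True
```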
With einsum we can specify other combinations of axes:
In [369]: np.einsum('ijk,ijk->ij',Z,Z).shape
Out[369]: (5, 5)
In [370]: np.einsum('ijk,ijk->ik',Z,Z).shape
Out[370]: (5, 3)
In [371]: np.einsum('ijk,ilk->ijl',Z,Z).shape
Out[371]: (5, 5, 5)
In [373]: np.einsum('ijk,ijl->ijl',Z,Z).shape
Out[373]: (5, 5, 3)
In [374]: np.einsum('ijk,jlk->ilk',Z,Z).shape
Out[374]: (5, 5, 3)
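To see what these subscript strings mean concretely, a couple of the specs above can be spelled out against plain NumPy operations (a sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(3)
Z = rng.random((5, 5, 3))

# 'ijk,ijk->ij' keeps i and j and sums over k:
# an elementwise product reduced over the last axis
print(np.allclose(np.einsum('ijk,ijk->ij', Z, Z),
                  (Z * Z).sum(axis=-1)))  # True

# 'ijk,jlk->ilk' does a separate (5,5) matrix product for each k:
# result[:, :, k] == Z[:, :, k] @ Z[:, :, k]
R = np.einsum('ijk,jlk->ilk', Z, Z)
print(np.allclose(R[:, :, 0], Z[:, :, 0] @ Z[:, :, 0]))  # True
```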