Multiplying a matrix with an array of vectors in NumPy
I'm trying to geometrically rotate an array of vectors in NumPy. First I generate the coordinate vectors of a grid.
import numpy as np

width = 128
height = 128
x_axis = np.linspace(-1, 1, width)
y_axis = np.linspace(-1, 1, height)
x, y = np.meshgrid(x_axis, y_axis)  # each has shape (height, width)
z = np.zeros((height, width))       # match the meshgrid output shape
vectors = np.stack((x, y, z), axis=2)
So 'vectors' has the shape (128, 128, 3).
I have already prepared the rotation matrix, with a, b and c as the rotation angles about the axes.
rotation_matrix = np.array([
    [np.cos(b) * np.cos(c),
     -np.cos(b) * np.sin(c),
     np.sin(b)],
    [np.sin(a) * np.sin(b) * np.cos(c) + np.cos(a) * np.sin(c),
     -np.sin(a) * np.sin(b) * np.sin(c) + np.cos(a) * np.cos(c),
     -np.sin(a) * np.cos(b)],
    [-np.cos(a) * np.sin(b) * np.cos(c) + np.sin(a) * np.sin(c),
     np.cos(a) * np.sin(b) * np.sin(c) + np.sin(a) * np.cos(c),
     np.cos(a) * np.cos(b)]
])
Now I want every vector of the array to be matrix-multiplied with 'rotation_matrix', like
vector_rotated = rotation_matrix @ vector
so the resulting array should also have the shape (128, 128, 3). I have some problems with handling this 3-dimensional array. Matmul is only capable of handling 2-D arrays. Is there any elegant way in NumPy for this use case, or do I have to use a for loop to solve this issue?
Thanks a lot and have a nice day!
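For reference, the brute-force double loop I would like to avoid looks something like this (a minimal sketch on a smaller grid, with a simple z-axis rotation standing in for the full rotation matrix above):

```python
import numpy as np

# Grid of vectors, as above, but smaller for illustration
width = height = 8
x, y = np.meshgrid(np.linspace(-1, 1, width), np.linspace(-1, 1, height))
vectors = np.stack((x, y, np.zeros_like(x)), axis=2)  # shape (8, 8, 3)

# Example 3x3 rotation: angle c about the z-axis
c = 0.3
rotation_matrix = np.array([
    [np.cos(c), -np.sin(c), 0.0],
    [np.sin(c),  np.cos(c), 0.0],
    [0.0,        0.0,       1.0],
])

# Rotate each 3-vector individually
rotated = np.empty_like(vectors)
for i in range(height):
    for j in range(width):
        rotated[i, j] = rotation_matrix @ vectors[i, j]

print(rotated.shape)  # (8, 8, 3)
```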
There are a few different ways to solve this problem.
The most straightforward is to reshape the array vectors so that it has shape (3, 128 * 128), then call the builtin np.dot function, and reshape the result back to your desired shape.
(Note that the (128, 128) part of the array's shape is not really relevant to the rotation; it's an interpretation that you probably want in order to make your problem clearer, but it makes no difference to the linear transformation you want to apply. Said another way, you are rotating 3-vectors. There are 128 * 128 == 16384 of them; they just happen to be organized into a 3D array like the one above.)
This approach would look like:
>>> v = vectors.reshape(-1, 3).T
>>> np.dot(rotation_matrix, v).shape
(3, 16384)
>>> rotated = np.dot(rotation_matrix, v).T.reshape(vectors.shape)
>>> rotated.shape == vectors.shape
True
Another method that does not involve any reshaping is to use NumPy's Einstein summation, np.einsum. Einstein summation is very flexible, and takes a while to understand, but its power justifies its complexity. In its simplest form, you "label" the axes that you want to multiply together. Axes that are omitted are "contracted", meaning a sum across that axis is computed. For your case, it would be:
>>> np.einsum('ij,klj->kli', rotation_matrix, vectors).shape
(128, 128, 3)
>>> np.allclose(rotated, np.einsum('ij,klj->kli', rotation_matrix, vectors))
True
Here's a quick explanation of the indexing. We are labeling the axes of the rotation matrix i and j, and the axes of the vectors k, l, and j. The repeated j means those are the axes multiplied together. This is equivalent to right-multiplying the reshaped array above with the rotation matrix (i.e., it's a rotation).
The output axes are labeled kli. This means we're preserving the k and l axes of the vectors. Since j is not in the output labels, there is a summation across that axis. Instead we have the axis i, hence the final shape of (128, 128, 3). You can see above that the dot-product method and the einsum method agree.
It can take a while to wrap your head around Einstein summation, but it is super awesome and powerful. I highly recommend learning more about it, especially if this sort of linear algebra is a common problem for you.
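One more note: despite the limitation mentioned in the question, np.matmul (the @ operator) does broadcast over leading axes, so a third option is to right-multiply by the transposed rotation matrix. A quick sketch with stand-in data:

```python
import numpy as np

rng = np.random.default_rng(1)
rotation_matrix = rng.random((3, 3))   # stand-in for the real rotation matrix
vectors = rng.random((128, 128, 3))    # stand-in for the grid of vectors

# v @ R.T computes (R @ v) for every 3-vector, broadcast over the grid:
# (vectors @ R.T)[k, l, i] == sum_j R[i, j] * vectors[k, l, j]
rotated = vectors @ rotation_matrix.T

print(rotated.shape)  # (128, 128, 3)
```

This agrees with both methods above and needs neither a reshape nor an einsum spec string.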