PyTorch torch.linalg.svd returning U and V^T, which are not orthogonal
Using U, S, VT = torch.linalg.svd(M), where the matrix M is large, the matrices U and VT I get back are not orthogonal. When I compute torch.norm(torch.mm(matrix, matrix.t()) - identity_matrix) it is 0.004, and when I print MM^T the diagonal entries are not 1 but something like 0.2 or 0.4, and the off-diagonal entries are not 0 but something like 0.0023. Is there a way to get an SVD with orthogonal U and V^T? The singular values, i.e. the diagonal elements of S, are all close to 1.
import torch

matrix = torch.randn(4096, 4096)
u, s, vh = torch.linalg.svd(matrix)
# product of the two (supposedly orthogonal) factors
matrix = torch.mm(u, vh)
# check how far matrix @ matrix.T is from the identity
print('norm ||WTW - I||: ', torch.norm(torch.mm(matrix, matrix.t()) - torch.eye(matrix.shape[0])))
print(matrix)
I have done some numerical analysis, and it seems PyTorch's linalg.svd is not returning orthogonal u and vh. Can others verify whether they see this behaviour too, or am I doing something wrong?

MATLAB: I tried the built-in svd decomposition in MATLAB, and there norm(u*transpose(u) - eye(4096)) is about 1E-13.
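For comparison, a rough PyTorch equivalent of that MATLAB check might look like the snippet below (MATLAB works in double precision by default, whereas torch.randn defaults to float32, so the dtype here is chosen to make the comparison fair; torch.norm is the Frobenius norm, while MATLAB's norm() defaults to the 2-norm, but both should be tiny):

import torch

# same size as the MATLAB experiment, but in double precision like MATLAB's default
m = torch.randn(4096, 4096, dtype=torch.float64)
u, s, vh = torch.linalg.svd(m)
# analogue of MATLAB's norm(u*transpose(u) - eye(4096))
print(torch.norm(u @ u.t() - torch.eye(4096, dtype=torch.float64)))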
Why do you expect matrix @ matrix.T to be close to I?

SVD is a decomposition of the input matrix matrix. It does not alter it; it only produces three matrices u, s and vh such that matrix = u @ diag(s) @ vh (torch.linalg.svd returns s as a vector of singular values, so the diagonal matrix has to be rebuilt from it). The special thing about SVD is that the matrices u, s and vh are not arbitrary, but unique: u and v are orthogonal, and s is diagonal.
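As a quick sanity check of that claim (this snippet is my own illustration, with an arbitrary smaller size), the reconstruction u @ diag(s) @ vh should reproduce the input up to floating-point rounding:

import torch

m = torch.randn(512, 512)
u, s, vh = torch.linalg.svd(m)
# s is a vector, so rebuild the diagonal matrix before multiplying
reconstructed = u @ torch.diag(s) @ vh
# small relative to ||m||, but not exactly zero because of float32 rounding
print(torch.norm(m - reconstructed))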
What you should actually expect is:
matrix = torch.randn(4096, 4096)
u, s, vh = torch.linalg.svd(matrix)
print(f'||uuT - I|| = {torch.norm(u@u.t() - torch.eye(u.shape[0]))}')
print(f'||vvT - I|| = {torch.norm(vh.t()@vh - torch.eye(vh.shape[0]))}')
Note that due to numeric issues the difference ||uuT - I|| is not likely to be exactly zero, but rather some small number that depends on the dimensions of your matrix (the larger the matrix, the greater the error) and on the precision of the dtype you used: float32 (aka single) will likely result in a larger error than float64 (aka double).
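For instance, a quick comparison of the two precisions on a smaller matrix could look like this (my own sketch; the exact numbers vary from run to run, but float64 should come out several orders of magnitude closer to zero than float32):

import torch

for dtype in (torch.float32, torch.float64):
    m = torch.randn(1024, 1024, dtype=dtype)
    u, s, vh = torch.linalg.svd(m)
    # distance of u from being exactly orthogonal, in the given precision
    err = torch.norm(u @ u.t() - torch.eye(1024, dtype=dtype))
    print(dtype, err.item())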
PS, the operator @ stands for matrix multiplication.