
scipy eigh gives negative eigenvalues for positive semidefinite matrix

I am having some issues with scipy's eigh function returning negative eigenvalues for positive semidefinite matrices. Below is a MWE.

The hess_R function returns a positive semidefinite matrix (it is the sum of a rank-one matrix and a diagonal matrix, both with nonnegative entries).

import numpy as np
from scipy import linalg as LA

def hess_R(x):
    d = len(x)
    H = np.ones(d*d).reshape(d,d) / (1 - np.sum(x))**2
    H = H + np.diag(1 / (x**2))
    return H.astype(np.float64)

x = np.array([  9.98510710e-02 ,  9.00148922e-01 ,  4.41547488e-10])
H = hess_R(x)
w,v = LA.eigh(H)
print(w)

The eigenvalues printed are

[ -6.74055241e-271   4.62855397e+016   5.15260753e+018]

If I replace np.float64 with np.float32 in the return statement of hess_R, I get

[ -5.42905303e+10   4.62854925e+16   5.15260506e+18]

instead, so I am guessing this is some sort of precision issue.

Is there a way to fix this? Technically I do not need to use eigh, but I think this is the underlying problem behind my other errors (taking square roots of these matrices, getting NaNs, etc.)

I think the issue is that you've hit the limits of floating-point precision. A good rule of thumb for linear algebra results is that they're good to about one part in 10^8 for float32, and about one part in 10^16 for float64. It appears that the ratio of your smallest to largest eigenvalue here is less than 10^-16. Because of this, the returned value cannot really be trusted and will depend on the details of the eigenvalue implementation you use.
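You can check this concretely by comparing the smallest reported eigenvalue against the noise floor implied by the largest one. This is a sketch (rebuilding H from the question's hess_R) that uses np.finfo to get the machine epsilon for float64:

```python
import numpy as np

# Rebuild H as in the question's hess_R: rank-one term plus diagonal term
x = np.array([9.98510710e-02, 9.00148922e-01, 4.41547488e-10])
d = len(x)
H = np.ones((d, d)) / (1 - np.sum(x))**2 + np.diag(1 / x**2)

w = np.linalg.eigvalsh(H)  # eigenvalues in ascending order

# Any eigenvalue smaller in magnitude than eps * max|eigenvalue| sits
# below the noise floor of the computation; even its sign is meaningless.
noise_floor = np.finfo(np.float64).eps * np.max(np.abs(w))
print("smallest eigenvalue:", w[0])
print("noise floor:        ", noise_floor)
```

Here the largest eigenvalue is about 5.15e18, so anything within roughly eps * 5e18 of zero (on the order of 10^3) is indistinguishable from zero at float64 precision.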

For example, here are four different solvers you should have available; take a look at their results:

# using the 64-bit version
for impl in [np.linalg.eig, np.linalg.eigh, LA.eig, LA.eigh]:
    w, v = impl(H)
    print(np.sort(w))
    reconstructed = np.dot(v * w, v.conj().T)
    print("Allclose:", np.allclose(reconstructed, H), '\n')

Output:

[ -3.01441754e+02   4.62855397e+16   5.15260753e+18]
Allclose: True 

[  3.66099625e+02   4.62855397e+16   5.15260753e+18]
Allclose: True 

[ -3.01441754e+02+0.j   4.62855397e+16+0.j   5.15260753e+18+0.j]
Allclose: True 

[  3.83999999e+02   4.62855397e+16   5.15260753e+18]
Allclose: True 

Notice that they all agree on the larger two eigenvalues, but that the value of the smallest eigenvalue changes from implementation to implementation. Still, in all four cases the input matrix can be reconstructed to 64-bit precision: this means the algorithms are operating as expected, up to the precision available to them.
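For the downstream problem (square roots producing NaNs), a common workaround is to clamp the spurious negative eigenvalues to zero before reassembling the matrix. This is a sketch under the assumption that your matrix is PSD by construction, so any negative eigenvalue is numerical noise; psd_sqrt is a hypothetical helper, not a scipy built-in:

```python
import numpy as np
from scipy import linalg as LA

def psd_sqrt(A):
    """Symmetric square root of a (numerically) PSD matrix,
    clamping noise-level negative eigenvalues to zero first."""
    w, v = LA.eigh(A)
    w = np.clip(w, 0, None)  # discard spurious negative eigenvalues
    return (v * np.sqrt(w)) @ v.conj().T

# The matrix from the question
x = np.array([9.98510710e-02, 9.00148922e-01, 4.41547488e-10])
d = len(x)
H = np.ones((d, d)) / (1 - np.sum(x))**2 + np.diag(1 / x**2)

S = psd_sqrt(H)
print("any NaNs:", np.isnan(S).any())
print("S @ S recovers H:", np.allclose(S @ S, H))
```

Because the clamped eigenvalues are far below the noise floor relative to the largest one, zeroing them changes the reconstructed matrix by less than the precision of the computation.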
