

Eigenvectors are complex but only for large matrices

I'm trying to calculate the eigenvectors and eigenvalues of this matrix:

Tridiagonal matrix example

import numpy as np

la = 0.02
mi = 0.08
n = 500

# Main diagonal: -(la + mi) everywhere except the two boundary entries.
d1 = np.full(n, -(la + mi), np.double)
d1[0] = -la
d1[-1] = -mi
# Sub- and superdiagonals.
d2 = np.full(n - 1, la, np.double)
d3 = np.full(n - 1, mi, np.double)

A = np.diag(d1) + np.diag(d2, -1) + np.diag(d3, 1)
e_values, e_vectors = np.linalg.eig(A)

If I set the dimension of the matrix to n < 110, the output is fine. However, if I set it to n >= 110, both the eigenvalues and the eigenvector components become complex numbers with significant imaginary parts. Why does this happen? Is it supposed to happen? It is very strange behavior and frankly I'm kind of stuck.
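One quick way to see the transition is to measure the largest imaginary component that `np.linalg.eig` returns as `n` grows. This is a sketch assuming the same matrix construction as above; the helper name `max_imag` is just for illustration:

```python
import numpy as np

def max_imag(n, la=0.02, mi=0.08):
    """Build the tridiagonal matrix for a given n and return the
    largest absolute imaginary part among its computed eigenvalues."""
    d1 = np.full(n, -(la + mi), np.double)
    d1[0] = -la
    d1[-1] = -mi
    A = (np.diag(d1)
         + np.diag(np.full(n - 1, la, np.double), -1)
         + np.diag(np.full(n - 1, mi, np.double), 1))
    e_values, _ = np.linalg.eig(A)
    return np.abs(e_values.imag).max()

for n in (50, 110, 500):
    print(n, max_imag(n))
```

The exact threshold where nonzero imaginary parts first appear can vary with the NumPy/LAPACK build, so the printed values may differ from machine to machine.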

What you are seeing appears to be fairly normal roundoff error. This is an unfortunate result of storing floating-point numbers with finite precision. It naturally gets relatively worse for large problems. Here is a plot of the real vs. imaginary components of the eigenvalues:

[plot: real vs. imaginary components of the eigenvalues]

You can see that the imaginary numbers are effectively noise. This is not to say that they are unimportant. Here is a plot of the imaginary vs. real part, showing that the ratio can get as large as 0.06 in the worst case:

[plot: imaginary vs. real part of the eigenvalues, worst-case ratio ≈ 0.06]
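That worst-case ratio can be checked numerically. This is a sketch assuming the same matrix as in the question; eigenvalues with a near-zero real part are masked out so we don't divide roundoff noise by roundoff noise (the exact maximum will depend on the LAPACK build):

```python
import numpy as np

la, mi, n = 0.02, 0.08, 500

# Rebuild the tridiagonal matrix from the question.
d1 = np.full(n, -(la + mi))
d1[0] = -la
d1[-1] = -mi
A = (np.diag(d1)
     + np.diag(np.full(n - 1, la), -1)
     + np.diag(np.full(n - 1, mi), 1))

e = np.linalg.eigvals(A)

# Ratio of imaginary to real part, skipping the eigenvalue near zero.
mask = np.abs(e.real) > 1e-8
ratio = np.abs(e.imag[mask]) / np.abs(e.real[mask])
print("worst-case |imag|/|real| ratio:", ratio.max())
```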

This ratio changes with both the absolute and relative magnitudes of la and mi. If you multiply both by 10, you get

[plot: same ratio with la and mi both multiplied by 10]

If you keep la = 0.02 and set mi = 0.8, you get a smaller imaginary part:

[plot: ratio with la = 0.02, mi = 0.8]

Things get really weird when you do the opposite and increase la by a factor of 10, keeping mi as-is:

[plot: ratio with la = 0.2, mi = 0.08]

The relative precision of the calculation decreases for smaller eigenvalues, so this is not too surprising.

Given the relatively small magnitudes of the imaginary parts (at least for the important eigenvalues), you can take either the magnitude or the real part of the result, since you know that all the eigenvalues are real.
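Taking the real part is a one-liner. A minimal sketch, rebuilding the matrix from the question:

```python
import numpy as np

la, mi, n = 0.02, 0.08, 500

d1 = np.full(n, -(la + mi))
d1[0] = -la
d1[-1] = -mi
A = (np.diag(d1)
     + np.diag(np.full(n - 1, la), -1)
     + np.diag(np.full(n - 1, mi), 1))

e_values, e_vectors = np.linalg.eig(A)

# The true spectrum is real, so discard the noise-level imaginary parts.
e_values = e_values.real
e_vectors = e_vectors.real

print(e_values.dtype)  # float64
```

If you want extra reassurance, you can also check the residual `np.abs(A @ e_vectors - e_vectors * e_values).max()` afterwards; it measures how much is lost by discarding the imaginary parts.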
