
Using the Eigen library, can I get the kernel of a matrix which only has one "approximately"?

I need to find the kernel (a.k.a. null space) of a 3×3 matrix which is of full rank 3, but very close to being singular and thus to having rank 2. This means it has no null space unless you look for it with some "tolerance".

In Scilab (a free, less feature-rich alternative to Matlab) I can do the following:

--> Rx_1  = 

  -0.000004    0.0000376  -0.0000261
  -0.0000017  -0.459682    0.84146  
   0.0000458  -0.841457   -0.459684 


--> kernel(Rx_1)
 ans  =

    []         <--- no exact null space


--> kernel(Rx_1, 0.0001)    <--- this 0.0001 is my tolerance
 ans  =

   1.
   0.0000411        <--- null space basis found with "tolerance"
   0.0000244

However, when I go to C++, I don't know how to introduce the tolerance in Eigen:

FullPivLU<MatrixXd> lu(Rx_1);        <--- lu won't take any extra arguments
MatrixXd A_null_space = lu.kernel();

What can I do?

I am not very familiar with the Eigen library. But I know it can be used to perform many decompositions, and also to find eigenspaces (it would be surprising if it couldn't, considering the name). For example using EigenSolver, or probably starting from any decomposition.

A null space is also the eigenspace associated with the eigenvalue 0. If there is noise in the matrix values (or just numerical error), then there won't be an exactly-zero eigenvalue (as you know, a numerical matrix is in practice never exactly singular; there is always some numerical error).

And I surmise that the "tolerance" in Scilab is just that: the largest magnitude an eigenvalue can have while still being treated as zero, or something similar. After some reading, I see that Scilab relies on an SVD decomposition by default to extract things like the kernel. So, tol is probably a bound on the singular values (that is, the square roots of the eigenvalues of MᵀM), the null space being the span of the rows (or columns) of V matching the "null enough" singular values.

So, maybe you should do the same with Eigen: perform an SVD, and then select the rows (or columns, depending on the library; some return V, others return Vᵀ) of V matching singular values that are small enough for you.

For example, in Python (sorry, I have neither Scilab nor Eigen, so I know it is a bit strange that I answer this question; but this is more of a linear algebra problem, and no one else is answering :D):

import numpy as np
import scipy.linalg

# Same matrix
M=np.array([[-0.000004,   0.0000376, -0.0000261],
            [-0.0000017, -0.459682,   0.84146  ],
            [ 0.0000458, -0.841457,  -0.459684 ]])

scipy.linalg.null_space(M)
# returns array([], shape=(3, 0), dtype=float64)
# Same problem: no null space

np.linalg.eig(M)
# Returns
# (array([-3.99909409e-06+0.j       , 
#         -4.59683000e-01+0.8414585j,
#         -4.59683000e-01-0.8414585j]),
# array([[ 9.99999999e-01+0.00000000e+00j, -3.01853493e-05-1.51067336e-05j, -3.01853493e-05+1.51067336e-05j],
#        [ 4.10693606e-05+0.00000000e+00j,  7.07107411e-01+0.00000000e+00j,  7.07107411e-01-0.00000000e+00j],
#        [ 2.44559237e-05+0.00000000e+00j, -8.40775572e-07+7.07106151e-01j, -8.40775572e-07-7.07106151e-01j]]))

# No 0 eigenvalue. But you may consider that the first one is small enough.
# Therefore, the associated eigenvector is a basis for an "almost null space".
# So here, the "almost null space" is the span of the first column,
# that is [1, 4.11×10⁻⁵, 2.45×10⁻⁵]

# Using SVD
np.linalg.svd(M)
# Returns
#(array([[-2.54689055e-05,  4.03741349e-05, -9.99999999e-01],
#        [ 8.55645091e-01, -5.17563016e-01, -4.26885031e-05],
#        [-5.17563018e-01, -8.55645091e-01, -2.13641668e-05]]),
# array([9.58834879e-01, 9.58831275e-01, 3.99909409e-06]), 
# array([[-2.62390131e-05,  4.39933685e-02,  9.99031823e-01],
#        [-3.99536921e-05,  9.99031822e-01, -4.39933695e-02],
#        [ 9.99999999e-01,  4.10693525e-05,  2.44559116e-05]]))

# No null singular value. But the last one is "null enough".
# So the "almost null space" is the span of the last row of Vᵀ (the third array returned):
# [ 9.99999999e-01,  4.10693525e-05,  2.44559116e-05]

Edit

Just now, I saw in the documentation that Python scipy's null_space also has a tolerance parameter.

So scipy.linalg.null_space(M, 0.0001) returns

array([[9.99999999e-01],
       [4.10693525e-05],
       [2.44559116e-05]])

But, more importantly, I read in the null_space documentation that it

Construct an orthonormal basis for the null space of A using SVD

And about this tolerance:

Relative condition number. Singular values s smaller than rcond * max(s) are considered zero.

So, it is official. At least for Python's scipy, a null space is the span of the rows of Vᵀ matching singular values that are small enough. The only difference with my previous speculation is that it is not sᵢ < tol, but sᵢ < tol·max(s), that characterizes what is "small enough".

So, back to your Eigen library: I would do the exact same. Compute an SVD decomposition of M, and then select the rows of V (or columns; if the documentation does not say which, just use your example and check whether your [1, 4.1×10⁻⁵, 2.4×10⁻⁵] shows up as a row or a column) matching singular values smaller than tol × the largest singular value.
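
Since I cannot test it myself, here is only a rough sketch of what that could look like with Eigen's JacobiSVD (the tolerance value, the matrix and the variable names are simply taken from your example; treat it as an illustration of the idea, not as verified code):

#include <iostream>
#include <Eigen/Dense>

using namespace Eigen;

int main() {
    Matrix3d Rx_1;
    Rx_1 << -0.000004,   0.0000376, -0.0000261,
            -0.0000017, -0.459682,   0.84146,
             0.0000458, -0.841457,  -0.459684;

    double tol = 0.0001;

    // Full SVD, so that matrixV() contains all right singular vectors
    JacobiSVD<Matrix3d> svd(Rx_1, ComputeFullU | ComputeFullV);

    // Eigen returns the singular values sorted in decreasing order,
    // so the largest one is at index 0
    Vector3d s = svd.singularValues();
    double threshold = tol * s(0);

    // Columns of V whose singular value is below tol * max(s)
    // span the "almost null space"
    MatrixXd null_space(3, 0);
    for (int i = 0; i < s.size(); ++i) {
        if (s(i) < threshold) {
            null_space.conservativeResize(3, null_space.cols() + 1);
            null_space.col(null_space.cols() - 1) = svd.matrixV().col(i);
        }
    }

    std::cout << null_space << std::endl;
    // Should print something proportional to [1, 4.1e-5, 2.4e-5] (up to sign)
    return 0;
}

If I read Eigen's documentation correctly, the SVD classes also expose setThreshold() and rank() methods based on exactly this kind of relative test, so you may be able to shorten the selection part; but the manual loop above keeps the "sᵢ < tol × max(s)" criterion explicit.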
