Null space basis from QR decomposition with GSL
I'm trying to get the basis for the null space of a relatively large matrix, A^T, using GSL. So far I've been extracting the right-singular vectors of the SVD corresponding to vanishing singular values, but this is becoming too slow for the sizes of matrices I'm interested in.
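For reference, here is the idea behind my current SVD approach, sketched in NumPy (an illustrative sketch using the 4×3 example matrix from the code below, not my actual GSL code; the tolerance is a common heuristic choice):

```python
import numpy as np

# Sketch of the SVD-based null-space extraction described above
# (illustrative NumPy version, not the actual GSL code).
A = np.array([[3.0, 6, 1],
              [1.0, 2, 1],
              [1.0, 2, 1],
              [1.0, 2, 1]])

U, s, Vt = np.linalg.svd(A)
# Treat singular values below a scaled machine-epsilon threshold as zero
# (a common heuristic, not the only possible choice).
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))
# The right-singular vectors with vanishing singular values are the rows
# of Vt past the rank; as columns they span null(A).
null_basis = Vt[rank:].T

print(rank)                          # 2 for this A
print(np.allclose(A @ null_basis, 0))
```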
I know that the null space can be extracted as the last m − r columns of the Q matrix in the QR decomposition of A, where r is the rank of A, but I'm not sure how rank-revealing decompositions work.
Here's my first attempt, using gsl_linalg_QR_decomp:
#include <algorithm>
#include <cstdio>
#include <iostream>
#include <gsl/gsl_linalg.h>

int m = 4;
int n = 3;
gsl_matrix* A = gsl_matrix_calloc(m, n);
gsl_matrix_set(A, 0,0, 3); gsl_matrix_set(A, 0,1, 6); gsl_matrix_set(A, 0,2, 1);
gsl_matrix_set(A, 1,0, 1); gsl_matrix_set(A, 1,1, 2); gsl_matrix_set(A, 1,2, 1);
gsl_matrix_set(A, 2,0, 1); gsl_matrix_set(A, 2,1, 2); gsl_matrix_set(A, 2,2, 1);
gsl_matrix_set(A, 3,0, 1); gsl_matrix_set(A, 3,1, 2); gsl_matrix_set(A, 3,2, 1);
std::cout << "A:" << std::endl;
for(int i = 0; i < m; i++){ for(int j = 0; j < n; j++) printf(" %5.2f", gsl_matrix_get(A,i,j)); std::cout << std::endl; }
gsl_matrix* Q = gsl_matrix_alloc(m, m);
gsl_matrix* R = gsl_matrix_alloc(m, n);
gsl_vector* tau = gsl_vector_alloc(std::min(m, n));
gsl_linalg_QR_decomp(A, tau);      // overwrites A with the packed QR factors
gsl_linalg_QR_unpack(A, tau, Q, R);
std::cout << "Q:" << std::endl;
for(int i = 0; i < m; i++){ for(int j = 0; j < m; j++) printf(" %5.2f", gsl_matrix_get(Q,i,j)); std::cout << std::endl; }
std::cout << "R:" << std::endl;
for(int i = 0; i < m; i++){ for(int j = 0; j < n; j++) printf(" %5.2f", gsl_matrix_get(R,i,j)); std::cout << std::endl; }
This outputs
A:
3.00 6.00 1.00
1.00 2.00 1.00
1.00 2.00 1.00
1.00 2.00 1.00
Q:
-0.87 -0.29 0.41 -0.00
-0.29 0.96 0.06 -0.00
-0.29 -0.04 -0.64 -0.71
-0.29 -0.04 -0.64 0.71
R:
-3.46 -6.93 -1.73
0.00 0.00 0.58
0.00 0.00 -0.82
0.00 0.00 0.00
but I'm not sure how to compute the rank, r, from this. My second attempt uses gsl_linalg_QRPT_decomp2, replacing the last part with
gsl_vector* tau = gsl_vector_alloc(std::min(m, n));
gsl_permutation* perm = gsl_permutation_alloc(n);
gsl_vector* norm = gsl_vector_alloc(n);
int signum;   // sign of the permutation; no need to heap-allocate it
gsl_linalg_QRPT_decomp2(A, Q, R, tau, perm, &signum, norm);
std::cout << "Q:" << std::endl;
for(int i = 0; i < m; i++){ for(int j = 0; j < m; j++) printf(" %5.2f", gsl_matrix_get(Q,i,j)); std::cout << std::endl; }
std::cout << "R:" << std::endl;
for(int i = 0; i < m; i++){ for(int j = 0; j < n; j++) printf(" %5.2f", gsl_matrix_get(R,i,j)); std::cout << std::endl; }
std::cout << "Perm:" << std::endl;
for(int i = 0; i < n; i++) std::cout << " " << gsl_permutation_get(perm, i);
which results in
Q:
-0.87 0.50 0.00 0.00
-0.29 -0.50 -0.58 -0.58
-0.29 -0.50 0.79 -0.21
-0.29 -0.50 -0.21 0.79
R:
-6.93 -1.73 -3.46
0.00 -1.00 0.00
0.00 0.00 0.00
0.00 0.00 0.00
Perm:
1 2 0
Here, I believe that the rank is the number of non-zero diagonal elements in R, but I'm not sure which columns to extract from Q. Which approach should I take?
For a 4×3 A, the "null space" will consist of 3-dimensional vectors, whereas the QR decomposition of A only gives you 4-dimensional vectors. (And of course you can generalize this to an A of size M × N with M > N.)
Therefore, take the QR decomposition of the transpose of A, whose Q is now 3×3.
Sketching the process using Python/NumPy in IPython (sorry, I can't seem to figure out how to call gsl_linalg_QR_decomp using PyGSL):
In [16]: import numpy as np
In [17]: A = np.array([[3.0, 6, 1], [1.0, 2, 1], [1.0, 2, 1], [1.0, 2, 1]])
In [18]: Q, R = np.linalg.qr(A.T) # <---- A.T means transpose(A)
In [19]: np.diag(R)
Out[19]: array([ -6.78232998e+00, 6.59380473e-01, 2.50010468e-17])
In [20]: np.round(Q * 1000) / 1000 # <---- Q to 3 decimal places
Out[20]:
array([[-0.442, -0.066, -0.894],
       [-0.885, -0.132,  0.447],
       [-0.147,  0.989,  0.   ]])
The 19th output (i.e., Out[19], the result of np.diag(R)) tells us that the column rank of A is 2. And looking at the 3rd column of Out[20] (Q to three decimal places), we see that the right answer is returned: [-0.894, 0.447, 0] is proportional to [1, -0.5, 0], and we know this is right because the first two columns of A are linearly dependent.
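In code, turning Out[19] and Out[20] into a rank and a null-space basis might look like the sketch below. The tolerance is a common heuristic, not something NumPy fixes for you; and note that without column pivoting the diagonal of R is not guaranteed to reveal the rank in general, although it does for this example:

```python
import numpy as np

A = np.array([[3.0, 6, 1],
              [1.0, 2, 1],
              [1.0, 2, 1],
              [1.0, 2, 1]])

Q, R = np.linalg.qr(A.T)       # QR of the transpose; Q is 3x3
d = np.abs(np.diag(R))
# Heuristic tolerance: diagonal entries of R below this count as zero.
tol = max(A.shape) * np.finfo(float).eps * d.max()
r = int(np.sum(d > tol))       # numerical rank
N = Q[:, r:]                   # last n - r columns of Q span null(A)

print(r)                       # 2 for this A
print(np.allclose(A @ N, 0))
```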
Can you check with larger matrices that the QR decomposition of transpose(A) gives you the same null space as your current SVD method?
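One way to run that check (a sketch using a randomly generated rank-deficient matrix; the sizes and seed are arbitrary, and I reuse the SVD's rank rather than trusting diag(R) without pivoting):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r_true = 40, 30, 20
# Build a rank-deficient m x n matrix by construction (rank r_true).
A = rng.standard_normal((m, r_true)) @ rng.standard_normal((r_true, n))

# Null space via the current SVD method.
U, s, Vt = np.linalg.svd(A)
tol = max(m, n) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))
N_svd = Vt[rank:].T                 # n x (n - rank), orthonormal columns

# Null space via QR of the transpose.
Q, R = np.linalg.qr(A.T)
N_qr = Q[:, rank:]                  # last n - rank columns of Q

# Both bases should annihilate A, and projecting the QR basis onto the
# SVD basis's span should leave it unchanged iff the subspaces agree.
print(np.allclose(A @ N_qr, 0, atol=1e-8))
print(np.allclose(N_svd @ (N_svd.T @ N_qr), N_qr, atol=1e-8))
```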