
PCA matrix with sklearn

I did PCA on some data and I want to extract the PCA matrix. This is my code (excluding loading the data):

from sklearn.decomposition import PCA
pca = PCA(n_components=5)
pca_result = pca.fit_transform(recon.data.cpu().numpy())
M = pca.components_

I thought that M should be the PCA matrix. However, when I print pca_result (first few rows) I get this:

[-21.08167   ,  -5.67821   ,   0.17554353,  -0.732398  ,0.04658243],
[-25.936056  ,  -6.535223  ,   0.6887493 ,  -0.8394666 ,0.06557591],
[-30.755266  ,  -6.0098953 ,   1.1643354 ,  -0.82322127,0.07585468]

But when I print np.transpose(np.matmul(M, np.transpose(recon))), I get this:

[-27.78438   ,  -2.5913327 ,   0.87771094,  -1.0819707 ,0.1037216 ],
[-32.63887   ,  -3.4483302 ,   1.3909296 ,  -1.1890743 ,0.12274324],
[-37.45802   ,  -2.9229708 ,   1.8665184 ,  -1.1728177 ,0.13301012]

What am I doing wrong, and how do I get the actual PCA matrix? Thank you!

In PCA you go from an n-dimensional space to a different (rotated) n-dimensional space. This change is done using an n x n matrix.

This is indeed the matrix returned by pca.components_; when multiplied by the PCA-transformed data, it gives the reconstruction of the original data X.
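
Note that this also explains the mismatch in the question: sklearn's PCA centers the data before projecting, so the forward transform is (X - mean) times the transpose of M, not a bare matrix product with the raw data. Here is a minimal sketch of that equivalence, with random data standing in for recon (an assumption, since the original data is not shown):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X_demo = rng.rand(10, 8)      # stand-in for recon.data.cpu().numpy()

pca_demo = PCA(n_components=5)
X_demo_pca = pca_demo.fit_transform(X_demo)

# transform(X) is equivalent to (X - mean_) @ components_.T
manual = (X_demo - pca_demo.mean_) @ pca_demo.components_.T
print(np.allclose(X_demo_pca, manual))   # True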

Here is a demonstration with the iris data:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris

X = load_iris().data
mu = np.mean(X, axis=0) # mean value

pca = PCA()
X_pca = pca.fit_transform(X)
M = pca.components_
M
# result:
array([[ 0.36138659, -0.08452251,  0.85667061,  0.3582892 ],
       [ 0.65658877,  0.73016143, -0.17337266, -0.07548102],
       [-0.58202985,  0.59791083,  0.07623608,  0.54583143],
       [-0.31548719,  0.3197231 ,  0.47983899, -0.75365743]])

i.e. indeed a 4x4 matrix (the iris data have 4 features).
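
A property worth checking (not in the original answer, but it underpins the reconstruction below): the rows of M are orthonormal, so the matrix is a pure rotation and its inverse is just its transpose.

# the principal axes are orthonormal: M @ M.T is the 4x4 identity,
# so undoing the rotation only requires a transpose
print(np.allclose(np.matmul(M, M.T), np.eye(4)))   # True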

Let's reconstruct the original data using all PCs:

X_hat = np.matmul(X_pca, M)
X_hat = X_hat + mu  # add back the mean
print(X_hat[0])  # reconstructed
print(X[0])      # original

Result:

[5.1 3.5 1.4 0.2]
[5.1 3.5 1.4 0.2]

i.e. perfect reconstruction.
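
For a cross-check, sklearn packages exactly this computation (matrix product plus re-adding the mean) as pca.inverse_transform, so the manual reconstruction should agree with it:

# inverse_transform computes X_pca @ components_ + mean_ internally
# (for a PCA fitted with whiten=False, the default)
print(np.allclose(X_hat, pca.inverse_transform(X_pca)))   # True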

Reconstructing with fewer PCs, let's say 2 (out of 4):

n_comp = 2
X_hat2 = np.matmul(X_pca[:,:n_comp], pca.components_[:n_comp,:])
X_hat2 = X_hat2 + mu
print(X_hat2[0])

Result:

[5.08303897 3.51741393 1.40321372 0.21353169]

i.e. a less accurate reconstruction, as we should expect due to the truncation in the number of PCs used (2 instead of all 4).
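
To quantify the loss, one can measure the reconstruction error directly; pca.explained_variance_ratio_ also shows why two PCs already come this close, since for the iris data the first two components capture most of the variance:

# mean squared reconstruction error when keeping only 2 PCs
print(np.mean((X - X_hat2) ** 2))

# fraction of variance captured by each PC
print(pca.explained_variance_ratio_)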

(Code adapted from the great thread How to reverse PCA and reconstruct original variables from several principal components? at Cross Validated.)
