
Why does Sklearn PCA need more samples than new features (n_components)?

When using the Sklearn PCA algorithm like this:

import numpy as np
from sklearn.decomposition import PCA

x_orig = np.random.choice([0, 1], (4, 25), replace=True)
pca = PCA(n_components=15)
pca.fit_transform(x_orig).shape

I get the output:

(4, 4)

I expected (and want) it to be:

(4, 15)

I understand why this happens. The sklearn documentation (here) says (assuming their '==' is the assignment operator):

n_components == min(n_samples, n_features)

But why do they do this? Also, how can I convert an input of shape [1, 25] to [1, 10] directly (without stacking dummy arrays)?

Each principal component is the projection of the data onto an eigenvector of the data covariance matrix. If you have fewer samples n than features, the covariance matrix has at most n non-zero eigenvalues. Thus, there are only n eigenvectors/components that make sense.
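This can be checked numerically. A minimal sketch, using the same data shape as the question (the seed is arbitrary); note that np.cov mean-centers the data, so the rank is actually at most n - 1:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.choice([0, 1], size=(4, 25))  # n = 4 samples, 25 features

# Covariance matrix of the 25 features, estimated from only 4 samples.
cov = np.cov(X, rowvar=False)         # shape (25, 25)
eigvals = np.linalg.eigvalsh(cov)     # 25 eigenvalues, ascending

# Only a handful are non-zero: at most n samples' worth (in fact at most
# n - 1 = 3 here, because np.cov mean-centers the data first).
n_nonzero = int(np.sum(eigvals > 1e-10))
print(n_nonzero)
```

The remaining eigenvalues are zero up to floating-point noise, so there is no variance left for a 15th component to capture.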

In principle it could be possible to have more components than samples, but the superfluous components would be useless noise.

Scikit-learn raises an error instead of silently doing something arbitrary. This prevents users from shooting themselves in the foot. Having fewer samples than features can indicate a problem with the data, or a misconception about the methods involved.
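The constraint can be seen directly in a short sketch (shapes as in the question, seed arbitrary). Recent sklearn versions raise a ValueError when n_components exceeds min(n_samples, n_features); the questioner's version apparently clipped to 4 components silently instead:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.choice([0, 1], size=(4, 25))  # 4 samples, 25 features

# The largest valid n_components here is min(n_samples, n_features) = 4.
pca = PCA(n_components=4)
print(pca.fit_transform(X).shape)  # (4, 4)

# Asking for more components than samples fails with the 'full' solver.
try:
    PCA(n_components=15, svd_solver="full").fit_transform(X)
except ValueError as err:
    print("ValueError:", err)
```

If you truly need a [1, 25] input mapped to [1, 10] without more samples, PCA is the wrong tool; a data-independent projection (e.g. a fixed random matrix) would work, but it is not PCA.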
