I have a data set on which I am performing a principal components analysis (PCA). I get a ValueError
when I try to transform the data. Below is some of the code:
import pandas as pd
import numpy as np
import matplotlib as mpl
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA as sklearnPCA
data = pd.read_csv('test.csv',header=0)
X = data.iloc[:, 0:1000].values # values of the 1000 predictor variables
Y = data.iloc[:, 1000].values # values of the binary outcome variable
sklearn_pca = sklearnPCA(n_components=2)
X_std = StandardScaler().fit_transform(X)
It is here that I get the following error message:
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
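To see which of the conditions in that message actually applies, you can check the array directly with NumPy before passing it to StandardScaler. A quick diagnostic sketch, with a small synthetic array standing in for X:

```python
import numpy as np

# Synthetic stand-in for the predictor matrix X from the question,
# containing both a NaN and an infinity.
X = np.array([[1.0, np.nan],
              [np.inf, 4.0]])

print(np.isnan(X).any())     # True if any NaN is present
print(np.isinf(X).any())     # True if any +/- infinity is present
print(np.isfinite(X).all())  # True only when every value is finite
```

If isfinite reports all values are finite, the remaining possibility from the error message is a value too large for float64.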
So I then checked whether the original data set had any NaN values:
print(data.isnull().values.any()) # prints True
data.fillna(0) # replace NaN values with 0
print(data.isnull().values.any()) # prints True
I don't understand why data.isnull().values.any()
is still printing True
even after I replaced the NaN values with 0.
There are two ways to achieve this. Either replace in place:
data.fillna(0, inplace=True)
Or assign the returned object:
data1 = data.fillna(0)
By default, fillna returns a new DataFrame and leaves the original unchanged, so you have to replace data with the object returned by fillna.
Small reproducer:
import pandas as pd
data = pd.DataFrame(data=[0,float('nan'),2,3])
print(data.isnull().values.any()) # prints True
data = data.fillna(0) # replace NaN values with 0
print(data.isnull().values.any()) # prints False now :)
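Putting it together, the original pipeline runs once the NaNs are filled. A sketch with random data standing in for test.csv (10 predictor columns instead of 1000, for brevity):

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Random data standing in for test.csv: 50 rows, 10 predictors + 1 outcome.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(50, 11)))
data.iloc[0, 0] = np.nan          # inject a NaN like the real data set
data = data.fillna(0)             # assign the returned object

X = data.iloc[:, 0:10].values     # predictor columns
X_std = StandardScaler().fit_transform(X)
X_pca = PCA(n_components=2).fit_transform(X_std)
print(X_pca.shape)                # (50, 2)
```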