ValueError: Input contains NaN, infinity or a value too large for dtype('float64') while fitting the model
I have two CSV files (a training set and a test set). Since there are visible NaN values in a few columns (status, hedge_value, indicator_code, portfolio_id, desk_id, office_id), I started by replacing the NaN values with a huge value appropriate for each column. I then applied LabelEncoding to remove the text data and convert it to numeric data. Now, when I try to perform OneHotEncoding on the categorical data, I get an error. I tried feeding the columns one by one into the OneHotEncoder constructor, but I get the same error for every column.
Basically, my final goal is to predict the return value, but because of this I am stuck at the data-preprocessing stage. How do I fix this?
I am using Python 3.6 with Pandas and Sklearn for the data handling.
Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
test_data = pd.read_csv('test.csv')
train_data = pd.read_csv('train.csv')
# Replacing Nan values here
train_data['status']=train_data['status'].fillna(2.0)
train_data['hedge_value']=train_data['hedge_value'].fillna(2.0)
train_data['indicator_code']=train_data['indicator_code'].fillna(2.0)
train_data['portfolio_id']=train_data['portfolio_id'].fillna('PF99999999')
train_data['desk_id']=train_data['desk_id'].fillna('DSK99999999')
train_data['office_id']=train_data['office_id'].fillna('OFF99999999')
x_train = train_data.iloc[:, :-1].values
y_train = train_data.iloc[:, 17].values
# =============================================================================
# from sklearn.preprocessing import Imputer
# imputer = Imputer(missing_values="NaN", strategy="mean", axis=0)
# imputer.fit(x_train[:, 15:17])
# x_train[:, 15:17] = imputer.fit_transform(x_train[:, 15:17])
#
# imputer.fit(x_train[:, 12:13])
# x_train[:, 12:13] = imputer.fit_transform(x_train[:, 12:13])
# =============================================================================
# Encoding categorical data, i.e. Text data, since calculation happens on numbers only, so having text like
# Country name, Purchased status will give trouble
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X = LabelEncoder()
x_train[:, 0] = labelencoder_X.fit_transform(x_train[:, 0])
x_train[:, 1] = labelencoder_X.fit_transform(x_train[:, 1])
x_train[:, 2] = labelencoder_X.fit_transform(x_train[:, 2])
x_train[:, 3] = labelencoder_X.fit_transform(x_train[:, 3])
x_train[:, 6] = labelencoder_X.fit_transform(x_train[:, 6])
x_train[:, 8] = labelencoder_X.fit_transform(x_train[:, 8])
x_train[:, 14] = labelencoder_X.fit_transform(x_train[:, 14])
# =============================================================================
# import numpy as np
# x_train[:, 3] = x_train[:, 3].reshape(x_train[:, 3].size,1)
# x_train[:, 3] = x_train[:, 3].astype(np.float64, copy=False)
# np.isnan(x_train[:, 3]).any()
# =============================================================================
# =============================================================================
# from sklearn.preprocessing import StandardScaler
# sc_X = StandardScaler()
# x_train = sc_X.fit_transform(x_train)
# =============================================================================
onehotencoder = OneHotEncoder(categorical_features=[0,1,2,3,6,8,14])
x_train = onehotencoder.fit_transform(x_train).toarray() # Replace Country Names with One Hot Encoding.
Error
Traceback (most recent call last):
File "<ipython-input-4-4992bf3d00b8>", line 58, in <module>
x_train = onehotencoder.fit_transform(x_train).toarray() # Replace Country Names with One Hot Encoding.
File "/Users/parthapratimneog/anaconda3/lib/python3.6/site-packages/sklearn/preprocessing/data.py", line 2019, in fit_transform
self.categorical_features, copy=True)
File "/Users/parthapratimneog/anaconda3/lib/python3.6/site-packages/sklearn/preprocessing/data.py", line 1809, in _transform_selected
X = check_array(X, accept_sparse='csc', copy=copy, dtype=FLOAT_DTYPES)
File "/Users/parthapratimneog/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py", line 453, in check_array
_assert_all_finite(array)
File "/Users/parthapratimneog/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py", line 44, in _assert_all_finite
" or a value too large for %r." % X.dtype)
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
I went through the dataset again after posting the question and found more columns with NaN. I can't believe how much time I wasted when I could have obtained the list of columns containing NaN with a single Pandas function; until then I had only been searching for NaN visually. Using the code below, I discovered I had missed three columns. After handling these new NaNs, the code runs fine.
pd.isnull(train_data).sum() > 0
Result
portfolio_id False
desk_id False
office_id False
pf_category False
start_date False
sold True
country_code False
euribor_rate False
currency False
libor_rate True
bought True
creation_date False
indicator_code False
sell_date False
type False
hedge_value False
status False
return False
dtype: bool
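The same Pandas check can be narrowed down to just the offending column names. The following is a minimal sketch on a hypothetical toy DataFrame (not the actual train_data):

```python
import numpy as np
import pandas as pd

# Hypothetical toy frame standing in for train_data
df = pd.DataFrame({'sold': [1.0, np.nan, 3.0],
                   'status': ['a', 'b', 'c'],
                   'libor_rate': [np.nan, 0.5, 0.7]})

# Keep only the names of columns that contain at least one NaN
nan_cols = df.columns[df.isnull().any()].tolist()
print(nan_cols)  # ['sold', 'libor_rate']
```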
The error comes from other features that you treated as non-categorical. Columns like 'hedge_value' and 'indicator_code' contain mixed-type data: TRUE and FALSE from the original csv, plus the 2.0 introduced by your fillna() call. OneHotEncoder cannot handle them.
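One way around that, sketched here on a hypothetical stand-in for a column like hedge_value, is to map the booleans to floats before filling, so the column ends up with a single numeric dtype:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for a column holding TRUE/FALSE plus missing values
s = pd.Series([True, False, np.nan, True], dtype=object)

# Map booleans to floats first, then fill the NaNs, so every value is numeric
s_numeric = s.map({True: 1.0, False: 0.0}).fillna(2.0)
print(s_numeric.tolist())  # [1.0, 0.0, 2.0, 1.0]
```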
As described in the OneHotEncoder fit() documentation:
fit(X, y=None)
Fit OneHotEncoder to X.
Parameters:
X : array-like, shape [n_samples, n_feature]
Input array of type int.
As you can see, it expects all of X to be numeric (int, though floats work too).
As a workaround, you can encode the categorical features like this:
X_train_categorical = x_train[:, [0,1,2,3,6,8,14]]
onehotencoder = OneHotEncoder()
X_train_categorical = onehotencoder.fit_transform(X_train_categorical).toarray()
Then concatenate it with your non-categorical features.
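Sketched on a hypothetical toy matrix (and assuming a scikit-learn version recent enough for OneHotEncoder to accept string categories directly), the encode-then-concatenate step could look like:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Hypothetical toy data: column 0 is categorical, column 1 is numeric
x = np.array([['a', 1.5], ['b', 2.5], ['a', 3.5]], dtype=object)

# Encode only the categorical column, densify, then stitch back together
cat = OneHotEncoder().fit_transform(x[:, [0]]).toarray()
x_full = np.hstack([cat, x[:, [1]].astype(float)])
print(x_full.shape)  # (3, 3)
```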
To use this in production, the best practice is to use an Imputer and save it in a pkl file along with the model.
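A minimal sketch of that idea, assuming a newer scikit-learn in which Imputer has been replaced by SimpleImputer, which can then be pickled alongside the model:

```python
import pickle

import numpy as np
from sklearn.impute import SimpleImputer

# Fit the imputer on training data so the learned column means are reused later
imputer = SimpleImputer(strategy='mean')
X = np.array([[1.0, np.nan], [3.0, 4.0]])
X_filled = imputer.fit_transform(X)
print(X_filled[0, 1])  # 4.0 (mean of the non-missing values in that column)

# Persist it next to the model; at serving time, load it and call transform()
blob = pickle.dumps(imputer)
restored = pickle.loads(blob)
```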
Here is a hack:
df[df==np.inf]=np.nan
df.fillna(df.mean(), inplace=True)
It is best to use this.
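To see what those two lines do, here is the same hack on a hypothetical one-column frame: the inf becomes NaN, and the NaN is then filled with the column mean of the remaining values.

```python
import numpy as np
import pandas as pd

# Hypothetical frame containing an infinite value
df = pd.DataFrame({'x': [1.0, np.inf, 3.0]})

df[df == np.inf] = np.nan              # turn inf into NaN
df.fillna(df.mean(), inplace=True)     # fill NaN with the column mean
print(df['x'].tolist())  # [1.0, 2.0, 3.0]
```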