How to use sklearn fit_transform with pandas and return dataframe instead of numpy array?
I want to apply scaling (using StandardScaler() from sklearn.preprocessing) to a pandas DataFrame. The following code returns a numpy array, so I lose all the column names and the index. That is not what I want.
features = df[["col1", "col2", "col3", "col4"]]
autoscaler = StandardScaler()
features = autoscaler.fit_transform(features)
A "solution" I found online is:
features = features.apply(lambda x: autoscaler.fit_transform(x))
It appears to work, but it produces a deprecation warning:
/usr/lib/python3.5/site-packages/sklearn/preprocessing/data.py:583: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.
So I tried:
features = features.apply(lambda x: autoscaler.fit_transform(x.reshape(-1, 1)))
But that gives:
Traceback (most recent call last):
  File "./analyse.py", line 91, in <module>
    features = features.apply(lambda x: autoscaler.fit_transform(x.reshape(-1, 1)))
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 3972, in apply
    return self._apply_standard(f, axis, reduce=reduce)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 4081, in _apply_standard
    result = self._constructor(data=results, index=index)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 226, in __init__
    mgr = self._init_dict(data, index, columns, dtype=dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 363, in _init_dict
    dtype=dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 5163, in _arrays_to_mgr
    arrays = _homogenize(arrays, index, dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 5477, in _homogenize
    raise_cast_failure=False)
  File "/usr/lib/python3.5/site-packages/pandas/core/series.py", line 2885, in _sanitize_array
    raise Exception('Data must be 1-dimensional')
Exception: Data must be 1-dimensional
How do I apply scaling to a pandas DataFrame while leaving the DataFrame intact? Preferably without copying the data.
You can use values to convert the DataFrame to a numpy array. Example with a random dataset:

Edit: changed as_matrix() to values (it doesn't change the result), per the last sentence of the as_matrix() docs above:

Generally, it is recommended to use '.values'.
import pandas as pd
import numpy as np  # for the random integer example

df = pd.DataFrame(np.random.randint(0.0, 100.0, size=(10, 4)),
                  index=range(10, 20),
                  columns=['col1', 'col2', 'col3', 'col4'],
                  dtype='float64')
Note that the indices are 10-19:
In [14]: df.head(3)
Out[14]:
col1 col2 col3 col4
10 3 38 86 65
11 98 3 66 68
12 88 46 35 68
Now fit_transform the DataFrame to get the scaled_features array:
from sklearn.preprocessing import StandardScaler
scaled_features = StandardScaler().fit_transform(df.values)
In [15]: scaled_features[:3,:] #lost the indices
Out[15]:
array([[-1.89007341, 0.05636005, 1.74514417, 0.46669562],
[ 1.26558518, -1.35264122, 0.82178747, 0.59282958],
[ 0.93341059, 0.37841748, -0.60941542, 0.59282958]])
Assign the scaled data to a DataFrame (Note: use the index and columns keyword arguments to keep your original indices and column names):
scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)
In [17]: scaled_features_df.head(3)
Out[17]:
col1 col2 col3 col4
10 -1.890073 0.056360 1.745144 0.466696
11 1.265585 -1.352641 0.821787 0.592830
12 0.933411 0.378417 -0.609415 0.592830
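As a sanity check (my addition, not part of the original answer), you can verify that the reconstructed DataFrame really carries the original index, that each column is now centered, and that inverse_transform recovers the original values:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Recreate a random DataFrame like the one above (exact values will differ)
df = pd.DataFrame(np.random.randint(0, 100, size=(10, 4)).astype('float64'),
                  index=range(10, 20),
                  columns=['col1', 'col2', 'col3', 'col4'])

scaler = StandardScaler()
scaled_features_df = pd.DataFrame(scaler.fit_transform(df.values),
                                  index=df.index, columns=df.columns)

# inverse_transform maps the scaled values back to the original scale
restored = scaler.inverse_transform(scaled_features_df.values)
```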
Edit 2:

Came across the sklearn-pandas package. It's focused on making scikit-learn easier to use with pandas. sklearn-pandas is especially useful when you need to apply more than one type of transformation to column subsets of the DataFrame, a more common scenario. It's documented, but this is how you'd achieve the transformation we just performed.
from sklearn_pandas import DataFrameMapper
mapper = DataFrameMapper([(df.columns, StandardScaler())])
scaled_features = mapper.fit_transform(df.copy())
scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)
import pandas as pd
from sklearn.preprocessing import StandardScaler
df = pd.read_csv('your file here')
ss = StandardScaler()
df_scaled = pd.DataFrame(ss.fit_transform(df),columns = df.columns)
df_scaled will be the "same" DataFrame, only now with the scaled values
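One caveat worth noting (my addition): the snippet above only passes columns=, so a non-default index is silently replaced by a fresh RangeIndex. Passing index=df.index as well preserves it. A small sketch with made-up data:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({'col1': [1.0, 2.0, 3.0], 'col2': [10.0, 20.0, 30.0]},
                  index=[100, 101, 102])  # deliberately non-default index

ss = StandardScaler()

# Without index=, the scaled frame gets a fresh RangeIndex 0..2
df_scaled = pd.DataFrame(ss.fit_transform(df), columns=df.columns)

# Passing index=df.index keeps the original labels
df_scaled_indexed = pd.DataFrame(ss.fit_transform(df),
                                 index=df.index, columns=df.columns)
```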
features = ["col1", "col2", "col3", "col4"]
autoscaler = StandardScaler()
df[features] = autoscaler.fit_transform(df[features])
Worked for me:
from sklearn.preprocessing import StandardScaler
cols = list(train_df_x_num.columns)
scaler = StandardScaler()
train_df_x_num[cols] = scaler.fit_transform(train_df_x_num[cols])
Reassigning back through df.values preserves the index and columns.
df.values[:] = StandardScaler().fit_transform(df)
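An alternative assignment (my variant, not from the answer) that also keeps index and columns but does not depend on .values returning a writable view of the underlying block (which only holds for a homogeneous numeric frame):

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({'col1': [1.0, 2.0, 3.0], 'col2': [4.0, 5.0, 6.0]},
                  index=[10, 11, 12])

# Assigning through df[df.columns] writes the scaled array back in place,
# keeping the original index and column names
df[df.columns] = StandardScaler().fit_transform(df)
```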
This is what I did:
X.Column1 = StandardScaler().fit_transform(X.Column1.values.reshape(-1, 1))
from neuraxle.pipeline import Pipeline
from neuraxle.base import NonFittableMixin, BaseStep
class PandasToNumpy(NonFittableMixin, BaseStep):
    def transform(self, data_inputs, expected_outputs):
        return data_inputs.values

pipeline = Pipeline([
    PandasToNumpy(),
    StandardScaler(),
])
Then, you proceed as expected:
features = df[["col1", "col2", "col3", "col4"]] # ... your df data
pipeline, scaled_features = pipeline.fit_transform(features)
You could even do it with a wrapper like this:
from neuraxle.pipeline import Pipeline
from neuraxle.base import MetaStepMixin, BaseStep
class PandasValuesChangerOf(MetaStepMixin, BaseStep):
    def transform(self, data_inputs, expected_outputs):
        new_data_inputs = self.wrapped.transform(data_inputs.values)
        new_data_inputs = self._merge(data_inputs, new_data_inputs)
        return new_data_inputs

    def fit_transform(self, data_inputs, expected_outputs):
        self.wrapped, new_data_inputs = self.wrapped.fit_transform(data_inputs.values)
        new_data_inputs = self._merge(data_inputs, new_data_inputs)
        return self, new_data_inputs

    def _merge(self, data_inputs, new_data_inputs):
        new_data_inputs = pd.DataFrame(
            new_data_inputs,
            index=data_inputs.index,
            columns=data_inputs.columns
        )
        return new_data_inputs
df_scaler = PandasValuesChangerOf(StandardScaler())
Then, you proceed as expected:
features = df[["col1", "col2", "col3", "col4"]] # ... your df data
df_scaler, scaled_features = df_scaler.fit_transform(features)
This worked for me with MinMaxScaler, for getting the array values back into the original DataFrame. It should work with StandardScaler as well.
data_scaled = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)
where data_scaled is the new DataFrame, scaled_features is the array after normalization, and df is the original DataFrame for which we need the index and columns back.
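A self-contained version of that recipe (my sketch, with made-up data, using MinMaxScaler as the answer suggests):

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({'col1': [1.0, 5.0, 9.0], 'col2': [10.0, 20.0, 40.0]},
                  index=['a', 'b', 'c'])

# fit_transform returns a numpy array with each column scaled to [0, 1]
scaled_features = MinMaxScaler().fit_transform(df)

# rebuild the DataFrame with the original index and column names
data_scaled = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)
```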
As of sklearn version 1.2, estimators can return a DataFrame that preserves the column names. set_output can be configured per estimator by calling the set_output method, or globally for every estimator by setting set_config(transform_output="pandas").

See Release Highlights for scikit-learn 1.2 - Pandas output with set_output API

set_output() example:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().set_output(transform="pandas")
set_config() example:
from sklearn import set_config
set_config(transform_output="pandas")
You can try this code; it will give you a DataFrame with the index:
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_boston  # boston housing dataset
                                          # (note: removed in scikit-learn 1.2)

dt = load_boston().data
col = load_boston().feature_names

# Make a dataframe
df = pd.DataFrame(data=dt, columns=col)

# define a method to scale data, looping thru the columns, and passing a scaler
def scale_data(data, columns, scaler):
    for col in columns:
        data[col] = scaler.fit_transform(data[col].values.reshape(-1, 1))
    return data

# specify a scaler, and call the method on boston data
scaler = StandardScaler()
df_scaled = scale_data(df, col, scaler)

# view first 10 rows of the scaled dataframe
df_scaled[0:10]
You can assign the numpy array directly back into the DataFrame using slicing.
from sklearn.preprocessing import StandardScaler
features = df[["col1", "col2", "col3", "col4"]]
autoscaler = StandardScaler()
features[:] = autoscaler.fit_transform(features.values)
import pandas as pd
from sklearn.preprocessing import StandardScaler

class StandardScalerDF:
    def __init__(self, with_mean: bool = True, with_std: bool = True):
        self.with_mean = with_mean
        self.with_std = with_std

    def fit(self, data):
        self.scaler = StandardScaler(copy=True, with_mean=self.with_mean,
                                     with_std=self.with_std).fit(data)
        return self.scaler

    def transform(self, data):
        return pd.DataFrame(self.scaler.transform(data), columns=data.columns,
                            index=data.index)

# example:
obj = StandardScalerDF()
obj.fit(data_as_df)
scaled_data_df = obj.transform(data_as_df)  # data type: DataFrame