
How to use sklearn fit_transform with pandas and return dataframe instead of numpy array?

I want to apply scaling (using StandardScaler() from sklearn.preprocessing) to a pandas dataframe. The following code returns a numpy array, so I lose all the column names and indices. This is not what I want.

features = df[["col1", "col2", "col3", "col4"]]
autoscaler = StandardScaler()
features = autoscaler.fit_transform(features)

A "solution" I found online is:

features = features.apply(lambda x: autoscaler.fit_transform(x))

It appears to work, but leads to a DeprecationWarning:

/usr/lib/python3.5/site-packages/sklearn/preprocessing/data.py:583: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.

I therefore tried:

features = features.apply(lambda x: autoscaler.fit_transform(x.reshape(-1, 1)))

But this gives:

Traceback (most recent call last):
  File "./analyse.py", line 91, in <module>
    features = features.apply(lambda x: autoscaler.fit_transform(x.reshape(-1, 1)))
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 3972, in apply
    return self._apply_standard(f, axis, reduce=reduce)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 4081, in _apply_standard
    result = self._constructor(data=results, index=index)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 226, in __init__
    mgr = self._init_dict(data, index, columns, dtype=dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 363, in _init_dict
    dtype=dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 5163, in _arrays_to_mgr
    arrays = _homogenize(arrays, index, dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 5477, in _homogenize
    raise_cast_failure=False)
  File "/usr/lib/python3.5/site-packages/pandas/core/series.py", line 2885, in _sanitize_array
    raise Exception('Data must be 1-dimensional')
Exception: Data must be 1-dimensional

How do I apply scaling to the pandas dataframe, leaving the dataframe intact? Without copying the data, if possible.

You could convert the DataFrame to a numpy array using as_matrix(). Example on a random dataset:

Edit: changed as_matrix() to values (it doesn't change the result), per the last sentence of the as_matrix() docs:

Generally, it is recommended to use '.values'.

import pandas as pd
import numpy as np #for the random integer example
df = pd.DataFrame(np.random.randint(0.0,100.0,size=(10,4)),
              index=range(10,20),
              columns=['col1','col2','col3','col4'],
              dtype='float64')

Note, indices are 10-19:

In [14]: df.head(3)
Out[14]:
    col1  col2  col3  col4
10     3    38    86    65
11    98     3    66    68
12    88    46    35    68

Now fit_transform the DataFrame to get the scaled_features array:

from sklearn.preprocessing import StandardScaler
scaled_features = StandardScaler().fit_transform(df.values)

In [15]: scaled_features[:3,:] #lost the indices
Out[15]:
array([[-1.89007341,  0.05636005,  1.74514417,  0.46669562],
       [ 1.26558518, -1.35264122,  0.82178747,  0.59282958],
       [ 0.93341059,  0.37841748, -0.60941542,  0.59282958]])

Assign the scaled data to a DataFrame (note: use the index and columns keyword arguments to keep your original indices and column names):

scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)

In [17]:  scaled_features_df.head(3)
Out[17]:
        col1      col2      col3      col4
10 -1.890073  0.056360  1.745144  0.466696
11  1.265585 -1.352641  0.821787  0.592830
12  0.933411  0.378417 -0.609415  0.592830

Edit 2:

Came across the sklearn-pandas package. It's focused on making scikit-learn easier to use with pandas. sklearn-pandas is especially useful when you need to apply more than one type of transformation to column subsets of the DataFrame, a more common scenario. It's documented, but this is how you'd achieve the transformation we just performed.

from sklearn_pandas import DataFrameMapper

mapper = DataFrameMapper([(df.columns, StandardScaler())])
scaled_features = mapper.fit_transform(df.copy(), 4)
scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)

import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('your file here')
ss = StandardScaler()
df_scaled = pd.DataFrame(ss.fit_transform(df), columns=df.columns)

The df_scaled will be the 'same' dataframe, only now with the scaled values.
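One caveat, shown as a sketch with made-up in-memory data standing in for the CSV: wrapping the scaled array in pd.DataFrame without an index argument resets the index to a fresh 0..n-1 RangeIndex, so pass index=df.index if the original index matters:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# stand-in for pd.read_csv('your file here')
df = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [10.0, 20.0, 30.0]},
                  index=[100, 101, 102])

ss = StandardScaler()
# without index=df.index, the result would get a fresh RangeIndex 0..2
df_scaled = pd.DataFrame(ss.fit_transform(df), columns=df.columns, index=df.index)

print(df_scaled.index.tolist())  # [100, 101, 102]
```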

features = ["col1", "col2", "col3", "col4"]
autoscaler = StandardScaler()
df[features] = autoscaler.fit_transform(df[features])
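A quick sanity check of this approach, with made-up data and a string index: assigning the scaled array back into the selected columns leaves the index and column labels untouched:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame(np.arange(20, dtype="float64").reshape(5, 4),
                  columns=["col1", "col2", "col3", "col4"],
                  index=list("abcde"))

features = ["col1", "col2", "col3", "col4"]
autoscaler = StandardScaler()
df[features] = autoscaler.fit_transform(df[features])

print(df.index.tolist())  # ['a', 'b', 'c', 'd', 'e']
```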

Works for me:

from sklearn.preprocessing import StandardScaler

cols = list(train_df_x_num.columns)
scaler = StandardScaler()
train_df_x_num[cols] = scaler.fit_transform(train_df_x_num[cols])

Reassigning back into df.values preserves the index and columns.

df.values[:] = StandardScaler().fit_transform(df)
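Note this relies on df.values being a view of the underlying array, which only holds when every column shares a single numeric dtype (and copy-on-write is not enabled); with mixed dtypes, .values is a copy and the assignment is silently lost. A sketch with made-up data:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# all-float64 frame, so .values is a view into the underlying block
df = pd.DataFrame(np.arange(12, dtype="float64").reshape(4, 3),
                  columns=["x", "y", "z"], index=[5, 6, 7, 8])

df.values[:] = StandardScaler().fit_transform(df)

print(df.index.tolist())  # [5, 6, 7, 8]
```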

This is what I did:

X.Column1 = StandardScaler().fit_transform(X.Column1.values.reshape(-1, 1))
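A self-contained sketch of this single-column approach, with a made-up frame; bracket assignment (X["Column1"] = ...) is safer than attribute assignment, and an explicit .ravel() flattens the (n, 1) result back to the 1-D shape a column expects:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

X = pd.DataFrame({"Column1": [1.0, 2.0, 3.0, 4.0],
                  "Column2": [5.0, 6.0, 7.0, 8.0]})

# reshape(-1, 1) gives sklearn the 2-D input it expects;
# ravel() flattens the (4, 1) output back to 1-D for column assignment
X["Column1"] = StandardScaler().fit_transform(
    X["Column1"].values.reshape(-1, 1)).ravel()

print(abs(X["Column1"].mean()) < 1e-9)  # True
```

Only Column1 is touched; Column2 keeps its original values.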

You can mix multiple data types in scikit-learn using Neuraxle:

Option 1: discard the row names and column names

from neuraxle.pipeline import Pipeline
from neuraxle.base import NonFittableMixin, BaseStep

class PandasToNumpy(NonFittableMixin, BaseStep):
    def transform(self, data_inputs, expected_outputs): 
        return data_inputs.values

pipeline = Pipeline([
    PandasToNumpy(),
    StandardScaler(),
])

Then, you proceed as you intended:

features = df[["col1", "col2", "col3", "col4"]]  # ... your df data
pipeline, scaled_features = pipeline.fit_transform(features)

Option 2: keep the original column names and row names

You could even do this with a wrapper, as such:

from neuraxle.pipeline import Pipeline
from neuraxle.base import MetaStepMixin, BaseStep

class PandasValuesChangerOf(MetaStepMixin, BaseStep):
    def transform(self, data_inputs, expected_outputs): 
        new_data_inputs = self.wrapped.transform(data_inputs.values)
        new_data_inputs = self._merge(data_inputs, new_data_inputs)
        return new_data_inputs

    def fit_transform(self, data_inputs, expected_outputs): 
        self.wrapped, new_data_inputs = self.wrapped.fit_transform(data_inputs.values)
        new_data_inputs = self._merge(data_inputs, new_data_inputs)
        return self, new_data_inputs

    def _merge(self, data_inputs, new_data_inputs): 
        new_data_inputs = pd.DataFrame(
            new_data_inputs,
            index=data_inputs.index,
            columns=data_inputs.columns
        )
        return new_data_inputs

df_scaler = PandasValuesChangerOf(StandardScaler())

Then, you proceed as you intended:

features = df[["col1", "col2", "col3", "col4"]]  # ... your df data
df_scaler, scaled_features = df_scaler.fit_transform(features)

This worked with MinMaxScaler for getting the array values back into the original dataframe. It should work with StandardScaler as well.

data_scaled = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)

where data_scaled is the new data frame, scaled_features is the array after normalization, and df is the original dataframe from which we need the index and columns back.

Since sklearn version 1.2, estimators can return a DataFrame keeping the column names. Output can be configured per estimator by calling the set_output method, or globally by setting set_config(transform_output="pandas").

See Release Highlights for scikit-learn 1.2 - Pandas output with set_output API

Example for set_output():

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().set_output(transform="pandas")

Example for set_config():

from sklearn import set_config
set_config(transform_output="pandas")

You can try this code; it will give you a DataFrame with indexes.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_boston # boston housing dataset (removed in scikit-learn 1.2)

dt= load_boston().data
col= load_boston().feature_names

# Make a dataframe
df = pd.DataFrame(data=dt, columns=col)

# define a method to scale data, looping thru the columns, and passing a scaler
def scale_data(data, columns, scaler):
    for col in columns:
        data[col] = scaler.fit_transform(data[col].values.reshape(-1, 1))
    return data

# specify a scaler, and call the method on boston data
scaler = StandardScaler()
df_scaled = scale_data(df, col, scaler)

# view first 10 rows of the scaled dataframe
df_scaled[0:10]

You could directly assign a numpy array to a data frame by using slicing.

from sklearn.preprocessing import StandardScaler
features = df[["col1", "col2", "col3", "col4"]]
autoscaler = StandardScaler()
features[:] = autoscaler.fit_transform(features.values)

class StandardScalerDF:

    def __init__(self, with_mean: bool = True, with_std: bool = True):
        self.with_mean = with_mean
        self.with_std = with_std
        
    def fit(self, data):
        self.scaler = StandardScaler(copy=True, with_mean=self.with_mean, 
                                     with_std=self.with_std).fit(data)
        return self.scaler
    
    def transform(self, data):
        return pd.DataFrame(self.scaler.transform(data), columns=data.columns,
                            index=data.index)

#example: 
obj = StandardScalerDF()
obj.fit(data_as_df)
scaled_data_df = obj.transform(data_as_df) #data type: DataFrame
