Apply MinMaxScaler on multiple columns in PySpark

I want to apply PySpark's MinMaxScaler to multiple columns of a PySpark data frame df. So far, I only know how to apply it to a single column, e.g. x.

import pandas as pd
from pyspark.ml.feature import MinMaxScaler

pdf = pd.DataFrame({'x':range(3), 'y':[1,2,5], 'z':[100,200,1000]})
df = spark.createDataFrame(pdf)

scaler = MinMaxScaler(inputCol="x", outputCol="x")
scalerModel = scaler.fit(df)
scaledData = scalerModel.transform(df)

What if I have 100 columns? Is there any way to do min-max scaling for many columns in PySpark?

Update:

Also, how do I apply MinMaxScaler to integer or double values? It throws the following error:

java.lang.IllegalArgumentException: requirement failed: Column length must be of type struct<type:tinyint,size:int,indices:array<int>,values:array<double>> but was actually int.

Question 1:

How to change your example so it runs properly: you need to assemble the data into a vector column for the transformer to work.

import pandas as pd
from pyspark.ml.feature import MinMaxScaler, VectorAssembler
from pyspark.ml import Pipeline

pdf = pd.DataFrame({'x':range(3), 'y':[1,2,5], 'z':[100,200,1000]})
df = spark.createDataFrame(pdf)

assembler = VectorAssembler(inputCols=["x"], outputCol="x_vec")
scaler = MinMaxScaler(inputCol="x_vec", outputCol="x_scaled")
pipeline = Pipeline(stages=[assembler, scaler])
scalerModel = pipeline.fit(df)
scaledData = scalerModel.transform(df)

Question 2:

To run MinMaxScaler on multiple columns, you can use a Pipeline that receives a list of transformations prepared with a list comprehension:

from pyspark.ml import Pipeline
from pyspark.ml.feature import MinMaxScaler, VectorAssembler
columns_to_scale = ["x", "y", "z"]
assemblers = [VectorAssembler(inputCols=[col], outputCol=col + "_vec") for col in columns_to_scale]
scalers = [MinMaxScaler(inputCol=col + "_vec", outputCol=col + "_scaled") for col in columns_to_scale]
pipeline = Pipeline(stages=assemblers + scalers)
scalerModel = pipeline.fit(df)
scaledData = scalerModel.transform(df)

Check this example pipeline in the official documentation.

Eventually, you will end up with results in this format:

>>> scaledData.printSchema() 
root
 |-- x: long (nullable = true)
 |-- y: long (nullable = true)
 |-- z: long (nullable = true)
 |-- x_vec: vector (nullable = true)
 |-- y_vec: vector (nullable = true)
 |-- z_vec: vector (nullable = true)
 |-- x_scaled: vector (nullable = true)
 |-- y_scaled: vector (nullable = true)
 |-- z_scaled: vector (nullable = true)

>>> scaledData.show()
+---+---+----+-----+-----+--------+--------+--------+--------------------+
|  x|  y|   z|x_vec|y_vec|   z_vec|x_scaled|y_scaled|            z_scaled|
+---+---+----+-----+-----+--------+--------+--------+--------------------+
|  0|  1| 100|[0.0]|[1.0]| [100.0]|   [0.0]|   [0.0]|               [0.0]|
|  1|  2| 200|[1.0]|[2.0]| [200.0]|   [0.5]|  [0.25]|[0.1111111111111111]|
|  2|  5|1000|[2.0]|[5.0]|[1000.0]|   [1.0]|   [1.0]|               [1.0]|
+---+---+----+-----+-----+--------+--------+--------+--------------------+

Extra Post-processing:

You can recover the columns under their original names with some post-processing. For example:

from pyspark.sql import functions as f
names = {x + "_scaled": x for x in columns_to_scale}
scaledData = scaledData.select([f.col(c).alias(names[c]) for c in names.keys()])

The output will be:

scaledData.show()
+------+-----+--------------------+
|     y|    x|                   z|
+------+-----+--------------------+
| [0.0]|[0.0]|               [0.0]|
|[0.25]|[0.5]|[0.1111111111111111]|
| [1.0]|[1.0]|               [1.0]|
+------+-----+--------------------+
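If you would rather have plain double columns instead of single-element vectors, Spark 3.0+ provides vector_to_array in pyspark.ml.functions. A minimal sketch applied to the renamed scaledData above (flatData is just an illustrative name):

# A minimal sketch, assuming Spark 3.0+ is available:
# convert each single-element scaled vector back to a plain double column.
from pyspark.sql import functions as f
from pyspark.ml.functions import vector_to_array

flatData = scaledData.select(
    [vector_to_array(f.col(c)).getItem(0).alias(c) for c in columns_to_scale]
)
flatData.show()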

You could use a single MinMaxScaler instance on a "vector-assembled" set of features, rather than creating one MinMaxScaler per column you want to transform (scale, in this case).

from pyspark.ml.feature import MinMaxScaler
from pyspark.ml.feature import VectorAssembler

#1. Your original dataset
#pdf = pd.DataFrame({'x':range(3), 'y':[1,2,5], 'z':[100,200,1000]})
#df = spark.createDataFrame(pdf)

df = spark.createDataFrame([(0, 10.0, 0.1), (1, 1.0, 0.20), (2, 1.0, 0.9)],["x", "y", "z"])

df.show()
+---+----+---+
|  x|   y|  z|
+---+----+---+
|  0|10.0|0.1|
|  1| 1.0|0.2|
|  2| 1.0|0.9|
+---+----+---+

#2. Vector-assembled set of features
# (assemble only the columns you want to min-max scale)
assembler = VectorAssembler(inputCols=["x", "y", "z"],
                            outputCol="features")
output = assembler.transform(df)

output.show()

+---+----+---+--------------+
|  x|   y|  z|      features|
+---+----+---+--------------+
|  0|10.0|0.1|[0.0,10.0,0.1]|
|  1| 1.0|0.2| [1.0,1.0,0.2]|
|  2| 1.0|0.9| [2.0,1.0,0.9]|
+---+----+---+--------------+

#3. Applying MinMaxScaler to your assembled features 
scaler = MinMaxScaler(inputCol="features", outputCol="scaledFeatures")
# rescale each feature to range [min, max].
scaledData = scaler.fit(output).transform(output)
scaledData.show()

+---+----+---+--------------+---------------+
|  x|   y|  z|      features| scaledFeatures|
+---+----+---+--------------+---------------+
|  0|10.0|0.1|[0.0,10.0,0.1]|  [0.0,1.0,0.0]|
|  1| 1.0|0.2| [1.0,1.0,0.2]|[0.5,0.0,0.125]|
|  2| 1.0|0.9| [2.0,1.0,0.9]|  [1.0,0.0,1.0]|
+---+----+---+--------------+---------------+
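If you need an output range other than the default [0, 1], MinMaxScaler also exposes min and max parameters. A minimal sketch rescaling each assembled feature to [-1, 1] instead:

# A minimal sketch: pass custom bounds via MinMaxScaler's min/max parameters.
scaler = MinMaxScaler(min=-1.0, max=1.0,
                      inputCol="features", outputCol="scaledFeatures")
scaledData = scaler.fit(output).transform(output)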

Hope this helps.
