Join two Spark mllib pipelines together
I have two separate DataFrames, each with several different processing stages that I handle with mllib transformers in a pipeline.
I now want to join these two pipelines together, keeping the features (columns) from each DataFrame. Scikit-learn has the FeatureUnion class for this, but I can't seem to find an mllib equivalent.
I could add a custom transformer stage at the end of one pipeline that takes the DataFrame produced by the other pipeline as an attribute and joins it inside the transform method, but that seems messy.
A Pipeline or a PipelineModel is a valid PipelineStage, so they can be combined in a single Pipeline. For example:
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler

df = spark.createDataFrame([
    (1.0, 0, 1, 1, 0),
    (0.0, 1, 0, 0, 1)
], ("label", "x1", "x2", "x3", "x4"))

pipeline1 = Pipeline(stages=[
    VectorAssembler(inputCols=["x1", "x2"], outputCol="features1")
])

pipeline2 = Pipeline(stages=[
    VectorAssembler(inputCols=["x3", "x4"], outputCol="features2")
])
You can combine the Pipelines:
Pipeline(stages=[
    pipeline1, pipeline2,
    VectorAssembler(inputCols=["features1", "features2"], outputCol="features")
]).fit(df).transform(df)
+-----+---+---+---+---+---------+---------+-----------------+
|label|x1 |x2 |x3 |x4 |features1|features2|features |
+-----+---+---+---+---+---------+---------+-----------------+
|1.0 |0 |1 |1 |0 |[0.0,1.0]|[1.0,0.0]|[0.0,1.0,1.0,0.0]|
|0.0 |1 |0 |0 |1 |[1.0,0.0]|[0.0,1.0]|[1.0,0.0,0.0,1.0]|
+-----+---+---+---+---+---------+---------+-----------------+
or pre-fitted PipelineModels:
model1 = pipeline1.fit(df)
model2 = pipeline2.fit(df)
Pipeline(stages=[
    model1, model2,
    VectorAssembler(inputCols=["features1", "features2"], outputCol="features")
]).fit(df).transform(df)
+-----+---+---+---+---+---------+---------+-----------------+
|label| x1| x2| x3| x4|features1|features2| features|
+-----+---+---+---+---+---------+---------+-----------------+
| 1.0| 0| 1| 1| 0|[0.0,1.0]|[1.0,0.0]|[0.0,1.0,1.0,0.0]|
| 0.0| 1| 0| 0| 1|[1.0,0.0]|[0.0,1.0]|[1.0,0.0,0.0,1.0]|
+-----+---+---+---+---+---------+---------+-----------------+
So the approach I would recommend is to join the data upfront, and fit and transform the whole DataFrame.