
Spark: Match columns from two dataframes

I have a dataframe in the following format:

+---+---+------+---+
| sp|sp2|colour|sp3|
+---+---+------+---+
|  0|  1|     1|  0|
|  1|  0|     0|  1|
|  0|  0|     1|  0|
+---+---+------+---+

Another dataframe contains the coefficient for each column of the first dataframe, for example:

+------+------+---------+------+
| CE_sp|CE_sp2|CE_colour|CE_sp3|
+------+------+---------+------+
|  0.94|  0.31|     0.11|  0.72|
+------+------+---------+------+

Now I want to add a column to the first dataframe that is computed by applying the coefficients from the second dataframe.

For example:

+---+---+------+---+-----+
| sp|sp2|colour|sp3|Score|
+---+---+------+---+-----+
|  0|  1|     1|  0| 0.42|
|  1|  0|     0|  1| 1.66|
|  0|  0|     1|  0| 0.11|
+---+---+------+---+-----+

r -> row of first dataframe
score = r(0)*CE_sp + r(1)*CE_sp2 + r(2)*CE_colour + r(3)*CE_sp3
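
For the first row, for instance: 0*0.94 + 1*0.31 + 1*0.11 + 0*0.72 = 0.42.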

There can be n columns, and the column order can be different.

Thanks in advance!!!

Quick and simple:

import org.apache.spark.sql.functions.col

val df = Seq(
  (0, 1, 1, 0), (1, 0, 0, 1), (0, 0, 1, 0)
).toDF("sp","sp2", "colour", "sp3")

val coefs = Map("sp" -> 0.94, "sp2" -> 0.31, "colour" -> 0.11, "sp3" -> 0.72)
val score = df.columns.map(
  c => col(c) * coefs.getOrElse(c, 0.0)).reduce(_ + _)

df.withColumn("score", score)
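
If you would rather not hard-code the coefficients, a possible sketch (assuming the coefficient dataframe, here called coefDf, has a single row and its columns are named "CE_" plus the original column name, as in the question) reads them into the Map first:

// coefDf is the hypothetical one-row coefficient dataframe from the question
val coefRow = coefDf.first()
val coefsFromDf = coefDf.columns.map(
  c => c.stripPrefix("CE_") -> coefRow.getAs[Double](c)).toMap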

The same thing in PySpark:

from pyspark.sql.functions import col

df = sc.parallelize([
    (0, 1, 1, 0), (1, 0, 0, 1), (0, 0, 1, 0)
]).toDF(["sp","sp2", "colour", "sp3"])

coefs = {"sp": 0.94, "sp2": 0.31, "colour": 0.11, "sp3": 0.72}
df.withColumn("score", sum(col(c) * coefs.get(c, 0) for c in df.columns))

I believe there are many ways to accomplish what you are trying to do. In all cases you don't need that second DataFrame, as I said in the comments.

Here is one way to do it:

import org.apache.spark.ml.feature.{ElementwiseProduct, VectorAssembler}
import org.apache.spark.mllib.linalg.{Vectors,Vector => MLVector}

val df = Seq((0, 1, 1, 0), (1, 0, 0, 1), (0, 0, 1, 0)).toDF("sp", "sp2", "colour", "sp3")

// Your coefficient represents a dense Vector
val coeffSp = 0.94
val coeffSp2 = 0.31
val coeffColour = 0.11
val coeffSp3 = 0.72

val weightVectors = Vectors.dense(Array(coeffSp, coeffSp2, coeffColour, coeffSp3))

// You can assemble the features with VectorAssembler
val assembler = new VectorAssembler()
  .setInputCols(df.columns) // since you need to compute on all your columns
  .setOutputCol("features")

// Once these features are assembled we can perform an element-wise product with the weight vector
val output = assembler.transform(df)
val transformer = new ElementwiseProduct()
  .setScalingVec(weightVectors)
  .setInputCol("features")
  .setOutputCol("weightedFeatures")

// Create a UDF to sum the weighted vector's values
import org.apache.spark.sql.functions.udf
def score = udf((score: MLVector) => { score.toDense.toArray.sum })

// Apply the UDF on the weightedFeatures
val scores = transformer.transform(output).withColumn("score",score('weightedFeatures))
scores.show
// +---+---+------+---+-----------------+-------------------+-----+
// | sp|sp2|colour|sp3|         features|   weightedFeatures|score|
// +---+---+------+---+-----------------+-------------------+-----+
// |  0|  1|     1|  0|[0.0,1.0,1.0,0.0]|[0.0,0.31,0.11,0.0]| 0.42|
// |  1|  0|     0|  1|[1.0,0.0,0.0,1.0]|[0.94,0.0,0.0,0.72]| 1.66|
// |  0|  0|     1|  0|    (4,[2],[1.0])|     (4,[2],[0.11])| 0.11|
// +---+---+------+---+-----------------+-------------------+-----+
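
Note that this uses the old mllib vector type. On Spark 2.x, where VectorAssembler and ElementwiseProduct work with org.apache.spark.ml.linalg vectors, the UDF would have to be typed against the new Vector class instead; a minimal sketch, assuming the DataFrame-based API:

import org.apache.spark.ml.linalg.{Vector => NewMLVector}
import org.apache.spark.sql.functions.udf

// same summing logic, but typed against the ml (not mllib) Vector
val score2x = udf((v: NewMLVector) => v.toArray.sum)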

I hope this helps. Don't hesitate to ask if you have more questions.

Here is a simple solution:

scala> df_wght.show
+-----+------+---------+------+
|ce_sp|ce_sp2|ce_colour|ce_sp3|
+-----+------+---------+------+
|    1|     2|        3|     4|
+-----+------+---------+------+

scala> df.show
+---+---+------+---+
| sp|sp2|colour|sp3|
+---+---+------+---+
|  0|  1|     1|  0|
|  1|  0|     0|  1|
|  0|  0|     1|  0|
+---+---+------+---+

Then we can just do a simple cross join and cross product.

val scored = df.join(df_wght).selectExpr("(sp*ce_sp + sp2*ce_sp2 + colour*ce_colour + sp3*ce_sp3) as final_score")

Output:

scala> scored.show
+-----------+                                                                   
|final_score|
+-----------+
|          5|
|          5|
|          3|
+-----------+
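
If the column set can change, a possible variant (a sketch, assuming the weight columns are always named "ce_" plus the corresponding data column name) builds the expression from df.columns and keeps the original columns alongside the score:

// build "(sp * ce_sp + sp2 * ce_sp2 + ...) as final_score" from the column names
val scoreExpr = df.columns.map(c => s"$c * ce_$c").mkString("(", " + ", ") as final_score")
val scoredAll = df.join(df_wght).selectExpr((df.columns :+ scoreExpr): _*)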
