
Spark: Match columns from two dataframes

I have a dataframe in the format below:

+---+---+------+---+
| sp|sp2|colour|sp3|
+---+---+------+---+
|  0|  1|     1|  0|
|  1|  0|     0|  1|
|  0|  0|     1|  0|
+---+---+------+---+

Another dataframe contains the coefficients for each column of the first dataframe, for example:

+------+------+---------+------+
| CE_sp|CE_sp2|CE_colour|CE_sp3|
+------+------+---------+------+
|  0.94|  0.31|     0.11|  0.72|
+------+------+---------+------+

Now I want to add a column to the first dataframe, computed by applying the coefficients from the second dataframe.

For example:

+---+---+------+---+-----+
| sp|sp2|colour|sp3|Score|
+---+---+------+---+-----+
|  0|  1|     1|  0| 0.42|
|  1|  0|     0|  1| 1.66|
|  0|  0|     1|  0| 0.11|
+---+---+------+---+-----+

That is, for each row r of the first dataframe:

score = r(0)*CE_sp + r(1)*CE_sp2 + r(2)*CE_colour + r(3)*CE_sp3
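
For the first row (0, 1, 1, 0), that works out to:

score = 0*0.94 + 1*0.31 + 1*0.11 + 0*0.72 = 0.42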

There can be any number n of columns, and the column order can differ.

Thanks in advance!

Quick and simple:

import org.apache.spark.sql.functions.col
import spark.implicits._ // for toDF (already in scope in spark-shell)

val df = Seq(
  (0, 1, 1, 0), (1, 0, 0, 1), (0, 0, 1, 0)
).toDF("sp", "sp2", "colour", "sp3")

val coefs = Map("sp" -> 0.94, "sp2" -> 0.31, "colour" -> 0.11, "sp3" -> 0.72)

// Multiply each column by its coefficient and fold the products into one expression
val score = df.columns.map(
  c => col(c) * coefs.getOrElse(c, 0.0)).reduce(_ + _)

df.withColumn("score", score)
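
For the sample data and the question's coefficients, this should yield something like the following (up to floating-point rounding):

df.withColumn("score", score).show()
// +---+---+------+---+-----+
// | sp|sp2|colour|sp3|score|
// +---+---+------+---+-----+
// |  0|  1|     1|  0| 0.42|
// |  1|  0|     0|  1| 1.66|
// |  0|  0|     1|  0| 0.11|
// +---+---+------+---+-----+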

And the same thing in PySpark:

from pyspark.sql.functions import col

df = spark.createDataFrame(
    [(0, 1, 1, 0), (1, 0, 0, 1), (0, 0, 1, 0)],
    ["sp", "sp2", "colour", "sp3"])

coefs = {"sp": 0.94, "sp2": 0.31, "colour": 0.11, "sp3": 0.72}
# Python's builtin sum folds the weighted columns into a single expression
df.withColumn("score", sum(col(c) * coefs.get(c, 0) for c in df.columns))

I believe there are many ways to accomplish what you are trying to do. In all cases, you don't need that second DataFrame, as I said in the comments.

Here is one way:

import org.apache.spark.ml.feature.{ElementwiseProduct, VectorAssembler}
// Note: on Spark 2.x+ the ml API uses org.apache.spark.ml.linalg instead of mllib.linalg
import org.apache.spark.mllib.linalg.{Vectors, Vector => MLVector}

val df = Seq((0, 1, 1, 0), (1, 0, 0, 1), (0, 0, 1, 0)).toDF("sp", "sp2", "colour", "sp3")

// Your coefficients represent a dense vector
val coeffSp = 0.94
val coeffSp2 = 0.31
val coeffColour = 0.11
val coeffSp3 = 0.72

val weightVectors = Vectors.dense(Array(coeffSp, coeffSp2, coeffColour, coeffSp3))

// You can assemble the features with VectorAssembler
val assembler = new VectorAssembler()
  .setInputCols(df.columns) // since you need to compute on all your columns
  .setOutputCol("features")

// Once the features are assembled, we can perform an element-wise product with the weight vector
val output = assembler.transform(df)
val transformer = new ElementwiseProduct()
  .setScalingVec(weightVectors)
  .setInputCol("features")
  .setOutputCol("weightedFeatures")

// Create a UDF to sum the values of the weighted vector
import org.apache.spark.sql.functions.udf
val score = udf((v: MLVector) => v.toDense.toArray.sum)

// Apply the UDF on the weightedFeatures
val scores = transformer.transform(output).withColumn("score",score('weightedFeatures))
scores.show
// +---+---+------+---+-----------------+-------------------+-----+
// | sp|sp2|colour|sp3|         features|   weightedFeatures|score|
// +---+---+------+---+-----------------+-------------------+-----+
// |  0|  1|     1|  0|[0.0,1.0,1.0,0.0]|[0.0,0.31,0.11,0.0]| 0.42|
// |  1|  0|     0|  1|[1.0,0.0,0.0,1.0]|[0.94,0.0,0.0,0.72]| 1.66|
// |  0|  0|     1|  0|    (4,[2],[1.0])|     (4,[2],[0.11])| 0.11|
// +---+---+------+---+-----------------+-------------------+-----+
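
If you only want the original columns plus the score, the intermediate vector columns can be dropped afterwards:

scores.drop("features").drop("weightedFeatures").show()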

I hope this helps. Don't hesitate to ask if you have more questions.

Here is a simple solution:

scala> df_wght.show
+-----+------+---------+------+
|ce_sp|ce_sp2|ce_colour|ce_sp3|
+-----+------+---------+------+
|    1|     2|        3|     4|
+-----+------+---------+------+

scala> df.show
+---+---+------+---+
| sp|sp2|colour|sp3|
+---+---+------+---+
|  0|  1|     1|  0|
|  1|  0|     0|  1|
|  0|  0|     1|  0|
+---+---+------+---+

Then we can just do a simple cross join and compute the weighted sum:

val scored = df.crossJoin(df_wght).selectExpr("(sp*ce_sp + sp2*ce_sp2 + colour*ce_colour + sp3*ce_sp3) as final_score")

The output:

scala> scored.show
+-----------+                                                                   
|final_score|
+-----------+
|          5|
|          5|
|          3|
+-----------+
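
Since there can be n columns in any order, the same expression can also be built dynamically from the first dataframe's column names. A sketch, assuming every coefficient column is the matching feature name prefixed with "ce_":

// Build "sp * ce_sp + sp2 * ce_sp2 + ..." from df's columns
val expr = df.columns.map(c => s"$c * ce_$c").mkString(" + ")
val scored = df.crossJoin(df_wght)
  .selectExpr((df.columns :+ s"($expr) as final_score"): _*)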
