
Spark: reduce/aggregate by key

I'm new to Spark and Scala, so I don't know what to call this kind of problem (which makes it pretty hard to search for).

I have data of the following structure:

[(date1, (name1, 1)), (date1, (name1, 1)), (date1, (name2, 1)), (date2, (name3, 1))]

Somehow, this has to be reduced/aggregated to:

[(date1, [(name1, 2), (name2, 1)]), (date2, [(name3, 1)])]

I know how to do a reduceByKey on a list of pairs, but this particular problem is a mystery to me.

Thanks in advance!

Using my own sample data, here it is step by step:

// Sample data: (date, (name, count)) pairs, in 2 partitions
val rdd1 = sc.makeRDD(Array( ("d1",("A",1)), ("d1",("A",1)), ("d1",("B",1)), ("d2",("E",1)) ), 2)
// Re-key by (date, name) so the counts for each pair can be combined
val rdd2 = rdd1.map(x => ((x._1, x._2._1), x._2._2))
// Collect all counts per (date, name) key
val rdd3 = rdd2.groupByKey
// Sum the counts per (date, name)
val rdd4 = rdd3.map {
   case (key, nums) => (key, nums.sum)
}
// Re-key by date and group the (name, count) pairs
val rdd5 = rdd4.map(x => (x._1._1, (x._1._2, x._2))).groupByKey
rdd5.collect

This returns:

res28: Array[(String, Iterable[(String, Int)])] = Array((d2,CompactBuffer((E,1))), (d1,CompactBuffer((A,2), (B,1))))

A better way, which avoids the first groupByKey (groupByKey shuffles every raw value across the network before anything is summed), is the following:

val rdd1 = sc.makeRDD(Array( ("d1",("A",1)), ("d1",("A",1)), ("d1",("B",1)), ("d2",("E",1)) ), 2)
// Re-key by (date, name), keeping the count as the value for reduceByKey
val rdd2 = rdd1.map(x => ((x._1, x._2._1), x._2._2))
// Sum the counts per (date, name) without materializing the groups
val rdd3 = rdd2.reduceByKey(_ + _)
// Re-key by date and group the (name, count) pairs -- this shuffle is necessary
val rdd4 = rdd3.map(x => (x._1._1, (x._1._2, x._2))).groupByKey
rdd4.collect
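
This returns the same grouping as res28 above (partition ordering aside). For reference only, and not part of the original answer, the same result can also be reached in a single pass with aggregateByKey, accumulating a name-to-count map per date; the variable perDate below is my own choice, and this is just a sketch:

// Sketch only: one pass with aggregateByKey, building Map(name -> count) per date
val perDate = rdd1.aggregateByKey(Map.empty[String, Int])(
  // seqOp: fold one (name, n) value into the partition-local map
  (acc, v) => acc + (v._1 -> (acc.getOrElse(v._1, 0) + v._2)),
  // combOp: merge the per-partition maps
  (m1, m2) => m2.foldLeft(m1) { case (acc, (name, n)) => acc + (name -> (acc.getOrElse(name, 0) + n)) }
)
perDate.collect  // e.g. Array((d1, Map(A -> 2, B -> 1)), (d2, Map(E -> 1)))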

As I said in the comments, for structured data this can also be done with DataFrames, so run the following:

// The RDD approach above should be enough, but here is the DataFrame version.
import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._

// Same sample data, flattened to (date, name, count) rows
// (toDF is available in spark-shell; in an application, import spark.implicits._)
val rddA = sc.makeRDD(Array( ("d1","A",1), ("d1","A",1), ("d1","B",1), ("d2","E",1) ), 2)
val dfA = rddA.toDF("c1", "c2", "c3")

// Sum the counts per (date, name)
val dfB = dfA
   .groupBy("c1", "c2")
   .agg(sum("c3").alias("sum"))
dfB.show

This returns:

+---+---+---+
| c1| c2|sum|
+---+---+---+
| d1|  A|  2|
| d2|  E|  1|
| d1|  B|  1|
+---+---+---+

But you can do the following to approximate the CompactBuffer output above.

import org.apache.spark.sql.functions.{col, udf}

// A case class and a UDF to pack (name, sum) into a single struct column
case class XY(x: String, y: Long)
val xyTuple = udf((x: String, y: Long) => XY(x, y))

val dfC = dfB
         .withColumn("xy", xyTuple(col("c2"), col("sum")))
         .drop("c2")
         .drop("sum")

dfC.printSchema
dfC.show

// Then ... this gives you the CompactBuffer answer but from a DF-perspective
val dfD = dfC.groupBy(col("c1")).agg(collect_list(col("xy")))   
dfD.show

This returns the following (some renaming is still required, and possibly a sort):

+---+----------------+
| c1|collect_list(xy)|
+---+----------------+
| d2|        [[E, 1]]|
| d1|[[A, 2], [B, 1]]|
+---+----------------+
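
As a hedged sketch of that renaming and sorting (the column name xy_list and the variable dfE are my own choices, not from the original answer):

// Sketch: rename the generated column and order the rows by date
val dfE = dfD
  .withColumnRenamed("collect_list(xy)", "xy_list")
  .orderBy("c1")
dfE.show

Alternatively, the built-in struct function from org.apache.spark.sql.functions can build the xy column without a UDF.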
