Apache Spark - Scala API - Aggregate on sequentially increasing key

I have a data frame that looks something like this:

val df = sc.parallelize(Seq(
  (3,1,"A"),(3,2,"B"),(3,3,"C"),
  (2,1,"D"),(2,2,"E"),
  (3,1,"F"),(3,2,"G"),(3,3,"G"),
  (2,1,"X"),(2,2,"X")
)).toDF("TotalN", "N", "String")

+------+---+------+
|TotalN|  N|String|
+------+---+------+
|     3|  1|     A|
|     3|  2|     B|
|     3|  3|     C|
|     2|  1|     D|
|     2|  2|     E|
|     3|  1|     F|
|     3|  2|     G|
|     3|  3|     G|
|     2|  1|     X|
|     2|  2|     X|
+------+---+------+

I need to aggregate the strings by concatenating them together based on TotalN and the sequentially increasing ID (N). The problem is that there is no unique ID for each aggregation that I can group by. So I need to do something like "for each row, look at TotalN, loop through the next N rows and concatenate, then reset" (this sequential logic is sketched in plain Scala after the expected output below).

+------+------+
|TotalN|String|
+------+------+
|     3|   ABC|
|     2|    DE|
|     3|   FGG|
|     2|    XX|
+------+------+
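To make the "then reset" behaviour concrete, here is a minimal plain-Scala sketch of the intended sequential logic (local collections only, not a Spark solution; rows is a hypothetical stand-in for the data above):

// Accumulate strings until N reaches TotalN, then emit the group and reset.
val rows = Seq((3,1,"A"), (3,2,"B"), (3,3,"C"), (2,1,"D"), (2,2,"E"))

val grouped = rows.foldLeft((Vector.empty[(Int, String)], "")) {
  case ((done, acc), (totalN, n, s)) =>
    val acc2 = acc + s
    if (n == totalN) (done :+ (totalN -> acc2), "")  // group complete: emit and reset
    else (done, acc2)                                // still inside a group
}._1

// grouped == Vector((3,"ABC"), (2,"DE"))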

Any pointers much appreciated.

Using Spark 2.3.1 and the Scala API.

Try this:

val df = spark.sparkContext.parallelize(Seq(
  (3, 1, "A"), (3, 2, "B"), (3, 3, "C"),
  (2, 1, "D"), (2, 2, "E"),
  (3, 1, "F"), (3, 2, "G"), (3, 3, "G"),
  (2, 1, "X"), (2, 2, "X")
)).toDF("TotalN", "N", "String")


import org.apache.spark.sql.functions.collect_list
import spark.implicits._

df.createOrReplaceTempView("data")

// Number every row globally so N can be compared against its position.
val sqlDF = spark.sql(
  """
    | SELECT TotalN, N, String, ROW_NUMBER() OVER (ORDER BY TotalN) AS rowNum
    | FROM data
  """.stripMargin)

// Within each run, N and rowNum grow in lockstep, so N - rowNum is
// constant per group and can serve as the grouping key.
sqlDF.withColumn("key", $"N" - $"rowNum")
  .groupBy("key").agg(collect_list('String).as("texts")).show()
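Note that collect_list produces an array, so this yields one row per group but not yet the single concatenated string the question asks for. One way to finish is sketched below (concat_ws and first are standard functions in org.apache.spark.sql.functions; first("TotalN") simply carries the group's size through the aggregation):

import org.apache.spark.sql.functions.{concat_ws, first}

sqlDF.withColumn("key", $"N" - $"rowNum")
  .groupBy("key")
  .agg(first("TotalN").as("TotalN"),
       concat_ws("", collect_list('String)).as("String"))
  .select("TotalN", "String")
  .show()

Be aware that collect_list does not guarantee element order after the shuffle introduced by groupBy, so strictly speaking the concatenation order is not guaranteed.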

The solution is to calculate a grouping variable with the row_number function, which can then be used in the subsequent groupBy.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// Note: an unpartitioned window pulls all rows into a single partition,
// which is fine here but does not scale to large data.
val w = Window.orderBy("TotalN")
df.withColumn("GeneratedID", $"N" - row_number().over(w)).show

+------+---+------+-----------+
|TotalN|  N|String|GeneratedID|
+------+---+------+-----------+
|     2|  1|     D|          0|
|     2|  2|     E|          0|
|     2|  1|     X|         -2|
|     2|  2|     X|         -2|
|     3|  1|     A|         -4|
|     3|  2|     B|         -4|
|     3|  3|     C|         -4|
|     3|  1|     F|         -7|
|     3|  2|     G|         -7|
|     3|  3|     G|         -7|
+------+---+------+-----------+
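From here the requested table follows by grouping on GeneratedID, mirroring the finishing step shown under the first answer (a sketch under the same assumptions):

import org.apache.spark.sql.functions.{collect_list, concat_ws, first}

df.withColumn("GeneratedID", $"N" - row_number().over(w))
  .groupBy("GeneratedID")
  .agg(first("TotalN").as("TotalN"),
       concat_ws("", collect_list($"String")).as("String"))
  .drop("GeneratedID")
  .show()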
