
In Spark how to use window spec with aggregate functions

I have a Spark DataFrame that looks like this:

+------------+---------+------------------------------------------------------------+------------------------+-------------------+
|parent_key  |id       |value                                                       |raw_is_active           |updated_at         |
+------------+---------+------------------------------------------------------------+------------------------+-------------------+
|1           |2        |[, 0, USER, 2020-12-11 04:50:40, 2020-12-11 04:50:40,]      |[2020-12-11 04:50:40, 0]|2020-12-11 04:50:40|
|1           |2        |[testA, 0, USER, 2020-12-11 04:50:40, 2020-12-11 17:18:00,] |null                    |2020-12-11 17:18:00|
|1           |2        |[testA, 0, USER, 2020-12-11 04:50:40, 2020-12-11 17:19:56,] |null                    |2020-12-11 17:19:56|
|1           |2        |[testA, 1, USER, 2020-12-11 04:50:40, 2020-12-11 17:20:24,] |[2020-12-11 17:20:24, 1]|2020-12-11 17:20:24|
|2           |3        |[testB, 0, USER, 2020-12-11 17:24:03, 2020-12-11 17:24:03,] |[2020-12-11 17:24:03, 0]|2020-12-11 17:24:03|
|3           |4        |[testC, 0, USER, 2020-12-11 17:27:36, 2020-12-11 17:27:36,] |[2020-12-11 17:27:36, 0]|2020-12-11 17:27:36|
+------------+---------+------------------------------------------------------------+------------------------+-------------------+

The schema is:

root
 |-- parent_key: long (nullable = true)
 |-- id: string (nullable = true)
 |-- value: struct (nullable = true)
 |    |-- first_name: string (nullable = true)
 |    |-- is_active: integer (nullable = true)
 |    |-- source: string (nullable = true)
 |    |-- created_at: timestamp (nullable = true)
 |    |-- updated_at: timestamp (nullable = true)
 |-- raw_is_active: struct (nullable = true)
 |    |-- updated_at: timestamp (nullable = true)
 |    |-- value: integer (nullable = true)
 |-- updated_at: timestamp (nullable = true)

This is the output I am looking for:

+------------+---------+------------------------------------------------------------+---------------------------------------------------+-------------------+
|parent_key  |id       |value                                                       |raw_is_active                                      |updated_at         |
+------------+---------+------------------------------------------------------------+---------------------------------------------------+-------------------+
|1           |2        |[testA, 1, USER, 2020-12-11 04:50:40, 2020-12-11 17:20:24]  |[[2020-12-11 04:50:40, 0],[2020-12-11 17:20:24, 1]]|2020-12-11 04:50:40|
|2           |3        |[testB, 0, USER, 2020-12-11 17:24:03, 2020-12-11 17:24:03]  |[2020-12-11 17:24:03, 0]                           |2020-12-11 17:24:03|
|3           |4        |[testC, 0, USER, 2020-12-11 17:27:36, 2020-12-11 17:27:36]  |[2020-12-11 17:27:36, 0]                           |2020-12-11 17:27:36|
+------------+---------+------------------------------------------------------------+---------------------------------------------------+-------------------+

So, based on the updated_at column, I want to keep the latest row for each id, and I also want to build an array of raw_is_active values across all rows of a given id.

I know I can select the latest value with code like:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val windowSpec = Window.partitionBy("id").orderBy(col("updated_at").desc)

dataFrame
  .withColumn("maxTS", first("updated_at").over(windowSpec))
  .select("*").where(col("maxTS") === col("updated_at"))
  .drop("maxTS")

But I am not sure how I can also create a set for the raw_is_active column.
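One way to do both in a single pass is to keep the ordered window for picking the latest row and give collect_set its own frame spanning the whole partition; a minimal sketch of that idea (the names byIdDesc, wholePartition and latestWithSet are illustrative, not from the original post):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Ordered window for finding the latest updated_at per id
val byIdDesc = Window.partitionBy("id").orderBy(col("updated_at").desc)

// Unordered window whose frame covers every row of the partition, so
// collect_set sees all raw_is_active values, not just the rows up to
// the current one (the default frame when an orderBy is present)
val wholePartition = Window.partitionBy("id")
  .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

val latestWithSet = dataFrame
  .withColumn("maxTS", first("updated_at").over(byIdDesc))
  .withColumn("active_list", collect_set("raw_is_active").over(wholePartition))
  .where(col("maxTS") === col("updated_at"))
  .drop("maxTS", "raw_is_active")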

Or whether I can use a groupBy instead, like:

dataFrame
  .groupBy("parent_key", "id")
  .agg(collect_list("value") as "value_list", collect_set("raw_is_active") as "active_list")
  .withColumn("value", col("value_list")(size(col("value_list")).minus(1)))
  .drop("value_list")

For the above I am not sure whether:

  1. .withColumn("value", col("value_list")(size(col("value_list")).minus(1))) will always give me the latest value (a deterministic alternative is sketched after this list)
  2. this code is efficient, considering the use of collect_list and collect_set
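On point 1: Spark does not guarantee the order of collect_list after a shuffle, so indexing the last element is not a reliable way to get the latest value. A sketch of a deterministic alternative, taking max over a struct whose first field is updated_at (grouped is an illustrative name):

import org.apache.spark.sql.functions._

// Structs compare field by field, so with updated_at as the first field,
// max(...) returns the struct belonging to the latest row per group
val grouped = dataFrame
  .groupBy("parent_key", "id")
  .agg(
    max(struct(col("updated_at"), col("value"))).getField("value").as("value"),
    collect_set("raw_is_active").as("active_list")
  )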

Update: thanks to @mck, I was able to get this working with the code:

val windowSpec = Window.partitionBy("id").orderBy(col("updated_at").desc)
val windowSpecSet = Window.partitionBy("id").orderBy(col("updated_at"))

val df2 = dataFrame.withColumn(
    "rn",
    row_number().over(windowSpec)
).withColumn(
    "active_list",
    collect_set("raw_is_active").over(windowSpecSet)
).drop("raw_is_active").filter("rn = 1")

However, that code takes more time than my existing code:

dataFrame
  .groupBy("parent_key", "id")
  .agg(collect_list("value") as "value_list", collect_set("raw_is_active") as "active_list")
  .withColumn("value", col("value_list")(size(col("value_list")).minus(1)))
  .drop("value_list")

I was under the impression that a window function would perform better than groupBy with agg.
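That impression does not always hold: a window function forces a full shuffle plus a sort within every id partition, while groupBy with agg can often pre-aggregate on each executor before the shuffle and needs no sort. Comparing the physical plans makes the difference visible; a quick check, where windowResult and groupByResult stand for the two DataFrames built above (hypothetical names):

// The window plan typically shows Exchange followed by Sort; the groupBy
// plan typically shows partial aggregation before the Exchange
windowResult.explain()
groupByResult.explain()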

Answer: assign a row_number to each row in each id partition and filter for the rows where row_number = 1:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val windowSpec = Window.partitionBy("id").orderBy(col("updated_at").desc)

val df2 = dataFrame.withColumn(
    "rn",
    row_number().over(windowSpec)
).withColumn(
    "active_list",
    // collect over the whole partition: with an orderBy present, the default
    // frame ends at the current row, so the rn = 1 row would otherwise only
    // see its own raw_is_active value
    array_sort(collect_set("raw_is_active").over(
        windowSpec.rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
    ))
).drop("raw_is_active").filter("rn = 1")
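As a quick check: array_sort on an array of structs orders by the first struct field (updated_at here), so active_list comes out chronologically. Run against the sample data, the row for id = 2 should match the desired output above:

df2.filter(col("id") === "2")
   .select("id", "active_list", "updated_at")
   .show(false)
// expected for id = 2 (per the desired output):
// active_list = [[2020-12-11 04:50:40, 0], [2020-12-11 17:20:24, 1]]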

