
In Spark, how to use a window spec with aggregate functions

I have a Spark dataframe that looks like:

+------------+---------+------------------------------------------------------------+------------------------+-------------------+
|parent_key  |id       |value                                                       |raw_is_active           |updated_at         |
+------------+---------+------------------------------------------------------------+------------------------+-------------------+
|1           |2        |[, 0, USER, 2020-12-11 04:50:40, 2020-12-11 04:50:40,]      |[2020-12-11 04:50:40, 0]|2020-12-11 04:50:40|
|1           |2        |[testA, 0, USER, 2020-12-11 04:50:40, 2020-12-11 17:18:00,] |null                    |2020-12-11 17:18:00|
|1           |2        |[testA, 0, USER, 2020-12-11 04:50:40, 2020-12-11 17:19:56,] |null                    |2020-12-11 17:19:56|
|1           |2        |[testA, 1, USER, 2020-12-11 04:50:40, 2020-12-11 17:20:24,] |[2020-12-11 17:20:24, 1]|2020-12-11 17:20:24|
|2           |3        |[testB, 0, USER, 2020-12-11 17:24:03, 2020-12-11 17:24:03,] |[2020-12-11 17:24:03, 0]|2020-12-11 17:24:03|
|3           |4        |[testC, 0, USER, 2020-12-11 17:27:36, 2020-12-11 17:27:36,] |[2020-12-11 17:27:36, 0]|2020-12-11 17:27:36|
+------------+---------+------------------------------------------------------------+------------------------+-------------------+

The schema is:

root
 |-- parent_key: long (nullable = true)
 |-- id: string (nullable = true)
 |-- value: struct (nullable = true)
 |    |-- first_name: string (nullable = true)
 |    |-- is_active: integer (nullable = true)
 |    |-- source: string (nullable = true)
 |    |-- created_at: timestamp (nullable = true)
 |    |-- updated_at: timestamp (nullable = true)
 |-- raw_is_active: struct (nullable = true)
 |    |-- updated_at: timestamp (nullable = true)
 |    |-- value: integer (nullable = true)
 |-- updated_at: timestamp (nullable = true)
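
(Not part of the original question: for anyone who wants to reproduce this locally, a DataFrame with this shape can be built from case classes. The names ValueStruct, RawIsActive and Record are assumptions, and only two of the six sample rows are shown.)

import java.sql.Timestamp
import org.apache.spark.sql.SparkSession

// hypothetical case classes mirroring the schema above
case class ValueStruct(first_name: String, is_active: Int, source: String,
                       created_at: Timestamp, updated_at: Timestamp)
case class RawIsActive(updated_at: Timestamp, value: Int)
case class Record(parent_key: Long, id: String, value: ValueStruct,
                  raw_is_active: Option[RawIsActive], updated_at: Timestamp)

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

def ts(s: String) = Timestamp.valueOf(s)

// two of the six sample rows, enough to exercise the window logic for id = 2
val dataFrame = Seq(
  Record(1L, "2",
    ValueStruct(null, 0, "USER", ts("2020-12-11 04:50:40"), ts("2020-12-11 04:50:40")),
    Some(RawIsActive(ts("2020-12-11 04:50:40"), 0)), ts("2020-12-11 04:50:40")),
  Record(1L, "2",
    ValueStruct("testA", 1, "USER", ts("2020-12-11 04:50:40"), ts("2020-12-11 17:20:24")),
    Some(RawIsActive(ts("2020-12-11 17:20:24"), 1)), ts("2020-12-11 17:20:24"))
).toDF()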

The output I am looking for is:

+------------+---------+------------------------------------------------------------+---------------------------------------------------+-------------------+
|parent_key  |id       |value                                                       |raw_is_active                                      |updated_at         |
+------------+---------+------------------------------------------------------------+---------------------------------------------------+-------------------+
|1           |2        |[testA, 1, USER, 2020-12-11 04:50:40, 2020-12-11 17:20:24]  |[[2020-12-11 04:50:40, 0],[2020-12-11 17:20:24, 1]]|2020-12-11 04:50:40|
|2           |3        |[testB, 0, USER, 2020-12-11 17:24:03, 2020-12-11 17:24:03]  |[2020-12-11 17:24:03, 0]                           |2020-12-11 17:24:03|
|3           |4        |[testC, 0, USER, 2020-12-11 17:27:36, 2020-12-11 17:27:36]  |[2020-12-11 17:27:36, 0]                           |2020-12-11 17:27:36|
+------------+---------+------------------------------------------------------------+---------------------------------------------------+-------------------+

So, based on the updated_at column, I want to keep only the latest row for a given id, and I also want to build a raw_is_active array out of all the rows for that id.

I know I can select the latest value with code like:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, first}

val windowSpec = Window.partitionBy("id").orderBy(col("updated_at").desc)

dataFrame
  .withColumn("maxTS", first("updated_at").over(windowSpec))
  .select("*").where(col("maxTS") === col("updated_at"))
  .drop("maxTS")

But I am not sure how I can also create a set for the raw_is_active column.

Or whether I could use a groupBy altogether, like:

dataFrame
  .groupBy("parent_key", "id")
  .agg(collect_list("value") as "value_list", collect_set("raw_is_active") as "active_list")
  .withColumn("value", col("value_list")(size(col("value_list")).minus(1)))
  .drop("value_list")

For the above I am not sure whether:

  1. .withColumn("value", col("value_list")(size(col("value_list")).minus(1))) will always give me the latest value (see the sketch after this list)
  2. this code will be efficient, considering the use of collect_list and collect_set
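
(Side note, not from the original post: for point 1 the answer is no in general. After a groupBy, Spark does not guarantee the order of elements in collect_list, since rows can reach the aggregation in any order after the shuffle, so indexing the last element may return any row. A sketch of a deterministic alternative is to take the max of an (updated_at, value) struct, which compares on the timestamp first:)

import org.apache.spark.sql.functions._

// pick the latest value per (parent_key, id) without relying on list order:
// max over a struct compares field by field, so the row with the greatest
// updated_at wins regardless of how rows arrive at the aggregation
val latest = dataFrame
  .groupBy("parent_key", "id")
  .agg(
    max(struct(col("updated_at"), col("value"))).getField("value").as("value"),
    collect_set("raw_is_active").as("active_list")
  )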

Update: thanks to @mck, I was able to get it working with this code:

val windowSpec = Window.partitionBy("id").orderBy(col("updated_at").desc)
val windowSpecSet = Window.partitionBy("id").orderBy(col("updated_at"))

val df2 = dataFrame.withColumn(
    "rn",
    // rn = 1 marks the row with the latest updated_at
    row_number().over(windowSpec)
).withColumn(
    "active_list",
    // ascending order: at the latest row the default running frame
    // already spans the whole partition
    collect_set("raw_is_active").over(windowSpecSet)
).drop("raw_is_active").filter("rn = 1")

However, that code takes more time than my existing code:

dataFrame
  .groupBy("parent_key", "id")
  .agg(collect_list("value") as "value_list", collect_set("raw_is_active") as "active_list")
  .withColumn("value", col("value_list")(size(col("value_list")).minus(1)))
  .drop("value_list")

I was under the impression that a window function would perform better than groupBy with agg.
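
(A sketch for checking rather than assuming, not from the original post: the window version needs a shuffle plus a per-partition sort and computes a row number for every row, while the groupBy version can partially aggregate before the shuffle, so groupBy being faster here is plausible. Comparing the physical plans makes this concrete; explain("formatted") needs Spark 3.0+, older versions can use explain(true).)

// df2 is the window-based result from the update above (assumed in scope)
val grouped = dataFrame
  .groupBy("parent_key", "id")
  .agg(collect_list("value") as "value_list",
       collect_set("raw_is_active") as "active_list")

df2.explain("formatted")     // expect Exchange + Sort + Window nodes
grouped.explain("formatted") // expect partial and final aggregate nodes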

Assign a row_number to each row in each id partition and filter for the rows with row_number = 1:

val windowSpec = Window.partitionBy("id").orderBy(col("updated_at").desc)
// widen the frame for collect_set: with only orderBy, the default frame ends
// at the current row, so at rn = 1 the set would contain just one element
val wholePartition = windowSpec.rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

val df2 = dataFrame.withColumn(
    "rn",
    row_number().over(windowSpec)
).withColumn(
    "active_list",
    array_sort(collect_set("raw_is_active").over(wholePartition))
).drop("raw_is_active").filter("rn = 1")
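
A quick way to sanity-check the result against the expected output above:

df2.drop("rn")
  .select("parent_key", "id", "value", "active_list", "updated_at")
  .show(false)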
