How to use a window spec with aggregate functions in Spark

I have a Spark DataFrame that looks like this:

+------------+---------+------------------------------------------------------------+------------------------+-------------------+
|parent_key  |id       |value                                                       |raw_is_active           |updated_at         |
+------------+---------+------------------------------------------------------------+------------------------+-------------------+
|1           |2        |[, 0, USER, 2020-12-11 04:50:40, 2020-12-11 04:50:40,]      |[2020-12-11 04:50:40, 0]|2020-12-11 04:50:40|
|1           |2        |[testA, 0, USER, 2020-12-11 04:50:40, 2020-12-11 17:18:00,] |null                    |2020-12-11 17:18:00|
|1           |2        |[testA, 0, USER, 2020-12-11 04:50:40, 2020-12-11 17:19:56,] |null                    |2020-12-11 17:19:56|
|1           |2        |[testA, 1, USER, 2020-12-11 04:50:40, 2020-12-11 17:20:24,] |[2020-12-11 17:20:24, 1]|2020-12-11 17:20:24|
|2           |3        |[testB, 0, USER, 2020-12-11 17:24:03, 2020-12-11 17:24:03,] |[2020-12-11 17:24:03, 0]|2020-12-11 17:24:03|
|3           |4        |[testC, 0, USER, 2020-12-11 17:27:36, 2020-12-11 17:27:36,] |[2020-12-11 17:27:36, 0]|2020-12-11 17:27:36|
+------------+---------+------------------------------------------------------------+------------------------+-------------------+

The schema is:

root
 |-- parent_key: long (nullable = true)
 |-- id: string (nullable = true)
 |-- value: struct (nullable = true)
 |    |-- first_name: string (nullable = true)
 |    |-- is_active: integer (nullable = true)
 |    |-- source: string (nullable = true)
 |    |-- created_at: timestamp (nullable = true)
 |    |-- updated_at: timestamp (nullable = true)
 |-- raw_is_active: struct (nullable = true)
 |    |-- updated_at: timestamp (nullable = true)
 |    |-- value: integer (nullable = true)
 |-- updated_at: timestamp (nullable = true)

I am looking for this output:

+------------+---------+------------------------------------------------------------+---------------------------------------------------+-------------------+
|parent_key  |id       |value                                                       |raw_is_active                                      |updated_at         |
+------------+---------+------------------------------------------------------------+---------------------------------------------------+-------------------+
|1           |2        |[testA, 1, USER, 2020-12-11 04:50:40, 2020-12-11 17:20:24]  |[[2020-12-11 04:50:40, 0],[2020-12-11 17:20:24, 1]]|2020-12-11 04:50:40|
|2           |3        |[testB, 0, USER, 2020-12-11 17:24:03, 2020-12-11 17:24:03]  |[2020-12-11 17:24:03, 0]                           |2020-12-11 17:24:03|
|3           |4        |[testC, 0, USER, 2020-12-11 17:27:36, 2020-12-11 17:27:36]  |[2020-12-11 17:27:36, 0]                           |2020-12-11 17:27:36|
+------------+---------+------------------------------------------------------------+---------------------------------------------------+-------------------+

So, on the basis of the updated_at column, I want to keep the latest row and also want to build an array of the raw_is_active values from all rows for a given id.

I know I can pick the latest value with:

val windowSpec = Window.partitionBy("id").orderBy(col("updated_at").desc)

dataFrame
  .withColumn("maxTS", first("updated_at").over(windowSpec))
  .where(col("maxTS") === col("updated_at"))
  .drop("maxTS")

But I am not sure how to also build the set for the raw_is_active column.
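As an aside on the latest-row part: the same filter can be written with max over an unordered partition, which avoids the per-partition sort implied by orderBy. A minimal sketch on a cut-down toy DataFrame (only the columns needed here, not the full schema from the question):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object LatestRowSketch extends App {
  val spark = SparkSession.builder.master("local[*]").appName("latest-row").getOrCreate()
  import spark.implicits._

  // Toy stand-in for the question's DataFrame (hypothetical data)
  val dataFrame = Seq(
    (1L, "2", "2020-12-11 04:50:40"),
    (1L, "2", "2020-12-11 17:20:24"),
    (2L, "3", "2020-12-11 17:24:03")
  ).toDF("parent_key", "id", "updated_at")

  // Without an orderBy, the window frame is the whole partition,
  // so max needs no sort within each id group
  val byId = Window.partitionBy("id")

  val latest = dataFrame
    .withColumn("maxTS", max(col("updated_at")).over(byId))
    .where(col("maxTS") === col("updated_at"))
    .drop("maxTS")

  assert(latest.count() == 2) // one surviving row per id
  spark.stop()
}
```

Note this keeps ties (rows sharing the same maximum updated_at), whereas row_number would keep exactly one.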

Or I can use groupBy with aggregate functions instead:

dataFrame
  .groupBy("parent_key", "id")
  .agg(collect_list("value") as "value_list", collect_set("raw_is_active") as "active_list")
  .withColumn("value", col("value_list")(size(col("value_list")).minus(1)))
  .drop("value_list")

For the above I am not sure whether:

  1. .withColumn("value", col("value_list")(size(col("value_list")).minus(1))) will always give me the latest value
  2. this code is efficient, considering the use of collect_list and collect_set
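On point 1: collect_list after a groupBy gives no ordering guarantee, so indexing the last element is not reliable. One common alternative (not from the question, just a pattern worth knowing) is to take max over a struct whose first field is the timestamp; structs compare field by field, so this picks the value belonging to the latest updated_at in a single aggregation. A sketch with simplified toy data:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object LatestByStructSketch extends App {
  val spark = SparkSession.builder.master("local[*]").appName("latest-by-struct").getOrCreate()
  import spark.implicits._

  // Toy data; "value" is simplified to a string instead of the question's struct
  val dataFrame = Seq(
    (1L, "2", "old", "2020-12-11 04:50:40"),
    (1L, "2", "new", "2020-12-11 17:20:24"),
    (2L, "3", "only", "2020-12-11 17:24:03")
  ).toDF("parent_key", "id", "value", "updated_at")

  // Structs compare field by field, so max picks the row with the greatest
  // updated_at and carries its value along -- no positional indexing needed
  val result = dataFrame
    .groupBy("parent_key", "id")
    .agg(max(struct(col("updated_at"), col("value"))).as("latest"))
    .select(col("parent_key"), col("id"), col("latest.value").as("value"))

  val valueById = result.collect().map(r => r.getString(1) -> r.getString(2)).toMap
  assert(valueById("2") == "new")  // latest timestamp wins, regardless of input order
  assert(valueById("3") == "only")
  spark.stop()
}
```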

UPDATE: Thanks to @mck, I was able to get it working with this code:

val windowSpec = Window.partitionBy("id").orderBy(col("updated_at").desc)
val windowSpecSet = Window.partitionBy("id").orderBy(col("updated_at"))

val df2 = dataFrame.withColumn(
    "rn",
    row_number().over(windowSpec)
).withColumn(
    "active_list",
    collect_set("raw_is_active").over(windowSpecSet)
).drop("raw_is_active").filter("rn = 1")

However, this code takes more time than my existing code:

dataFrame
  .groupBy("parent_key", "id")
  .agg(collect_list("value") as "value_list", collect_set("raw_is_active") as "active_list")
  .withColumn("value", col("value_list")(size(col("value_list")).minus(1)))
  .drop("value_list")

I was under the impression that the window function would perform better than groupBy with agg.
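That impression is often backwards for this shape of query: groupBy/agg can start partial (map-side) aggregation before the shuffle, while a window function must shuffle and then sort every row of each partition before the Window operator runs. Comparing the physical plans makes the difference visible; a sketch with hypothetical toy data and names:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object ComparePlansSketch extends App {
  val spark = SparkSession.builder.master("local[*]").appName("compare-plans").getOrCreate()
  import spark.implicits._

  val df = Seq(("2", "a"), ("2", "b"), ("3", "c")).toDF("id", "value")

  // groupBy plan: aggregation can begin before the exchange (partial aggregate)
  val grouped = df.groupBy("id").agg(collect_set("value").as("s"))

  // window plan: an Exchange plus a Sort precede the Window operator
  val w = Window.partitionBy("id").orderBy(col("value"))
  val windowed = df.withColumn("s", collect_set("value").over(w))

  val groupedPlan = grouped.queryExecution.executedPlan.toString
  val windowedPlan = windowed.queryExecution.executedPlan.toString

  assert(groupedPlan.contains("Aggregate")) // hash/object/sort aggregate node
  assert(windowedPlan.contains("Window"))   // window node over sorted input
  spark.stop()
}
```

The extra sort, plus keeping every row alive until the final filter, is a plausible reason the window version is slower here.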

Assign a row_number to each row in each id partition and filter for the rows with row_number = 1:

val windowSpec = Window.partitionBy("id").orderBy(col("updated_at").desc)
// Use an explicit full-partition frame for collect_set: with an orderBy,
// the default frame only covers rows up to the current one
val fullPartition = windowSpec.rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

val df2 = dataFrame.withColumn(
    "rn",
    row_number().over(windowSpec)
).withColumn(
    "active_list",
    array_sort(collect_set("raw_is_active").over(fullPartition))
).drop("raw_is_active").filter("rn = 1")
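One detail worth knowing about ordered windows: when a window spec includes an orderBy, the default frame is unboundedPreceding through the current row, so an aggregate over it is a running aggregate rather than a whole-partition one. For collect_set to see the entire partition on the rn = 1 row, the frame must be widened explicitly. A toy sketch of the difference (hypothetical data and names):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object FrameDefaultSketch extends App {
  val spark = SparkSession.builder.master("local[*]").appName("frame-default").getOrCreate()
  import spark.implicits._

  val df = Seq(("2", 1), ("2", 2), ("2", 3)).toDF("id", "v")

  val ordered = Window.partitionBy("id").orderBy(col("v"))
  val whole   = ordered.rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

  // Default frame with orderBy: running aggregate, the set grows row by row
  val running = df.withColumn("n", size(collect_set("v").over(ordered)))
  // Explicit full-partition frame: every row sees the complete set
  val full = df.withColumn("n", size(collect_set("v").over(whole)))

  assert(running.where(col("n") === 1).count() == 1) // only the first row has a 1-element set
  assert(full.where(col("n") === 3).count() == 3)    // all three rows see all three values
  spark.stop()
}
```

This is also why the asker's working UPDATE needed a second, ascending window spec: at the latest row (first in descending order), a running set over the descending spec would contain only that one element.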
