
How to aggregate on two columns in Spark SQL

I currently have a table on which I need to do the following:

  1. Group by department ID and employee ID
  2. Within each group, sort by (ArrivalDate, ArrivalTime) and pick the first row. That is, if the two dates differ, pick the newer date; if the dates are the same, pick the newer time.

I am trying this approach:

input.select("DepartmenId","EmolyeeID", "ArrivalDate", "ArrivalTime", "Word")
  .agg(here will be the function that handles logic from 2)
  .show()

What would the syntax of the aggregation be here?

Thank you in advance.

// +-----------+---------+-----------+-----------+--------+
// |DepartmenId|EmolyeeID|ArrivalDate|ArrivalTime|  Word  |
// +-----------+---------+-----------+-----------+--------+
// |    D1     |   E1    |  20170101 |    0730   | "YES"  |
// +-----------+---------+-----------+-----------+--------+
// |    D1     |   E1    |  20170102 |    1530   | "NO"   |
// +-----------+---------+-----------+-----------+--------+
// |    D1     |   E2    |  20170101 |    0730   | "ZOO"  |
// +-----------+---------+-----------+-----------+--------+
// |    D1     |   E2    |  20170102 |    0330   | "BOO"  |
// +-----------+---------+-----------+-----------+--------+
// |    D2     |   E1    |  20170101 |    0730   | "LOL"  |
// +-----------+---------+-----------+-----------+--------+
// |    D2     |   E1    |  20170101 |    1830   | "ATT"  |
// +-----------+---------+-----------+-----------+--------+
// |    D2     |   E2    |  20170105 |    1430   | "UNI"  |
// +-----------+---------+-----------+-----------+--------+
//
// output should be
//
// +-----------+---------+-----------+-----------+--------+
// |DepartmenId|EmolyeeID|ArrivalDate|ArrivalTime|  Word  |
// +-----------+---------+-----------+-----------+--------+
// |    D1     |   E1    |  20170102 |    1530   | "NO"   |
// +-----------+---------+-----------+-----------+--------+
// |    D1     |   E2    |  20170102 |    0330   | "BOO"  |
// +-----------+---------+-----------+-----------+--------+
// |    D2     |   E1    |  20170101 |    1830   | "ATT"  |
// +-----------+---------+-----------+-----------+--------+
// |    D2     |   E2    |  20170105 |    1430   | "UNI"  |
// +-----------+---------+-----------+-----------+--------+

One approach is to use Spark window functions:

val df = Seq(
  ("D1", "E1", "20170101", "0730", "YES"),
  ("D1", "E1", "20170102", "1530", "NO"),
  ("D1", "E2", "20170101", "0730", "ZOO"),
  ("D1", "E2", "20170102", "0330", "BOO"),
  ("D2", "E1", "20170101", "0730", "LOL"),
  ("D2", "E1", "20170101", "1830", "ATT"),
  ("D2", "E2", "20170105", "1430", "UNI")
).toDF(
  "DepartmenId", "EmolyeeID", "ArrivalDate", "ArrivalTime", "Word"
)

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number
import spark.implicits._   // provides the $"col" syntax (and toDF on local Seqs)

// rank the rows within each (DepartmenId, EmolyeeID) group, newest arrival first,
// then keep only the top-ranked row of each group
val df2 = df.withColumn("rowNum", row_number().over(
    Window.partitionBy("DepartmenId", "EmolyeeID").
      orderBy($"ArrivalDate".desc, $"ArrivalTime".desc)
  )).
  where($"rowNum" === 1).
  select("DepartmenId", "EmolyeeID", "ArrivalDate", "ArrivalTime", "Word").
  orderBy("DepartmenId", "EmolyeeID")

df2.show
+-----------+---------+-----------+-----------+----+
|DepartmenId|EmolyeeID|ArrivalDate|ArrivalTime|Word|
+-----------+---------+-----------+-----------+----+
|         D1|       E1|   20170102|       1530|  NO|
|         D1|       E2|   20170102|       0330| BOO|
|         D2|       E1|   20170101|       1830| ATT|
|         D2|       E2|   20170105|       1430| UNI|
+-----------+---------+-----------+-----------+----+
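
For reference, the same window logic can also be written as a SQL query against a temporary view, which is closer to the "Spark SQL" wording in the title. This is only an illustrative sketch; the view name arrivals and the value df2Sql are made up here and are not part of the original answer.

// Sketch only: register the sample DataFrame as a temp view and run the
// equivalent window query in SQL (the view name "arrivals" is illustrative).
df.createOrReplaceTempView("arrivals")

val df2Sql = spark.sql("""
  SELECT DepartmenId, EmolyeeID, ArrivalDate, ArrivalTime, Word
  FROM (
    SELECT *,
           row_number() OVER (
             PARTITION BY DepartmenId, EmolyeeID
             ORDER BY ArrivalDate DESC, ArrivalTime DESC
           ) AS rowNum
    FROM arrivals
  ) t
  WHERE rowNum = 1
  ORDER BY DepartmenId, EmolyeeID
""")

df2Sql.show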

You can use max on a new struct column that contains all of the non-grouped columns, with ArrivalDate first and ArrivalTime second: the ordering of this new column matches your requirement (latest date first; for equal dates, latest time first), so max yields the record you want.

Then you can use a select to "split" the struct back into separate columns.

import spark.implicits._
import org.apache.spark.sql.functions._

// column names below match the sample data ("DepartmenId", "EmolyeeID")
df.groupBy($"DepartmenId", $"EmolyeeID")
  .agg(max(struct("ArrivalDate", "ArrivalTime", "Word")) as "struct")
  .select($"DepartmenId", $"EmolyeeID",
    $"struct.ArrivalDate" as "ArrivalDate",
    $"struct.ArrivalTime" as "ArrivalTime",
    $"struct.Word" as "Word"
  )
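
As a further, purely illustrative alternative (not from either answer), the same "latest row per group" reduction can be sketched with the typed Dataset API. The Arrival case class and the latest value below are hypothetical, and the comparison relies on the zero-padded date and time strings sorting lexicographically in chronological order.

// Hypothetical sketch using groupByKey/reduceGroups: keep, for each
// (DepartmenId, EmolyeeID) group, the row with the greatest
// (ArrivalDate, ArrivalTime) pair.
case class Arrival(DepartmenId: String, EmolyeeID: String,
                   ArrivalDate: String, ArrivalTime: String, Word: String)

val latest = df.as[Arrival]
  .groupByKey(a => (a.DepartmenId, a.EmolyeeID))
  .reduceGroups { (a, b) =>
    // concatenating date + time gives a fixed-width key, so plain string
    // comparison matches chronological order
    if (a.ArrivalDate + a.ArrivalTime > b.ArrivalDate + b.ArrivalTime) a else b
  }
  .map(_._2)

latest.show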
