
Correct join of DataFrame in Spark?

I'm new to the Spark framework and need some help!

Suppose the first DataFrame (df1) stores the times at which users contacted a call center.

+---------+-------------------+
|USER_NAME|       REQUEST_DATE|
+---------+-------------------+
|     Mark|2018-02-20 00:00:00|
|     Alex|2018-03-01 00:00:00|
|      Bob|2018-03-01 00:00:00|
|     Mark|2018-07-01 00:00:00|
|     Kate|2018-07-01 00:00:00|
+---------+-------------------+

The second DataFrame stores information about whether a person is a member of the organization. OUT means the user has left the organization; IN means the user has joined it. START_DATE and END_DATE mark the start and end of the corresponding process.

For example, you can see that Alex started leaving the organization on 2018-01-01 00:00:00 and that this process ended on 2018-02-01 00:00:00. You can also see that one user can leave and rejoin the organization at different times, as Mark does.

+---------+---------------------+---------------------+--------+
|NAME     | START_DATE          | END_DATE            | STATUS |
+---------+---------------------+---------------------+--------+
|     Alex| 2018-01-01 00:00:00 | 2018-02-01 00:00:00 | OUT    |
|      Bob| 2018-02-01 00:00:00 | 2018-02-05 00:00:00 | IN     |
|     Mark| 2018-02-01 00:00:00 | 2018-03-01 00:00:00 | IN     |
|     Mark| 2018-05-01 00:00:00 | 2018-08-01 00:00:00 | OUT    |
|    Meggy| 2018-02-01 00:00:00 | 2018-02-01 00:00:00 | OUT    |
+---------+---------------------+---------------------+--------+

In the end I am trying to get a DataFrame like the one below. It must contain all records from the first DataFrame plus a column indicating whether the person was a member of the organization at the moment of the request (REQUEST_DATE).

+---------+-------------------+----------------+
|USER_NAME|       REQUEST_DATE| USER_STATUS    |
+---------+-------------------+----------------+
|     Mark|2018-02-20 00:00:00| Our user       |
|     Alex|2018-03-01 00:00:00| Not our user   |
|      Bob|2018-03-01 00:00:00| Our user       |
|     Mark|2018-07-01 00:00:00| Not our user   |
|     Kate|2018-07-01 00:00:00| No Information |
+---------+-------------------+----------------+
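
In other words, for each request I want the most recent status change that started on or before REQUEST_DATE, with its STATUS turned into a label ("IN" → "Our user", "OUT" → "Not our user", no record at all → "No Information"). As a minimal sketch, and assuming that reading of the examples is right, the label mapping itself could be a plain column expression instead of a UDF (with spark.implicits._ in scope for the $ syntax):

import org.apache.spark.sql.functions.when

// STATUS comes from the matched df2 row; it is null when nothing matched.
val userStatus = when($"STATUS" === "IN", "Our user")
  .when($"STATUS" === "OUT", "Not our user")
  .otherwise("No Information")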

Code:

import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.functions._
import spark.implicits._ // for toDF and the $-column syntax; already in scope in spark-shell

val df1: DataFrame = Seq(
    ("Mark", "2018-02-20 00:00:00"),
    ("Alex", "2018-03-01 00:00:00"),
    ("Bob", "2018-03-01 00:00:00"),
    ("Mark", "2018-07-01 00:00:00"),
    ("Kate", "2018-07-01 00:00:00")
).toDF("USER_NAME", "REQUEST_DATE")

df1.show()

val df2: DataFrame  = Seq(
    ("Alex", "2018-01-01 00:00:00", "2018-02-01 00:00:00", "OUT"),
    ("Bob", "2018-02-01 00:00:00", "2018-02-05 00:00:00", "IN"),
    ("Mark", "2018-02-01 00:00:00", "2018-03-01 00:00:00", "IN"),
    ("Mark", "2018-05-01 00:00:00", "2018-08-01 00:00:00", "OUT"),
    ("Meggy", "2018-02-01 00:00:00", "2018-02-01 00:00:00", "OUT")
).toDF("NAME", "START_DATE", "END_DATE", "STATUS")

df2.show()

// The date columns arrive as strings, so cast them to timestamps before
// converting to a typed Dataset; otherwise .as[UserAndRequest] fails with
// a "cannot up cast" analysis error.
case class UserAndRequest(
    USER_NAME: String,
    REQUEST_DATE: java.sql.Timestamp,
    START_DATE: java.sql.Timestamp,
    END_DATE: java.sql.Timestamp,
    STATUS: String,
    REQUEST_ID: Long
)

// Tag every request with a unique id so the rows can be regrouped after the join.
val joined: Dataset[UserAndRequest] = df1
  .withColumn("REQUEST_ID", monotonically_increasing_id())
  .withColumn("REQUEST_DATE", to_timestamp($"REQUEST_DATE"))
  .join(
    df2.withColumn("START_DATE", to_timestamp($"START_DATE"))
       .withColumn("END_DATE", to_timestamp($"END_DATE")),
    $"USER_NAME" === $"NAME", "left")
  .as[UserAndRequest]

// For each request, reduce the joined rows down to a single row.
val lastRowByRequestId = joined
  .groupByKey(_.REQUEST_ID)
  .reduceGroups { (x, y) =>
    // keep x when the request falls after x's END_DATE and x ended later than y
    if (x.REQUEST_DATE.getTime > x.END_DATE.getTime && x.END_DATE.getTime > y.END_DATE.getTime) x
    else y
  }
  .map(_._2)

// Map the STATUS of the selected row to the label expected in the result.
def logic(status: String): String = {
  if (status == "IN") "Our user"
  else if (status == "OUT") "Not our user"
  else "No Information" // STATUS is null when the user never appears in df2
}

val logicUDF = udf(logic _)

// The UDF must be applied to STATUS, not REQUEST_DATE, since logic inspects IN/OUT.
val finalDF = lastRowByRequestId.withColumn("USER_STATUS", logicUDF($"STATUS"))

Which yields:

[screenshot of the result]
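
For reference, here is a minimal sketch of an alternative approach that avoids groupByKey/reduceGroups entirely: left-join only the membership rows that had started by the time of the request, then use a window function to keep the most recent one per request. The rule it encodes (latest START_DATE on or before REQUEST_DATE) is the reading of the examples described above, and the rn helper column is an assumption of the sketch:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{row_number, to_timestamp, when}

// Keep only membership rows that had started by the time of the request;
// the left join keeps users such as Kate that never appear in df2.
val candidates = df1.join(
  df2,
  $"USER_NAME" === $"NAME" &&
    to_timestamp($"START_DATE") <= to_timestamp($"REQUEST_DATE"),
  "left")

// Among the candidates, the most recent status change comes first.
val latestFirst = Window
  .partitionBy($"USER_NAME", $"REQUEST_DATE")
  .orderBy(to_timestamp($"START_DATE").desc)

val result = candidates
  .withColumn("rn", row_number().over(latestFirst)) // rn is a helper column
  .filter($"rn" === 1)
  .select(
    $"USER_NAME",
    $"REQUEST_DATE",
    when($"STATUS" === "IN", "Our user")
      .when($"STATUS" === "OUT", "Not our user")
      .otherwise("No Information")
      .as("USER_STATUS"))

result.show()

Ordering the window by START_DATE descending and keeping rn === 1 selects exactly one row per request, which should reproduce the expected table for the sample data (row order aside).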

