What is the right way to join these 2 Spark DataFrames?

Suppose I have 2 Spark DataFrames:

val addStuffDf = Seq(
  ("A", "2018-03-22", 5),
  ("A", "2018-03-24", 1),
  ("B", "2018-03-24", 3)
).toDF("user", "dt", "count")

val removedStuffDf = Seq(
  ("C", "2018-03-25", 10),
  ("A", "2018-03-24", 5),
  ("B", "2018-03-25", 1)
).toDF("user", "dt", "count")

In the end, I want to get a single DataFrame with the summary statistics (the ordering doesn't really matter):

+----+----------+-----+-------+
|user|        dt|added|removed|
+----+----------+-----+-------+
|   A|2018-03-22|    5|      0|
|   A|2018-03-24|    1|      5|
|   B|2018-03-24|    3|      0|
|   B|2018-03-25|    0|      1|
|   C|2018-03-25|    0|     10|
+----+----------+-----+-------+

Obviously, as a "step 0" I can simply rename the count columns so that I have DataFrames df1 and df2:

val df1 = addStuffDf.withColumnRenamed("count", "added")
df1.show()
+----+----------+-----+
|user|        dt|added|
+----+----------+-----+
|   A|2018-03-22|    5|
|   A|2018-03-24|    1|
|   B|2018-03-24|    3|
+----+----------+-----+

val df2 = removedStuffDf.withColumnRenamed("count", "removed")
df2.show()
+----+----------+-------+
|user|        dt|removed|
+----+----------+-------+
|   C|2018-03-25|     10|
|   A|2018-03-24|      5|
|   B|2018-03-25|      1|
+----+----------+-------+

But now I can't figure out "step 1" — that is, the transformation that zips df1 and df2 together. Logically, a full_outer join puts all the rows I need into a single DF, but then I need to somehow merge the duplicated columns:

df1.as('d1)
  .join(df2.as('d2),
        ($"d1.user"===$"d2.user" && $"d1.dt"===$"d2.dt"),
        "full_outer")
.show()

+----+----------+-----+----+----------+-------+
|user|        dt|added|user|        dt|removed|
+----+----------+-----+----+----------+-------+
|null|      null| null|   C|2018-03-25|     10|
|null|      null| null|   B|2018-03-25|      1|
|   B|2018-03-24|    3|null|      null|   null|
|   A|2018-03-22|    5|null|      null|   null|
|   A|2018-03-24|    1|   A|2018-03-24|      5|
+----+----------+-----+----+----------+-------+

How can I merge these user and dt columns together? And, more generally, am I taking the right approach to the problem, or is there a more direct/efficient solution?
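For reference, one way the duplicated user/dt columns from the aliased full_outer join above could be collapsed is with coalesce; a minimal sketch, assuming the df1/df2 from the renaming step and that spark.implicits._ is in scope, as in the other snippets:

import org.apache.spark.sql.functions.{coalesce, lit}

// coalesce keeps the first non-null value, collapsing the duplicated
// user/dt pairs into single columns; lit(0) fills the missing counts
val merged = df1.as("d1")
  .join(df2.as("d2"),
        $"d1.user" === $"d2.user" && $"d1.dt" === $"d2.dt",
        "full_outer")
  .select(
    coalesce($"d1.user", $"d2.user").as("user"),
    coalesce($"d1.dt", $"d2.dt").as("dt"),
    coalesce($"d1.added", lit(0)).as("added"),
    coalesce($"d2.removed", lit(0)).as("removed"))

merged.show()

That said, the answer below avoids producing the duplicated columns in the first place.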

Since the columns to be joined on have matching names in both DataFrames, using Seq("user", "dt") as the join condition will produce the merged table you want:

val addStuffDf = Seq(
  ("A", "2018-03-22", 5),
  ("A", "2018-03-24", 1),
  ("B", "2018-03-24", 3)
).toDF("user", "dt", "count")

val removedStuffDf = Seq(
  ("C", "2018-03-25", 10),
  ("A", "2018-03-24", 5),
  ("B", "2018-03-25", 1)
).toDF("user", "dt", "count")

val df1 = addStuffDf.withColumnRenamed("count", "added")
val df2 = removedStuffDf.withColumnRenamed("count", "removed")

df1.as('d1).join(df2.as('d2), Seq("user", "dt"), "full_outer").
  na.fill(0).
  show
// +----+----------+-----+-------+
// |user|        dt|added|removed|
// +----+----------+-----+-------+
// |   C|2018-03-25|    0|     10|
// |   B|2018-03-25|    0|      1|
// |   B|2018-03-24|    3|      0|
// |   A|2018-03-22|    5|      0|
// |   A|2018-03-24|    1|      5|
// +----+----------+-----+-------+
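If filling zeros into every nullable column is broader than intended, the fill can be limited to the two count columns, and the rows can be ordered to match the expected output; a small variation on the same idea, assuming the same df1 and df2:

df1.join(df2, Seq("user", "dt"), "full_outer")
  .na.fill(0, Seq("added", "removed"))  // only fill the two count columns
  .orderBy("user", "dt")
  .show()

The d1/d2 aliases from the snippet above are not needed for a Seq-based join, so they are dropped here.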
