How to join two PySpark dataframes in Python on a condition while changing a column value on match?
I have two dataframes like this:
df1 = spark.createDataFrame([(1, 11, 1999, 1999, None), (2, 22, 2000, 2000, 44), (3, 33, 2001, 2001,None)], ['id', 't', 'year','new_date','rev_t'])
df2 = spark.createDataFrame([(2, 44, 2022, 2022,None), (2, 55, 2001, 2001, 88)], ['id', 't', 'year','new_date','rev_t'])
df1.show()
df2.show()
+---+---+----+--------+-----+
| id| t|year|new_date|rev_t|
+---+---+----+--------+-----+
| 1| 11|1999| 1999| null|
| 2| 22|2000| 2000| 44|
| 3| 33|2001| 2001| null|
+---+---+----+--------+-----+
+---+---+----+--------+-----+
| id| t|year|new_date|rev_t|
+---+---+----+--------+-----+
| 2| 44|2022| 2022| null|
| 2| 55|2001| 2001| 88|
+---+---+----+--------+-----+
I want to join them in such a way that if df2.t == df1.rev_t, then new_date in the result is updated to df2.year. So it should look like this:
+---+---+----+--------+-----+
| id| t|year|new_date|rev_t|
+---+---+----+--------+-----+
| 1| 11|1999| 1999| null|
| 2| 22|2000| 2022| 44|
| 2| 44|2022| 2022| null|
| 2| 55|2001| 2001| 88|
| 3| 33|2001| 2001| null|
+---+---+----+--------+-----+
To update a column in df1 with values from df2, you can use a left join plus the coalesce function on the column you want to update, in this case new_date.

From your expected output, it seems you also want to append the rows of df2 themselves, so union the join result with df2:
from pyspark.sql import functions as F

# Left-join df1 with df2 (df2.t renamed to rev_t so it matches the join key),
# take df2's new_date where a match exists, then append df2's own rows.
result = (df1.join(df2.selectExpr("t as rev_t", "new_date as df2_new_date"), ["rev_t"], "left")
          .withColumn("new_date", F.coalesce("df2_new_date", "new_date"))
          .select(*df1.columns)
          .union(df2)
)
result.show()
#+---+---+----+--------+-----+
#| id| t|year|new_date|rev_t|
#+---+---+----+--------+-----+
#| 1| 11|1999| 1999| null|
#| 3| 33|2001| 2001| null|
#| 2| 22|2000| 2022| 44|
#| 2| 44|2022| 2022| null|
#| 2| 55|2001| 2001| 88|
#+---+---+----+--------+-----+
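Note that Spark's union matches columns by position, not by name, so the select(*df1.columns) before the union matters; unionByName is an alternative when column order might differ. The left-join + coalesce + union logic above can also be sanity-checked in plain Python with ordinary dicts (hypothetical helper names, not the PySpark API):

```python
# The two input "dataframes" as lists of dicts, mirroring df1 and df2 above.
df1 = [
    {"id": 1, "t": 11, "year": 1999, "new_date": 1999, "rev_t": None},
    {"id": 2, "t": 22, "year": 2000, "new_date": 2000, "rev_t": 44},
    {"id": 3, "t": 33, "year": 2001, "new_date": 2001, "rev_t": None},
]
df2 = [
    {"id": 2, "t": 44, "year": 2022, "new_date": 2022, "rev_t": None},
    {"id": 2, "t": 55, "year": 2001, "new_date": 2001, "rev_t": 88},
]

# Left join on df1.rev_t == df2.t: look up the matching df2 row's new_date,
# falling back to the existing value when there is no match (the coalesce step).
lookup = {row["t"]: row["new_date"] for row in df2}
joined = [
    {**row, "new_date": lookup.get(row["rev_t"], row["new_date"])}
    for row in df1
]

# Union with df2's own rows, as in the PySpark snippet.
result = joined + df2
for row in result:
    print(row)
```

Running this reproduces the expected five rows: the df1 row with rev_t = 44 gets new_date 2022, the unmatched df1 rows keep their original new_date, and both df2 rows are appended unchanged.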
Disclaimer: the technical posts on this site follow the CC BY-SA 4.0 license. If you need to repost, please credit this site or the original source. For any questions, contact: yoyou2525@163.com.