
How to join two dataframes by different fields in Pyspark

I have two dataframes, df1 and df2; below are the contents of each.

df1:

+--------------------------+------------------------+--------+                  
|line_item_usage_account_id|line_item_unblended_cost|    name|
+--------------------------+------------------------+--------+
|              100000000001|                   12.05|account1|
|              200000000001|                    52.0|account2|
|              300000000003|                   12.03|account3|
+--------------------------+------------------------+--------+

df2:

+-----------+-----------------+-----------+-------+--------------+------------------------+
|accountname|accountproviderid|clustername|app_pmo|app_costcenter|line_item_unblended_cost|
+-----------+-----------------+-----------+-------+--------------+------------------------+
|   account1|     100000000001|   cluster1| 111111|      11111111|                   12.05|
|   account1|     100000000001|   cluster1| 666666|      55555555|                   10.09|
|   account1|     100000000001|   cluster7| 666660|      55555551|                   11.09|
|   account2|     200000000001|   cluster2| 222222|      22222222|                    52.0|
+-----------+-----------------+-----------+-------+--------------+------------------------+

I just need to find the IDs in df1.line_item_usage_account_id that are not present in df2.accountproviderid, and add rows to df2 carrying the fields df1.line_item_unblended_cost and df1.name, like this:

df3:

+-----------+-----------------+-----------+-------+--------------+------------------------+
|accountname|accountproviderid|clustername|app_pmo|app_costcenter|line_item_unblended_cost|
+-----------+-----------------+-----------+-------+--------------+------------------------+
|   account1|     100000000001|   cluster1| 111111|      11111111|                   12.05|
|   account1|     100000000001|   cluster1| 666666|      55555555|                   10.09|
|   account1|     100000000001|   cluster7| 666660|      55555551|                   11.09|
|   account2|     200000000001|   cluster2| 222222|      22222222|                    52.0|
|   account3|     300000000003|       null|   null|          null|                   12.03|
+-----------+-----------------+-----------+-------+--------------+------------------------+

Here is the code to build the dataframes. Any idea how to achieve this?

from pyspark.sql import SparkSession   
spark = SparkSession.builder.getOrCreate()

df1 = spark.createDataFrame([
    [100000000001, 12.05, 'account1'], 
    [200000000001, 52.00, 'account2'], 
    [300000000003, 12.03, 'account3']], 
    schema=['line_item_usage_account_id', 'line_item_unblended_cost', 'name'])

df2 = spark.createDataFrame([
    ['account1', 100000000001, 'cluster1', 111111, 11111111, 12.05],
    ['account1', 100000000001, 'cluster1', 666666, 55555555, 10.09],
    ['account1', 100000000001, 'cluster7', 666660, 55555551, 11.09],
    ['account2', 200000000001, 'cluster2', 222222, 22222222, 52.00]], 
    schema=['accountname', 'accountproviderid', 'clustername', 'app_pmo', 'app_costcenter', 'line_item_unblended_cost'])

Thanks in advance.

I don't have PySpark installed to check, but something like this should help:

df3 = df1.join(df2, df1.line_item_usage_account_id == df2.accountproviderid, how='left') \
         .filter(df2.accountproviderid.isNull())

That is a left join plus a filter. It works, but if your dataframes can be large, you may need a different approach.
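
Note also that the left join + filter only returns the unmatched df1 rows, and it can duplicate them when df2 has several rows per id; the expected df3 additionally keeps all of df2's rows. Here is a minimal sketch of an alternative (the helper name missing is mine, and it assumes Spark 2.3+ for unionByName): a left_anti join picks out the df1 rows with no match, which are then renamed to df2's schema and unioned in.

from pyspark.sql import functions as F

# df1 rows whose id has no match in df2 (left_anti returns each df1 row
# at most once, unlike a left join against a many-row df2)
missing = df1.join(
    df2,
    df1.line_item_usage_account_id == df2.accountproviderid,
    how='left_anti')

# Rename df1's columns to df2's schema and fill the unknown fields with nulls
missing = missing.select(
    F.col('name').alias('accountname'),
    F.col('line_item_usage_account_id').alias('accountproviderid'),
    F.lit(None).cast('string').alias('clustername'),
    F.lit(None).cast('long').alias('app_pmo'),
    F.lit(None).cast('long').alias('app_costcenter'),
    F.col('line_item_unblended_cost'))

# df3 = all of df2 plus the unmatched df1 rows
df3 = df2.unionByName(missing)
df3.show()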
