
How to join two dataframes by different fields in Pyspark

I have two dataframes, df1 and df2; their contents are shown below.

df1:

+--------------------------+------------------------+--------+                  
|line_item_usage_account_id|line_item_unblended_cost|    name|
+--------------------------+------------------------+--------+
|              100000000001|                   12.05|account1|
|              200000000001|                    52.0|account2|
|              300000000003|                   12.03|account3|
+--------------------------+------------------------+--------+

df2:

+-----------+-----------------+-----------+-------+--------------+------------------------+
|accountname|accountproviderid|clustername|app_pmo|app_costcenter|line_item_unblended_cost|
+-----------+-----------------+-----------+-------+--------------+------------------------+
|   account1|     100000000001|   cluster1| 111111|      11111111|                   12.05|
|   account1|     100000000001|   cluster1| 666666|      55555555|                   10.09|
|   account1|     100000000001|   cluster7| 666660|      55555551|                   11.09|
|   account2|     200000000001|   cluster2| 222222|      22222222|                    52.0|
+-----------+-----------------+-----------+-------+--------------+------------------------+

I just need to find the ids in df1.line_item_usage_account_id that are not present in df2.accountproviderid, and append those rows with the fields df1.line_item_unblended_cost and df1.name, like this:

df3:

+-----------+-----------------+-----------+-------+--------------+------------------------+
|accountname|accountproviderid|clustername|app_pmo|app_costcenter|line_item_unblended_cost|
+-----------+-----------------+-----------+-------+--------------+------------------------+
|   account1|     100000000001|   cluster1| 111111|      11111111|                   12.05|
|   account1|     100000000001|   cluster1| 666666|      55555555|                   10.09|
|   account1|     100000000001|   cluster7| 666660|      55555551|                   11.09|
|   account2|     200000000001|   cluster2| 222222|      22222222|                    52.0|
|   account3|     300000000003|       null|   null|          null|                   12.03|
+-----------+-----------------+-----------+-------+--------------+------------------------+

Here is the code that builds the dataframes. Any idea how to achieve this?

from pyspark.sql import SparkSession   
spark = SparkSession.builder.getOrCreate()

df1 = spark.createDataFrame([
    [100000000001, 12.05, 'account1'], 
    [200000000001, 52.00, 'account2'], 
    [300000000003, 12.03, 'account3']], 
    schema=['line_item_usage_account_id',  'line_item_unblended_cost', 'name' ])

df2 = spark.createDataFrame([
    ['account1', 100000000001, 'cluster1', 111111, 11111111, 12.05],
    ['account1', 100000000001, 'cluster1', 666666, 55555555, 10.09],
    ['account1', 100000000001, 'cluster7', 666660, 55555551, 11.09],
    ['account2', 200000000001, 'cluster2', 222222, 22222222, 52.00]], 
    schema=['accountname', 'accountproviderid', 'clustername', 'app_pmo', 'app_costcenter', 'line_item_unblended_cost'])

Thanks in advance.

I don't have PySpark installed to check, but something like this may help:

from pyspark.sql.functions import col
df3 = df1.join(df2, df1.line_item_usage_account_id == df2.accountproviderid, how='left').filter(col('accountproviderid').isNull())

A left join followed by a filter works, but if your dataframes can be large, you may need a different approach.
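
Building on that idea, here is a minimal, untested sketch of one way to produce df3: use a left_anti join to keep only the df1 rows with no match in df2, rename them to df2's column names, and union them onto df2. The lit(None) casts are assumptions about the types of the columns that df1 does not have.

from pyspark.sql.functions import col, lit

# df1 rows whose id does not appear in df2.accountproviderid
missing = df1.join(df2, df1.line_item_usage_account_id == df2.accountproviderid, how='left_anti')

# align the columns to df2's schema, filling the unknown fields with nulls
missing = missing.select(
    col('name').alias('accountname'),
    col('line_item_usage_account_id').alias('accountproviderid'),
    lit(None).cast('string').alias('clustername'),
    lit(None).cast('long').alias('app_pmo'),
    lit(None).cast('long').alias('app_costcenter'),
    col('line_item_unblended_cost'))

# append the missing rows to df2, matching columns by name
df3 = df2.unionByName(missing)
df3.show()

A left_anti join keeps only df1's columns and discards matching rows directly, so it may also scale better than a left join plus filter when the dataframes are large.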

