
Generic coalesce of multiple columns in join pyspark

I have to merge many Spark DataFrames. After the merge, I want to perform a coalesce between multiple columns with the same names.

I was able to create a minimal example following this question.

However, I need a more generic piece of code to support: a set of variables to coalesce (in the example set_vars = set(('var1','var2'))), and multiple join keys (in the example join_keys = set(('id'))).

Is there a less verbose (more generic) way to obtain this result in pyspark?

from pyspark.sql import SparkSession
from pyspark.sql.functions import coalesce

spark = SparkSession.builder.getOrCreate()

df1 = spark.createDataFrame([
        (1, None, "aa"),
        (2, "a", None),
        (3, "b", None),
        (4, "h", None),],
        "id int, var1 string, var2 string",
       )

df2 = spark.createDataFrame([
        (1, "f", "Ba"),
        (2, "a", "bb"),
        (3, "b", None),],
        "id int, var1 string, var2 string",
       )

df1 = df1.alias("df1")
df2 = df2.alias("df2")

df3 = (
    df1.join(df2, df1.id == df2.id, how="left")
    .withColumn("var1_", coalesce("df1.var1", "df2.var1"))
    .drop("var1")
    .withColumnRenamed("var1_", "var1")
    .withColumn("var2_", coalesce("df1.var2", "df2.var2"))
    .drop("var2")
    .withColumnRenamed("var2_", "var2")
)

We can avoid duplicate columns by passing the columns as a list to the join method instead of writing a join condition; refer to this link. But here there are some common columns that are not part of the join condition, so we can use a for loop to generalize your code.

from pyspark.sql import SparkSession
from pyspark.sql.functions import coalesce

spark = SparkSession.builder.master("local[*]").getOrCreate()

df1 = spark.createDataFrame([
        (1, None, "aa"),
        (2, "a", None),
        (3, "b", None),
        (4, "h", None),],
        "id int, var1 string, var2 string",
       )

df2 = spark.createDataFrame([
        (1, "f", "Ba"),
        (2, "a", "bb"),
        (3, "b", None),],
        "id int, var1 string, var2 string",
       )

df1 = df1.alias("df1")
df2 = df2.alias("df2")

key_columns = ["id"]
# Get common columns between 2 dataframes excluding columns-
# -which are being used in joining conditions
other_common_columns = set(df1.columns).intersection(set(df2.columns))\
.difference(set(key_columns))

outputDF = df1.join(df2, key_columns, how='left')

# Coalesce each shared non-key column, drop the duplicated originals,
# and keep the result under the original column name
for i in other_common_columns:
    outputDF = (
        outputDF.withColumn(f"{i}_", coalesce(f"df1.{i}", f"df2.{i}"))
        .drop(i)
        .withColumnRenamed(f"{i}_", i)
    )

outputDF.show()

+---+----+----+
| id|var2|var1|
+---+----+----+
|  1|  aa|   f|
|  3|null|   b|
|  4|null|   h|
|  2|  bb|   a|
+---+----+----+
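A slightly more compact variant of the same idea is sketched below, assuming df1, df2, key_columns and other_common_columns are defined as above: it builds the whole projection in a single select with a list comprehension, instead of a withColumn/drop/withColumnRenamed round trip per column.

from pyspark.sql.functions import coalesce, col

joined = df1.join(df2, key_columns, how="left")

# Join keys appear only once after a list-based join, so they can be selected
# directly; every shared non-key column becomes a single coalesced column.
select_exprs = [col(k) for k in key_columns] + [
    coalesce(f"df1.{c}", f"df2.{c}").alias(c) for c in other_common_columns
]

outputDF = joined.select(*select_exprs)
outputDF.show()

Because each coalesced expression is aliased back to the original column name in one pass, there is nothing to drop or rename afterwards.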
