
How to join on multiple columns in Pyspark?


I am using Spark 1.3 and would like to join on multiple columns using the Python interface (SparkSQL).

The following works:

I first register them as temp tables.

numeric.registerTempTable("numeric")
Ref.registerTempTable("Ref")

test  = numeric.join(Ref, numeric.ID == Ref.ID, joinType='inner')

I would now like to join them based on multiple columns.

I get a SyntaxError: invalid syntax with this:

test  = numeric.join(Ref,
   numeric.ID == Ref.ID AND numeric.TYPE == Ref.TYPE AND
   numeric.STATUS == Ref.STATUS ,  joinType='inner')

You should use the & / | operators and be careful about operator precedence (== has lower precedence than bitwise AND and OR); a short sketch of that pitfall follows the example output below:

df1 = sqlContext.createDataFrame(
    [(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
    ("x1", "x2", "x3"))

df2 = sqlContext.createDataFrame(
    [(1, "f", -1.0), (2, "b", 0.0)], ("x1", "x2", "x3"))

df = df1.join(df2, (df1.x1 == df2.x1) & (df1.x2 == df2.x2))
df.show()

## +---+---+---+---+---+---+
## | x1| x2| x3| x1| x2| x3|
## +---+---+---+---+---+---+
## |  2|  b|3.0|  2|  b|0.0|
## +---+---+---+---+---+---+
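
As noted above, == binds more loosely than &, so dropping the parentheses gives a different expression than intended. A minimal sketch of the pitfall, reusing df1 and df2 from the example above (the variable names here are only for illustration):

# Without parentheses, & binds tighter than ==, so the expression is parsed as
#   df1.x1 == (df2.x1 & df1.x2) == df2.x2
# i.e. a chained comparison on Columns, not two equality conditions, and PySpark
# typically raises an error rather than performing the intended join.
# bad = df1.join(df2, df1.x1 == df2.x1 & df1.x2 == df2.x2)

# Correct: parenthesize each comparison before combining with & (AND) or | (OR).
good = df1.join(df2, (df1.x1 == df2.x1) & (df1.x2 == df2.x2), 'inner')
good.show()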

An alternative approach would be:

df1 = sqlContext.createDataFrame(
    [(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
    ("x1", "x2", "x3"))

df2 = sqlContext.createDataFrame(
    [(1, "f", -1.0), (2, "b", 0.0)], ("x1", "x2", "x4"))

df = df1.join(df2, ['x1','x2'])
df.show()

Output:

+---+---+---+---+
| x1| x2| x3| x4|
+---+---+---+---+
|  2|  b|3.0|0.0|
+---+---+---+---+

The main advantage is that the columns on which the tables are joined are not duplicated in the output, reducing the risk of encountering errors such as org.apache.spark.sql.AnalysisException: Reference 'x1' is ambiguous, could be: x1#50L, x1#57L.
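
As a rough illustration of that advantage (the select calls below are assumed usage, not part of the original answer), compare referring to x1 after each style of join:

# Expression join keeps the x1 and x2 columns from both sides,
# so a plain string reference to "x1" afterwards is ambiguous:
dup = df1.join(df2, (df1.x1 == df2.x1) & (df1.x2 == df2.x2))
# dup.select("x1")   # AnalysisException: Reference 'x1' is ambiguous

# Joining on a list of column names keeps a single x1/x2,
# so unqualified string references work:
dedup = df1.join(df2, ['x1', 'x2'])
dedup.select('x1', 'x3', 'x4').show()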


Whenever the columns in the two tables have different names (let's say in the example above, df2 has the columns y1, y2 and y4), you could use the following syntax:

df = df1.join(df2.withColumnRenamed('y1','x1').withColumnRenamed('y2','x2'), ['x1','x2'])
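
A self-contained sketch of that rename-then-join pattern, assuming a df2 whose key columns are named y1 and y2 as described above:

df1 = sqlContext.createDataFrame(
    [(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
    ("x1", "x2", "x3"))

df2 = sqlContext.createDataFrame(
    [(1, "f", -1.0), (2, "b", 0.0)], ("y1", "y2", "y4"))

# Rename df2's key columns to match df1's, then join on the shared names
# so each key column appears only once in the result.
df = df1.join(
    df2.withColumnRenamed('y1', 'x1').withColumnRenamed('y2', 'x2'),
    ['x1', 'x2'])
df.show()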
You can also pass the join conditions as a list to the on argument:

test = numeric.join(Ref, 
   on=[
     numeric.ID == Ref.ID, 
     numeric.TYPE == Ref.TYPE,
     numeric.STATUS == Ref.STATUS 
   ], how='inner')

You can also provide a list of strings, if the column names are the same.

df1 = sqlContext.createDataFrame(
    [(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
    ("x1", "x2", "x3"))

df2 = sqlContext.createDataFrame(
    [(1, "f", -1.0), (2, "b", 0.0)], ("x1", "x2", "x3"))

df = df1.join(df2, ["x1","x2"])

df.show()
+---+---+---+---+
| x1| x2| x3| x3|
+---+---+---+---+
|  2|  b|3.0|0.0|
+---+---+---+---+

Another way to go about this, if the column names are different and you want to rely on column-name strings, is the following:

from pyspark.sql.functions import col

df1 = sqlContext.createDataFrame(
    [(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
    ("x1", "x2", "x3"))

df2 = sqlContext.createDataFrame(
    [(1, "f", -1.0), (2, "b", 0.0)], ("y1", "y2", "y3"))

df = df1.join(df2, (col("x1")==col("y1")) & (col("x2")==col("y2")))

df.show()
+---+---+---+---+---+---+
| x1| x2| x3| y1| y2| y3|
+---+---+---+---+---+---+
|  2|  b|3.0|  2|  b|0.0|
+---+---+---+---+---+---+

This is useful if you want to reference column names dynamically, and also in cases where a column name contains a space and you cannot use the df.col_name syntax. You should consider renaming the column in such cases, though.
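
For example, a hypothetical sketch of building the join condition dynamically from column-name strings (the key_pairs list below is an assumption for illustration, reusing df1 and df2 from the last example):

from functools import reduce
from pyspark.sql.functions import col

# Pairs of (left column, right column) to join on, as plain strings.
key_pairs = [("x1", "y1"), ("x2", "y2")]

# Combine the per-pair equality conditions with & (AND).
cond = reduce(lambda acc, c: acc & c,
              [col(left) == col(right) for left, right in key_pairs])

df = df1.join(df2, cond)
df.show()

# col(...) also accepts names containing spaces, where df.col_name cannot be used:
# col("first name") == col("given name")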
