
Pyspark: join dataframe as an array type column to another dataframe

I am trying to join two dataframes in PySpark, with one table joined to the other as an array type column.

For example, given these tables:

from pyspark.sql import Row, SparkSession

# assumes a local or already-running Spark session
spark = SparkSession.builder.getOrCreate()

df1 = spark.createDataFrame([
    Row(a = 1, b = 'C', c = 26, d = 'abc'),
    Row(a = 1, b = 'C', c = 27, d = 'def'),
    Row(a = 1, b = 'D', c = 51, d = 'ghi'),
    Row(a = 2, b = 'C', c = 40, d = 'abc'),
    Row(a = 2, b = 'D', c = 45, d = 'abc'),
    Row(a = 2, b = 'D', c = 38, d = 'def')
])

df2 = spark.createDataFrame([
    Row(a = 1, b = 'C', e = 2, f = 'cba'),
    Row(a = 1, b = 'D', e = 3, f = 'ihg'),
    Row(a = 2, b = 'C', e = 7, f = 'cba'),
    Row(a = 2, b = 'D', e = 9, f = 'cba')
])

I want to join df1 to df2 on a and b, but df1.c and df1.d should become a single array type column. In addition, all column names should be kept. The output of the new dataframe should be convertible to this json structure (example for the first two rows):

{
    "a": 1,
    "b": "C",
    "e": 2,
    "f": "cba",
    "df1": [
            {
                "c": 26,
                "d": "abc"
            },
            {
                "c": 27,
                "d": "def"
            } 
           ]
}

Any ideas on how to achieve this would be greatly appreciated!

Thanks,

Carolina

Based on your sample input data:

Aggregation on df1

from pyspark.sql import functions as F


df1 = df1.groupBy("a", "b").agg(
    F.collect_list(F.struct(F.col("c"), F.col("d"))).alias("df1")
)

df1.show()
+---+---+--------------------+
|  a|  b|                 df1|
+---+---+--------------------+
|  1|  C|[[26, abc], [27, ...|
|  1|  D|         [[51, ghi]]|
|  2|  D|[[45, abc], [38, ...|
|  2|  C|         [[40, abc]]|
+---+---+--------------------+
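One caveat worth noting: collect_list does not guarantee element order after the groupBy shuffle, so the structs inside the array may come back in any order. If a deterministic order is needed, a minimal variant of the aggregation step above (replacing it, so it still starts from the original df1) can wrap the result in sort_array, which sorts the structs field by field, c first:

df1 = df1.groupBy("a", "b").agg(
    # sort_array makes the array order deterministic across runs,
    # ordering the structs by c, then d
    F.sort_array(F.collect_list(F.struct(F.col("c"), F.col("d")))).alias("df1")
)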

Join with df2

df3 = df1.join(df2, on=["a", "b"])

df3.show()
+---+---+--------------------+---+---+
|  a|  b|                 df1|  e|  f|
+---+---+--------------------+---+---+
|  1|  C|[[26, abc], [27, ...|  2|cba|
|  1|  D|         [[51, ghi]]|  3|ihg|
|  2|  D|[[45, abc], [38, ...|  9|cba|
|  2|  C|         [[40, abc]]|  7|cba|
+---+---+--------------------+---+---+
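From there, the json structure from the question falls out directly. A minimal sketch using the DataFrame's built-in toJSON, which serializes each row to one JSON string (the output path in the commented write is just a placeholder):

# one JSON object per row; the df1 column becomes an array of
# {"c": ..., "d": ...} objects, matching the structure in the question
for row in df3.toJSON().take(2):
    print(row)

# alternatively, write the whole dataframe out as JSON lines:
# df3.write.json("/tmp/df3_json")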
