
Aggregate and Create Array of `Dictionary` object in Spark Dataframe Column

I created a toy Spark dataframe:

import pyspark
from pyspark.sql import functions as sf
from pyspark.sql import functions as F  # both aliases point at the same module

# sc = pyspark.SparkContext()
# sqlc = pyspark.SQLContext(sc)
df = spark.createDataFrame([('csc123','sr1', 'tac1', 'abc'), 
                            ('csc123','sr2', 'tac1', 'abc'), 
                            ('csc234','sr3', 'tac2', 'bvd'),
                            ('csc345','sr5', 'tac2', 'bvd')
                           ], 
                           ['bug_id', 'sr_link', 'TAC_engineer','de_manager'])
df.show()
+------+-------+------------+----------+
|bug_id|sr_link|TAC_engineer|de_manager|
+------+-------+------------+----------+
|csc123|    sr1|        tac1|       abc|
|csc123|    sr2|        tac1|       abc|
|csc234|    sr3|        tac2|       bvd|
|csc345|    sr5|        tac2|       bvd|
+------+-------+------------+----------+

Then I tried to aggregate for each bug id and generate a [sr_link, sr_link] array:


df_drop_dup = df.select('bug_id', 'de_manager').dropDuplicates()

df = df.withColumn('joined_column', 
                    sf.concat(sf.col('sr_link'),sf.lit(' '), sf.col('TAC_engineer')))

df_sev_arr = (df.groupby("bug_id")
                .agg(F.collect_set("joined_column"))
                .withColumnRenamed("collect_set(joined_column)", "sr_array"))

df = df_drop_dup.join(df_sev_arr, on=['bug_id'], how='inner')

df.show()

Here is the output:

+------+----------+--------------------+
|bug_id|de_manager|            sr_array|
+------+----------+--------------------+
|csc345|       bvd|          [sr5 tac2]|
|csc123|       abc|[sr2 tac1, sr1 tac1]|
|csc234|       bvd|          [sr3 tac2]|
+------+----------+--------------------+

But the actual output I really expect looks like this:

+------+----------+----------------------------------------------------------------------+
|bug_id|de_manager|                                                              sr_array|
+------+----------+----------------------------------------------------------------------+
|csc345|       bvd|                                   [{sr_link: sr5, TAC_engineer:tac2}]|
|csc123|       abc|[{sr_link: sr2, TAC_engineer:tac1},{sr_link: sr1, TAC_engineer: tac1}]|
|csc234|       bvd|                                  [{sr_link: sr3, TAC_engineer: tac2}]|
+------+----------+----------------------------------------------------------------------+

because I want the final output to be saved in JSON format, for example:

{"bug_id": "csc123",
 "de_manager": "abc",
 "sr_array": [
     {"sr_link": "sr2", "TAC_engineer": "tac1"},
     {"sr_link": "sr1", "TAC_engineer": "tac1"}
 ]}

Can anyone help? Sorry, I am quite unfamiliar with MapType in Spark Dataframe.

I have just modified a couple of your functions and added new ones as per your requirement.

The first part stays the same.

from pyspark.sql import functions as F

# sc = pyspark.SparkContext()
# sqlc = pyspark.SQLContext(sc)
df = spark.createDataFrame([('csc123','sr1', 'tac1', 'abc'), 
                            ('csc123','sr2', 'tac1', 'abc'), 
                            ('csc234','sr3', 'tac2', 'bvd'),
                            ('csc345','sr5', 'tac2', 'bvd')
                           ], 
                           ['bug_id', 'sr_link', 'TAC_engineer','de_manager'])
df.show()

I have only modified the second part.

>>> df_drop_dup = df.select('bug_id', 'de_manager').dropDuplicates()

I changed the renaming from `withColumnRenamed` to `alias`, and added the `to_json` and `struct` functions to get the desired output.

>>> df1 = df.withColumn('joined_column', F.to_json(F.struct(F.col('sr_link'), F.col('TAC_engineer'))))

>>> df_sev_arr = df1.groupby("bug_id").agg(F.collect_set("joined_column").alias("sr_array"))

>>> df = df_drop_dup.join(df_sev_arr, on=['bug_id'], how='inner')

>>> df.show(truncate=False)
+------+----------+----------------------------------------------------------------------------------+
|bug_id|de_manager|sr_array                                                                          |
+------+----------+----------------------------------------------------------------------------------+
|csc345|bvd       |[{"sr_link":"sr5","TAC_engineer":"tac2"}]                                         |
|csc123|abc       |[{"sr_link":"sr1","TAC_engineer":"tac1"}, {"sr_link":"sr2","TAC_engineer":"tac1"}]|
|csc234|bvd       |[{"sr_link":"sr3","TAC_engineer":"tac2"}]                                         |
+------+----------+----------------------------------------------------------------------------------+

Let me know if you have any questions regarding this.
