
pyspark groupby and create a column containing a dictionary of the other columns

I have this pyspark dataframe:

df = spark.createDataFrame(
    [("a", "b", "v1", 1234, 56, 78, 9),
     ("a", "b", "v2", 987, 6, 543, 21),
     ("c", "d", "v1", 12, 345, 6, 789),
     ("c", "d", "v2", 9, 876, 5, 4321)],
    ("k1", "k2", "k3", "ca", "pa", "cb", "pb"))
df.show()

+---+---+---+----+---+---+----+
| k1| k2| k3|  ca| pa| cb|  pb|
+---+---+---+----+---+---+----+
|  a|  b| v1|1234| 56| 78|   9|
|  a|  b| v2| 987|  6|543|  21|
|  c|  d| v1|  12|345|  6| 789|
|  c|  d| v2|   9|876|  5|4321|
+---+---+---+----+---+---+----+

Basically, what I want to do is transform this dataframe by grouping on the first two keys k1 and k2, using the third key k3 as the main key of a dictionary whose values are the remaining columns (ca, pa, cb, pb), all stored in a new column. The transformation should produce a dataframe that looks exactly like this:

+---+---+--------------------------------------------------------------------------------------------------+
|k1 |k2 |k3                                                                                                |
+---+---+--------------------------------------------------------------------------------------------------+
|c  |d  |{"v1": {"pa": 345, "pb": 789, "ca": 12, "cb": 6}, "v2": {"pa": 876, "pb": 4321, "ca": 9, "cb": 5}}|
|a  |b  |{"v1": {"pa": 56, "pb": 9, "ca": 1234, "cb": 78}, "v2": {"pa": 6, "pb": 21, "ca": 987, "cb": 543}}|
+---+---+--------------------------------------------------------------------------------------------------+

To do this I wrote the code below, but I think it can be improved (with pandas_udf or something else). I haven't found a better solution, so I'm looking for any suggestion/guidance that could lead to a more elegant and efficient one.

import json

from pyspark.sql import functions as F, types as T

def reoganize_col(list_json):
    # Merge the list of single-entry maps {k3_value: {col: value}} into one dict
    col_data = {}
    for d in list_json:
        for k, v in d.items():
            col_data[k] = v
    return json.dumps(col_data)

udf_reoganize_col = F.udf(reoganize_col, T.StringType())

df = (df.withColumn('x', F.create_map(F.lit('ca'), F.col('ca'),
                                      F.lit('cb'), F.col('cb'),
                                      F.lit('pa'), F.col('pa'),
                                      F.lit('pb'), F.col('pb')))
        .groupby('k1', 'k2')
        .agg(F.collect_list(F.create_map(F.col('k3'), F.col('x'))).alias('k3')))
df = df.withColumn('k3', udf_reoganize_col(F.col('k3')))

Your solution is almost there. I suggest you use to_json instead of a UDF to improve performance (a Python UDF has to serialize every row between the JVM and the Python worker), and struct instead of map to make the code cleaner.

(df
    .groupBy('k1', 'k2')
    .agg(F.collect_list(F.struct('k3', F.struct('pa', 'pb', 'ca', 'cb'))).alias('k3'))
    .withColumn('k3', F.to_json(F.map_from_entries('k3')))
    .show(10, False)
)

# Output
# +---+---+---------------------------------------------------------------------------------+
# |k1 |k2 |k3                                                                               |
# +---+---+---------------------------------------------------------------------------------+
# |c  |d  |{"v1":{"pa":345,"pb":789,"ca":12,"cb":6},"v2":{"pa":876,"pb":4321,"ca":9,"cb":5}}|
# |a  |b  |{"v1":{"pa":56,"pb":9,"ca":1234,"cb":78},"v2":{"pa":6,"pb":21,"ca":987,"cb":543}}|
# +---+---+---------------------------------------------------------------------------------+
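
As a side note, a minimal sketch (assuming Spark 2.4+, where map_from_entries is available, and starting from the original df): if you would rather keep k3 as a real MapType column instead of a JSON string, you can simply drop the to_json step.

result = (df
    .groupBy('k1', 'k2')
    .agg(F.collect_list(F.struct('k3', F.struct('pa', 'pb', 'ca', 'cb'))).alias('k3'))
    # keep k3 as map<string, struct<pa,pb,ca,cb>> instead of serializing to JSON
    .withColumn('k3', F.map_from_entries('k3')))

# Nested values can then be accessed directly without parsing JSON, e.g.:
# result.select(F.col('k3')['v1']['pa']).show()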
