
How to Pivot multiple columns in pyspark similar to pandas

I want to perform an operation in PySpark similar to what I do in pandas.

My dataframe is:

   Year  win_loss_date        Deal       L2 GFCID Name        L2 GFCID  GFCID    GFCID Name           Client Priority  Location       Deal Location  Revenue       Deal Conclusion  New/Rebid
0  2021  2021-03-08 00:00:00  1-2JZONGU  TEST GFCID CREATION  P-1-P1DO  P-1-P5O  TEST GFCID CREATION  None             UNITED STATES  UNITED STATES  4567.0000000  Won              New

In pandas, the code for the pivot is:

import pandas as pd

df = pd.pivot_table(deal_df_pandas,
                    index=['GFCID', 'GFCID Name', 'Client Priority'],
                    columns=['New/Rebid', 'Year', 'Deal Conclusion'],
                    aggfunc={'Deal': 'count',
                             'Revenue': 'sum',
                             'Location': lambda x: set(x),
                             'Deal Location': lambda x: set(x)}).reset_index()

columns=['New/Rebid', 'Year', 'Deal Conclusion'] -- these are the pivoted columns.

The output I get, and expect:

                GFCID            GFCID Name Client Priority Deal                                    Revenue
New/Rebid                                                   New                 Rebid               New                                            Rebid
Year                                                        2020      2021      2020      2021      2020                  2021                     2020      2021
Deal Conclusion                                             Lost  Won Lost  Won Lost  Won Lost  Won Lost  Won             Lost       Won           Lost  Won Lost  Won
0          0000000752  ARAMARK SERVICES INC         Bronze  NaN   1.0 1.0   2.0 NaN   NaN NaN   NaN NaN   1600000.0000000 20.0000000 20000.0000000 NaN   NaN NaN   NaN

What I want is to convert the above code to PySpark. What I am trying does not work:

from pyspark.sql import functions as F

df_pivot2 = (df_d1
    .groupby('GFCID', 'GFCID Name', 'Client Priority')
    .pivot('New/Rebid')
    .agg(F.first('Year'), F.first('Deal Conclusion'), F.count('Deal'), F.sum('Revenue')))

since this cannot be done in PySpark (pivot() accepts only a single pivot column):

(df_d1
    .groupby('GFCID', 'GFCID Name', 'Client Priority')
    .pivot('New/Rebid', 'Year', 'Deal Conclusion'))  # error: pivot() does not take multiple pivot columns
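
For reference, a minimal illustration of the signature PySpark actually exposes: pivot() takes a single pivot column and, optionally, a list of that column's values (the values 'New' and 'Rebid' below are taken from the sample data):

# pivot() accepts one pivot column plus an optional list of its values;
# it does not accept additional pivot columns
df_d1 \
    .groupby('GFCID', 'GFCID Name', 'Client Priority') \
    .pivot('New/Rebid', ['New', 'Rebid']) \
    .count()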

You can concatenate the multiple columns into a single column, which can then be used within pivot.

Consider the example below.

data_sdf.show()

# +---+-----+--------+--------+
# | id|state|    time|expected|
# +---+-----+--------+--------+
# |  1|    A|20220722|       1|
# |  1|    A|20220723|       1|
# |  1|    B|20220724|       2|
# |  2|    B|20220722|       1|
# |  2|    C|20220723|       2|
# |  2|    B|20220724|       3|
# +---+-----+--------+--------+

from pyspark.sql import functions as func

data_sdf. \
    withColumn('pivot_col', func.concat_ws('_', 'state', 'time')). \
    groupBy('id'). \
    pivot('pivot_col'). \
    agg(func.sum('expected')). \
    fillna(0). \
    show()

# +---+----------+----------+----------+----------+----------+
# | id|A_20220722|A_20220723|B_20220722|B_20220724|C_20220723|
# +---+----------+----------+----------+----------+----------+
# |  1|         1|         1|         0|         2|         0|
# |  2|         0|         0|         1|         3|         2|
# +---+----------+----------+----------+----------+----------+

The input dataframe has 2 fields, state and time, that are to be pivoted. They are concatenated with a '_' delimiter and used within pivot. You can then use multiple aggregations within the agg, as per your requirements; a sketch applied to your dataframe follows below.
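
For instance, a minimal sketch applying the same idea to the dataframe from the question (assuming it is available as df_d1; the aliases deal_count, revenue_sum, locations and deal_locations are illustrative):

from pyspark.sql import functions as F

df_pivot = (df_d1
    # concatenate the three pivot columns into one, e.g. 'New_2021_Won'
    # (Year is cast to string so concat_ws accepts it)
    .withColumn('pivot_col',
                F.concat_ws('_', 'New/Rebid', F.col('Year').cast('string'), 'Deal Conclusion'))
    .groupby('GFCID', 'GFCID Name', 'Client Priority')
    .pivot('pivot_col')
    # multiple aggregations, mirroring the aggfunc dict from the pandas version
    .agg(F.count('Deal').alias('deal_count'),
         F.sum('Revenue').alias('revenue_sum'),
         F.collect_set('Location').alias('locations'),
         F.collect_set('Deal Location').alias('deal_locations')))

With multiple aggregations, Spark suffixes each pivot value with the aggregation alias, so the resulting columns look like New_2021_Won_deal_count, New_2021_Won_revenue_sum, and so on.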
