How to apply groupBy and aggregate functions to a specific window in a PySpark DataFrame?
How to aggregate data within a time window to a specific date in a dataframe
I have a dataset like this:
New_ID application_start_date is_approved
1234 2022-03-29 1
2345 2022-01-29 1
1234 2021-02-28 0
567 2019-07-03 1
567 2018-09-01 0
I want to create a new attribute N_App_3M, which will be the sum of is_approved over a 3-month window ending at each row's application_start_date.
The expected output would be:
New_ID application_start_date is_approved N_App_3M
1234 2022-03-29 1 2
2345 2022-01-29 0 0
1234 2022-02-28 1 1
567 2019-07-03 1 1
567 2018-09-01 0 0
Compute the 3-month and 7-day rolling sums, then use pd.merge_asof to generate your columns:
import pandas as pd

df["application_start_date"] = pd.to_datetime(df["application_start_date"])
df = df.set_index("application_start_date").sort_index()

# 3-month rolling sum of monthly totals, and a 7-day time-based rolling sum
app_3M = df.resample("M")["is_approved"].sum().rolling(3).sum().rename("N_App_3M").fillna(0)
app_7D = df.rolling("7D")["is_approved"].sum().rename("N_App_7D").fillna(0)

# merge_asof matches each row to the nearest window total by date
output = pd.merge_asof(df, app_3M, direction="nearest", left_index=True, right_index=True)
output = pd.merge_asof(output, app_7D, direction="nearest", left_index=True, right_index=True)
>>> output
New_ID is_approved N_App_3M N_App_7D
application_start_date
2018-09-01 567 0 0.0 0.0
2019-07-03 567 1 0.0 1.0
2021-02-28 1234 0 0.0 0.0
2022-01-29 2345 1 1.0 1.0
2022-03-29 1234 1 2.0 1.0