
Calculate days from last deal date, a Time Series implementation from Pandas to Pyspark using Window function

Code in Pandas (pandas_output screenshot):

import pandas as pd

# Calculate days since last deal for customer / master customer
df['booked_date_day'] = pd.to_datetime(df['booked_date_day'])

# Diff over distinct (customer_nbr, booked_date_day) pairs, then forward-fill onto the full frame
df['customer_whole_days_from_last_deal'] = df[['customer_nbr', 'booked_date_day']].sort_values(['customer_nbr', 'booked_date_day']).drop_duplicates().groupby('customer_nbr')['booked_date_day'].diff()
df['customer_whole_days_from_last_deal'] = df.sort_values(['customer_nbr', 'booked_date_day']).groupby('customer_nbr')['customer_whole_days_from_last_deal'].fillna(method='ffill')
df['customer_whole_days_from_last_deal'] = df['customer_whole_days_from_last_deal'].dt.days

# Same logic at the master-customer level
df['master_customer_whole_days_from_last_deal'] = df[['master_cust', 'booked_date_day']].sort_values(['master_cust', 'booked_date_day']).drop_duplicates().groupby('master_cust')['booked_date_day'].diff()
df['master_customer_whole_days_from_last_deal'] = df.sort_values(['master_cust', 'booked_date_day']).groupby('master_cust')['master_customer_whole_days_from_last_deal'].fillna(method='ffill')
df['master_customer_whole_days_from_last_deal'] = df['master_customer_whole_days_from_last_deal'].dt.days

The code I developed in PySpark (the Pandas dataframe above is df; the Spark dataframe below is orders):

import sys
from pyspark.sql import functions as F
from pyspark.sql.functions import last
from pyspark.sql.window import Window

# Gap in days to the previous booked date within each customer
window1 = Window.partitionBy(orders.customer_nbr).orderBy(orders.booked_date_day)
orders = orders.withColumn("customer_whole_days_from_last_deal", F.datediff(orders.booked_date_day, F.lag(orders.booked_date_day, 1).over(window1)))

# Same gap at the master-customer level
window2 = Window.partitionBy(orders.master_cust).orderBy(orders.booked_date_day)
orders = orders.withColumn("master_whole_days_from_last_deal", F.datediff(orders.booked_date_day, F.lag(orders.booked_date_day, 1).over(window2)))

# Forward-fill: rowsBetween(-sys.maxsize, 0) spans from the start of the partition to the current row
window_ff1 = Window.partitionBy(orders.customer_nbr).orderBy(orders.booked_date_day).rowsBetween(-sys.maxsize, 0)
filled_column1 = last(orders['customer_whole_days_from_last_deal'], ignorenulls=True).over(window_ff1)
orders = orders.withColumn('customer_whole_days_from_last_deal', filled_column1)

window_ff2 = Window.partitionBy(orders.master_cust).orderBy(orders.booked_date_day).rowsBetween(-sys.maxsize, 0)
filled_column2 = last(orders['master_whole_days_from_last_deal'], ignorenulls=True).over(window_ff2)
orders = orders.withColumn('master_whole_days_from_last_deal', filled_column2)

I am trying to calculate the days since the last deal for each customer / master customer. Please let me know what I am doing wrong in PySpark, because I get one row in Pandas but more rows in PySpark.
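
For reference, one plausible cause of the mismatch (an assumption on my part, not something confirmed in the post): the Pandas code computes diff() over de-duplicated (customer_nbr, booked_date_day) pairs and then forward-fills onto the full frame, while the PySpark code above applies lag() to every order row, so repeated deals on the same day get a 0-day gap instead of inheriting the value from the previous distinct date. A minimal sketch on hypothetical toy data (not from the question) shows the per-row behaviour:

# Hypothetical toy data to illustrate why un-deduplicated rows come out differently
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
toy = spark.createDataFrame(
    [("A", "2020-01-01"), ("A", "2020-01-01"), ("A", "2020-01-05")],
    ["customer_nbr", "booked_date_day"],
).withColumn("booked_date_day", F.to_date("booked_date_day"))

w = Window.partitionBy("customer_nbr").orderBy("booked_date_day")
# Without dropDuplicates() the second 2020-01-01 row gets a 0-day gap,
# whereas the Pandas logic (diff on distinct dates, then ffill) carries the
# previous distinct-date row's value onto the duplicate instead.
toy.withColumn("gap", F.datediff("booked_date_day", F.lag("booked_date_day", 1).over(w))).show()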

The working PySpark code (pyspark_output screenshot):

# Compute the gap on de-duplicated (key, date) pairs, mirroring the Pandas drop_duplicates() step
df1 = orders.select('customer_nbr', 'booked_date_day').sort("customer_nbr", "booked_date_day").dropDuplicates()
window1 = Window.partitionBy(df1.customer_nbr).orderBy(df1.booked_date_day)
df1 = df1.withColumn("customer_whole_days_from_last_deal", F.datediff(df1.booked_date_day, F.lag(df1.booked_date_day, 1).over(window1)))
window_ff1 = Window.partitionBy(df1.customer_nbr).orderBy(df1.booked_date_day).rowsBetween(-sys.maxsize, 0)
filled_column1 = last(df1['customer_whole_days_from_last_deal'], ignorenulls=True).over(window_ff1)
df1 = df1.withColumn('customer_whole_days_from_last_deal', filled_column1)

# Same at the master-customer level
df2 = orders.select('master_cust', 'booked_date_day').sort("master_cust", "booked_date_day").dropDuplicates()
window2 = Window.partitionBy(df2.master_cust).orderBy(df2.booked_date_day)
df2 = df2.withColumn("master_whole_days_from_last_deal", F.datediff(df2.booked_date_day, F.lag(df2.booked_date_day, 1).over(window2)))
window_ff2 = Window.partitionBy(df2.master_cust).orderBy(df2.booked_date_day).rowsBetween(-sys.maxsize, 0)
filled_column2 = last(df2['master_whole_days_from_last_deal'], ignorenulls=True).over(window_ff2)
df2 = df2.withColumn('master_whole_days_from_last_deal', filled_column2)

Then, using a broadcast join, I joined the dataframes 'df1' and 'df2' back to 'orders' (pyspark_final_output), and this solved my problem.
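
A minimal sketch of that join-back step, assuming the join keys are the grouping column plus booked_date_day (the exact keys and join type are not shown in the post):

from pyspark.sql.functions import broadcast

# Broadcast the small de-duplicated frames and join the computed gaps back onto every order row
orders = orders.join(broadcast(df1), on=['customer_nbr', 'booked_date_day'], how='left')
orders = orders.join(broadcast(df2), on=['master_cust', 'booked_date_day'], how='left')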
