
PySpark Window Function: multiple conditions in orderBy on rangeBetween/rowsBetween

Is it possible to create a Window function that has multiple conditions in its orderBy for rangeBetween or rowsBetween? Assume I have a data frame like the one below.

user_id     timestamp               date        event
0040b5f0    2018-01-22 13:04:32     2018-01-22  1       
0040b5f0    2018-01-22 13:04:35     2018-01-22  0   
0040b5f0    2018-01-25 18:55:08     2018-01-25  1       
0040b5f0    2018-01-25 18:56:17     2018-01-25  1       
0040b5f0    2018-01-25 20:51:43     2018-01-25  1       
0040b5f0    2018-01-31 07:48:43     2018-01-31  1       
0040b5f0    2018-01-31 07:48:48     2018-01-31  0       
0040b5f0    2018-02-02 09:40:58     2018-02-02  1       
0040b5f0    2018-02-02 09:41:01     2018-02-02  0       
0040b5f0    2018-02-05 14:03:27     2018-02-05  1       

For each row, I need the sum of the event column values over rows whose date is no more than 3 days earlier. But I must not count events that occur later on the same day. I can create a window function like:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

days = lambda i: i * 86400
my_window = Window\
                .partitionBy(["user_id"])\
                .orderBy(F.col("date").cast("timestamp").cast("long"))\
                .rangeBetween(-days(3), 0)
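To make the shortcoming of this frame concrete, here is a plain-Python emulation of a date-only 3-day range frame over the sample rows above (the helper name is hypothetical; this is not Spark itself). It counts every event on the qualifying dates, including events later on the same day:

```python
from datetime import datetime, timedelta

# (timestamp, event) pairs from the sample data frame above
ROWS = [
    ("2018-01-22 13:04:32", 1), ("2018-01-22 13:04:35", 0),
    ("2018-01-25 18:55:08", 1), ("2018-01-25 18:56:17", 1),
    ("2018-01-25 20:51:43", 1), ("2018-01-31 07:48:43", 1),
    ("2018-01-31 07:48:48", 0), ("2018-02-02 09:40:58", 1),
    ("2018-02-02 09:41:01", 0), ("2018-02-05 14:03:27", 1),
]

def date_range_sum(rows):
    """Emulate rangeBetween(-days(3), 0) ordered by the date column only."""
    parsed = [(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), e) for ts, e in rows]
    out = []
    for ts_i, _ in parsed:
        lo = ts_i.date() - timedelta(days=3)
        # Date-only frame: every event on a date in [date - 3, date] is counted,
        # including events that happen later on the same day.
        out.append(sum(e for ts_j, e in parsed if lo <= ts_j.date() <= ts_i.date()))
    return out

print(date_range_sum(ROWS))  # → [1, 1, 4, 4, 4, 1, 1, 2, 2, 2]
```

For the 2018-01-25 18:56:17 row this frame yields 4 rather than the desired 3, because the later 20:51:43 event on the same day is pulled in.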

But this will include events that occur later on the same day. I need to create a window function that behaves like this (for the row marked with *):

user_id     timestamp               date        event
0040b5f0    2018-01-22 13:04:32     2018-01-22  1----|==============|   
0040b5f0    2018-01-22 13:04:35     2018-01-22  0  sum here       all events
0040b5f0    2018-01-25 18:55:08     2018-01-25  1 only           within 3 days 
* 0040b5f0  2018-01-25 18:56:17     2018-01-25  1----|              |
0040b5f0    2018-01-25 20:51:43     2018-01-25  1===================|       
0040b5f0    2018-01-31 07:48:43     2018-01-31  1       
0040b5f0    2018-01-31 07:48:48     2018-01-31  0       
0040b5f0    2018-02-02 09:40:58     2018-02-02  1       
0040b5f0    2018-02-02 09:41:01     2018-02-02  0       
0040b5f0    2018-02-05 14:03:27     2018-02-05  1       
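The intended semantics can be pinned down with a plain-Python reference implementation (the helper name is hypothetical): for each row, sum the events whose date falls within the previous 3 calendar days and whose timestamp is not after the current row's timestamp.

```python
from datetime import datetime, timedelta

# (timestamp, event) pairs from the sample data frame above
ROWS = [
    ("2018-01-22 13:04:32", 1), ("2018-01-22 13:04:35", 0),
    ("2018-01-25 18:55:08", 1), ("2018-01-25 18:56:17", 1),
    ("2018-01-25 20:51:43", 1), ("2018-01-31 07:48:43", 1),
    ("2018-01-31 07:48:48", 0), ("2018-02-02 09:40:58", 1),
    ("2018-02-02 09:41:01", 0), ("2018-02-05 14:03:27", 1),
]

def event_last_3d(rows):
    """Sum events within the previous 3 calendar days, up to the current timestamp."""
    parsed = [(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), e) for ts, e in rows]
    out = []
    for ts_i, _ in parsed:
        cutoff = ts_i.date() - timedelta(days=3)
        # Two conditions at once: within 3 calendar days AND not later than this row.
        out.append(sum(e for ts_j, e in parsed
                       if ts_j <= ts_i and ts_j.date() >= cutoff))
    return out

print(event_last_3d(ROWS))  # → [1, 1, 2, 3, 4, 1, 1, 2, 2, 2]
```

This reproduces the event_last_3d column in the expected result table.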

I tried creating something like:

days = lambda i: i * 86400
my_window = Window\
                .partitionBy(["user_id"])\
                .orderBy(F.col("date").cast("timestamp").cast("long"))\
                .rangeBetween(-days(3), Window.currentRow)\
                .orderBy(F.col("t_stamp"))\
                .rowsBetween(Window.unboundedPreceding, Window.currentRow)

But this only takes the last orderBy and frame into account.

The resulting table should look like this:

user_id     timestamp               date        event   event_last_3d
0040b5f0    2018-01-22 13:04:32     2018-01-22  1       1
0040b5f0    2018-01-22 13:04:35     2018-01-22  0       1
0040b5f0    2018-01-25 18:55:08     2018-01-25  1       2
0040b5f0    2018-01-25 18:56:17     2018-01-25  1       3
0040b5f0    2018-01-25 20:51:43     2018-01-25  1       4
0040b5f0    2018-01-31 07:48:43     2018-01-31  1       1
0040b5f0    2018-01-31 07:48:48     2018-01-31  0       1
0040b5f0    2018-02-02 09:40:58     2018-02-02  1       2
0040b5f0    2018-02-02 09:41:01     2018-02-02  0       2
0040b5f0    2018-02-05 14:03:27     2018-02-05  1       2

I have been stuck on this for a while; I would appreciate any advice on how to approach it.

I have written the equivalent function in Scala to fulfill your requirement. I think it should not be hard to convert to Python:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
val DAY_SECS = 24*60*60 //Seconds in a day
//Given a timestamp in seconds, returns the seconds equivalent of 00:00:00 of that date
val trimToDateBoundary = (d: Long) => (d / 86400) * 86400
//Using 4 for range here - since your requirement is to cover 3 days prev, which date wise inclusive is 4 days
//So e.g. given any TS of 25 Jan, the range will cover (25 Jan 00:00:00 - 4 times day_secs = 22 Jan 00:00:00) to current TS
val wSpec = Window.partitionBy("user_id").
                orderBy(col("timestamp").cast("long")).
                rangeBetween(trimToDateBoundary(Window.currentRow)-(4*DAY_SECS), Window.currentRow)
df.withColumn("sum", sum('event) over wSpec).show()

Here is the output when applied to your data:

+--------+--------------------+--------------------+-----+---+
| user_id|           timestamp|                date|event|sum|
+--------+--------------------+--------------------+-----+---+
|0040b5f0|2018-01-22 13:04:...|2018-01-22 00:00:...|  1.0|1.0|
|0040b5f0|2018-01-22 13:04:...|2018-01-22 00:00:...|  0.0|1.0|
|0040b5f0|2018-01-25 18:55:...|2018-01-25 00:00:...|  1.0|2.0|
|0040b5f0|2018-01-25 18:56:...|2018-01-25 00:00:...|  1.0|3.0|
|0040b5f0|2018-01-25 20:51:...|2018-01-25 00:00:...|  1.0|4.0|
|0040b5f0|2018-01-31 07:48:...|2018-01-31 00:00:...|  1.0|1.0|
|0040b5f0|2018-01-31 07:48:...|2018-01-31 00:00:...|  0.0|1.0|
|0040b5f0|2018-02-02 09:40:...|2018-02-02 00:00:...|  1.0|2.0|
|0040b5f0|2018-02-02 09:41:...|2018-02-02 00:00:...|  0.0|2.0|
|0040b5f0|2018-02-05 14:03:...|2018-02-05 00:00:...|  1.0|2.0|
+--------+--------------------+--------------------+-----+---+

I have not used your "date" column in this solution, and I am not sure how to meet your requirement while taking it into account. So, if the date of a timestamp can differ from the date column, this solution does not cover that case.

Note: rangeBetween overloads that accept Column arguments were introduced in Spark 2.3.0, and they accept date/timestamp-type columns. That might allow a more elegant solution.
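One detail worth spelling out: Scala's Window.currentRow is the constant 0L, so trimToDateBoundary(Window.currentRow) evaluates to 0 and the frame above is effectively [ts - 4*DAY_SECS, ts] around each row's raw timestamp. A plain-Python emulation of that frame (over the sample data; not Spark itself) reproduces the sums shown in the output:

```python
from datetime import datetime, timedelta

DAY_SECS = 24 * 60 * 60  # seconds in a day

# (timestamp, event) pairs from the sample data frame above
ROWS = [
    ("2018-01-22 13:04:32", 1), ("2018-01-22 13:04:35", 0),
    ("2018-01-25 18:55:08", 1), ("2018-01-25 18:56:17", 1),
    ("2018-01-25 20:51:43", 1), ("2018-01-31 07:48:43", 1),
    ("2018-01-31 07:48:48", 0), ("2018-02-02 09:40:58", 1),
    ("2018-02-02 09:41:01", 0), ("2018-02-05 14:03:27", 1),
]

def seconds_range_sum(rows, lower_secs=4 * DAY_SECS):
    """Emulate rangeBetween(-4 * DAY_SECS, Window.currentRow) on the raw timestamp."""
    parsed = [(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), e) for ts, e in rows]
    out = []
    for ts_i, _ in parsed:
        lo = ts_i - timedelta(seconds=lower_secs)
        out.append(sum(e for ts_j, e in parsed if lo <= ts_j <= ts_i))
    return out

print(seconds_range_sum(ROWS))  # → [1, 1, 2, 3, 4, 1, 1, 2, 2, 2]
```

In PySpark the equivalent frame would be Window.partitionBy("user_id").orderBy(F.col("timestamp").cast("long")).rangeBetween(-4 * DAY_SECS, Window.currentRow). It agrees with the desired output on this sample, but note it is a seconds-based approximation rather than a strict calendar-date rule, per the caveat about the date column above.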

