
Pyspark: Window / Cumulative Sum with Condition

Suppose I have data like this:

+------+-------+-------+---------------------+
| Col1 | Col2  | Col3  | Col4                |
+------+-------+-------+---------------------+
| A    | 0.532 | 0.234 | 2020-01-01 05:00:00 |
| B    | 0.242 | 0.224 | 2020-01-01 06:00:00 |
| A    | 0.152 | 0.753 | 2020-01-01 08:00:00 |
| C    | 0.149 | 0.983 | 2020-01-01 08:00:00 |
| A    | 0.635 | 0.429 | 2020-01-01 09:00:00 |
| A    | 0.938 | 0.365 | 2020-01-01 10:00:00 |
| C    | 0.293 | 0.956 | 2020-01-02 05:00:00 |
| A    | 0.294 | 0.234 | 2020-01-02 06:00:00 |
| E    | 0.294 | 0.394 | 2020-01-02 07:00:00 |
| D    | 0.294 | 0.258 | 2020-01-02 08:00:00 |
| A    | 0.687 | 0.666 | 2020-01-03 05:00:00 |
| C    | 0.232 | 0.494 | 2020-01-03 06:00:00 |
| D    | 0.575 | 0.845 | 2020-01-03 07:00:00 |
+------+-------+-------+---------------------+

I want to create another column that is:

  • the sum of Col2
  • grouped by Col1
  • including only records whose Col4 is at least 2 hours before the current row's

So for this example, looking at A and summing Col2:

+------+-------+-------+---------------------+
| Col1 | Col2  | Col3  | Col4                |
+------+-------+-------+---------------------+
| A    | 0.532 | 0.234 | 2020-01-01 05:00:00 | => Will be null, as it is the earliest
| A    | 0.152 | 0.753 | 2020-01-01 08:00:00 | => 0.532, as 05:00:00 is >= 2 hours prior
| A    | 0.635 | 0.429 | 2020-01-01 09:00:00 | => 0.532, as 08:00:00 is < 2 hours prior (within 2 hours of 09:00:00), but 05:00:00 is >= 2 hours prior
| A    | 0.938 | 0.365 | 2020-01-01 10:00:00 | => 0.532 + 0.152, as 09:00:00 is < 2 hours, but 08:00:00 and 05:00:00 are >= 2 hours prior
| A    | 0.294 | 0.234 | 2020-01-01 12:00:00 | => 0.532 + 0.152 + 0.635 + 0.938, as all of the earlier ones on the same day are at least 2 hours prior.
| A    | 0.687 | 0.666 | 2020-01-03 05:00:00 | => Will be null, as it is the earliest this day.
+------+-------+-------+---------------------+
  • I considered sorting and taking a cumulative sum, but I'm not sure how to exclude the records within the 2-hour range.

  • I considered grouping and summing based on a condition, but I'm not entirely sure how to do it.

  • I also considered emitting records to fill the gaps, so that every hour is populated and I could aggregate everything up to 2 hours prior. However, that would require transforming the data, since the timestamps don't naturally fall cleanly on the top of each hour; they are actual, arbitrary timestamps.
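To pin down the intended semantics, here is a small plain-Python reference implementation (the function name `conditional_cumsum` is hypothetical, used only to make the rule precise, not part of any Spark answer): for each row, sum Col2 over earlier same-day rows of the same Col1 that are at least 2 hours older, and return null when no earlier row exists at all for that key and day.

```python
from datetime import timedelta

def conditional_cumsum(rows):
    """rows: list of (col1, col2, ts) tuples with datetime timestamps.
    Returns, per row, the sum of col2 over earlier rows of the same
    col1 on the same calendar day whose timestamp is at least 2 hours
    before the current row's; None when no earlier row exists at all
    for that (col1, day)."""
    out = []
    for col1, _, ts in rows:
        earlier = [(c2, t) for c1, c2, t in rows
                   if c1 == col1 and t.date() == ts.date() and t < ts]
        if not earlier:
            out.append(None)          # first record of this key/day
        else:
            out.append(sum(c2 for c2, t in earlier
                           if ts - t >= timedelta(hours=2)))
    return out
```

Running this over the A rows above reproduces the annotated column: None, 0.532, 0.532, 0.684, 2.257, None.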

For Spark 2.4+, try this:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Growing frame per (col1, day), ordered by the timestamp's epoch seconds.
w = (Window.partitionBy("col1", F.to_date("col4", "yyyy-MM-dd HH:mm:ss"))
           .orderBy(F.unix_timestamp("col4"))
           .rowsBetween(Window.unboundedPreceding, Window.currentRow))

# try:  all col2 values seen so far in the frame
# try2: the matching epoch-second timestamps
# try3: the current row's epoch seconds (last element of try2)
# col5: sum over the zipped (try, try2) pairs whose timestamp is at
#       least 7200 s (2 h) before the current row; null when the row
#       is the first of its (col1, day) group.
(df.withColumn("try", F.collect_list("col2").over(w))
   .withColumn("try2", F.collect_list(F.unix_timestamp("col4")).over(w))
   .withColumn("col5", F.arrays_zip("try", "try2")).drop("try")
   .withColumn("try3", F.element_at("try2", -1))
   .withColumn("col5", F.when(F.size("try2") > 1, F.expr("""
        aggregate(filter(col5, x -> x.try2 <= try3 - 7200),
                  cast(0 as double),
                  (acc, y) -> acc + y.try)""")).otherwise(None))
   .drop("try3", "try2")
   .orderBy("col1", "col4")
   .show(truncate=False))

#+----+-----+-----+-------------------+------------------+
#|col1|col2 |col3 |col4               |col5              |
#+----+-----+-----+-------------------+------------------+
#|A   |0.532|0.234|2020-01-01 05:00:00|null              |
#|A   |0.152|0.753|2020-01-01 08:00:00|0.532             |
#|A   |0.635|0.429|2020-01-01 09:00:00|0.532             |
#|A   |0.938|0.365|2020-01-01 10:00:00|0.684             |
#|A   |0.294|0.234|2020-01-01 12:00:00|2.2569999999999997|
#|A   |0.687|0.666|2020-01-03 05:00:00|null              |
#|B   |0.242|0.224|2020-01-01 06:00:00|null              |
#|C   |0.149|0.983|2020-01-01 08:00:00|null              |
#|C   |0.293|0.956|2020-01-02 05:00:00|null              |
#|C   |0.232|0.494|2020-01-03 06:00:00|null              |
#|D   |0.294|0.258|2020-01-02 08:00:00|null              |
#|D   |0.575|0.845|2020-01-03 07:00:00|null              |
#|E   |0.294|0.394|2020-01-02 07:00:00|null              |
#+----+-----+-----+-------------------+------------------+
