How to apply groupBy and aggregate functions to a specific window in a PySpark DataFrame?
I want to apply a groupBy over a 60 minutes time window, but it only collects values in the hours where they occur, and shows nothing for windows with no values. What I would like instead is for windows without any values to yield 0, so that the data comes out in a more continuous form.
For example:
from pyspark.sql import Row
from pyspark.sql import functions as sf
from pyspark.sql.types import TimestampType

df = sc.parallelize(
    [Row(datetime='2015/01/01 03:00:36', value=2.0),
     Row(datetime='2015/01/01 03:40:12', value=3.0),
     Row(datetime='2015/01/01 05:25:30', value=1.0)]).toDF()
df1 = df.select(sf.unix_timestamp(sf.column("datetime"), 'yyyy/MM/dd HH:mm:ss').cast(TimestampType()).alias("timestamp"), sf.column("value"))
df1.groupBy(sf.window(sf.col("timestamp"), "60 minutes")).agg(sf.sum("value")).show(truncate=False)
The output I get is:
+------------------------------------------+----------+
|window |sum(value)|
+------------------------------------------+----------+
|[2015-01-01 03:00:00, 2015-01-01 04:00:00]|5.0 |
|[2015-01-01 05:00:00, 2015-01-01 06:00:00]|1.0 |
+------------------------------------------+----------+
whereas I would rather have the output be:
+------------------------------------------+----------+
|window |sum(value)|
+------------------------------------------+----------+
|[2015-01-01 03:00:00, 2015-01-01 04:00:00]|5.0 |
|[2015-01-01 04:00:00, 2015-01-01 05:00:00]|0.0 |
|[2015-01-01 05:00:00, 2015-01-01 06:00:00]|1.0 |
+------------------------------------------+----------+
Edit:
How can I then extend this to a double groupBy, so that each "name" gets an equal number of windows:
df = sc.parallelize(
    [Row(name='ABC', datetime='2015/01/01 03:00:36', value=2.0),
     Row(name='ABC', datetime='2015/01/01 03:40:12', value=3.0),
     Row(name='ABC', datetime='2015/01/01 05:25:30', value=1.0),
     Row(name='XYZ', datetime='2015/01/01 05:15:30', value=2.0)]).toDF()
df1 = df.select('name', sf.unix_timestamp(sf.column("datetime"), 'yyyy/MM/dd HH:mm:ss').cast(TimestampType()).alias("timestamp"), sf.column("value"))
df1.show(truncate=False)
+----+-------------------+-----+
|name|timestamp |value|
+----+-------------------+-----+
|ABC |2015-01-01 03:00:36|2.0 |
|ABC |2015-01-01 03:40:12|3.0 |
|ABC |2015-01-01 05:25:30|1.0 |
|XYZ |2015-01-01 05:15:30|2.0 |
+----+-------------------+-----+
I would like the result to be:
+----+------------------------------------------+----------+
|name|window |sum(value)|
+----+------------------------------------------+----------+
|ABC |[2015-01-01 03:00:00, 2015-01-01 04:00:00]|5.0 |
|ABC |[2015-01-01 04:00:00, 2015-01-01 05:00:00]|0.0 |
|ABC |[2015-01-01 05:00:00, 2015-01-01 06:00:00]|1.0 |
|XYZ |[2015-01-01 03:00:00, 2015-01-01 04:00:00]|0.0 |
|XYZ |[2015-01-01 04:00:00, 2015-01-01 05:00:00]|0.0 |
|XYZ |[2015-01-01 05:00:00, 2015-01-01 06:00:00]|2.0 |
+----+------------------------------------------+----------+
This is actually how grouping by window behaves, since you have no row falling between hour 4 and hour 5.
However, you can make it work by generating the intervals in a separate DataFrame, using the sequence function from min(timestamp) to max(timestamp) truncated to the hour. Then, apply the transform function to the generated sequence to create a struct holding the start and end time of each bucket:
from pyspark.sql import functions as sf

buckets = df1.agg(
    sf.expr("""transform(
        sequence(date_trunc('hour', min(timestamp)),
                 date_trunc('hour', max(timestamp)),
                 interval 1 hour),
        x -> struct(x as start, x + interval 1 hour as end)
    )""").alias("buckets")
).select(sf.explode("buckets").alias("window"))
buckets.show(truncate=False)
#+------------------------------------------+
#|window |
#+------------------------------------------+
#|[2015-01-01 03:00:00, 2015-01-01 04:00:00]|
#|[2015-01-01 04:00:00, 2015-01-01 05:00:00]|
#|[2015-01-01 05:00:00, 2015-01-01 06:00:00]|
#+------------------------------------------+
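As an aside (not part of the original answer), the same buckets can also be built without the SQL expression string, using only DataFrame API functions; a minimal sketch, assuming Spark 2.4+ where sf.sequence is available:

# Sketch: the same hourly buckets via the DataFrame API (sf.sequence needs Spark 2.4+)
buckets2 = df1.agg(
    sf.sequence(
        sf.date_trunc('hour', sf.min('timestamp')),   # start of the first bucket
        sf.date_trunc('hour', sf.max('timestamp')),   # start of the last bucket
        sf.expr("interval 1 hour")                    # step between bucket starts
    ).alias("starts")
).select(sf.explode("starts").alias("start")).select(
    sf.struct(
        sf.col("start"),
        (sf.col("start") + sf.expr("interval 1 hour")).alias("end")
    ).alias("window")
)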
Now, join back to the original DataFrame on the timestamp falling inside the window, and group by the window column to sum the values:
df2 = buckets.join(
df1,
(sf.col("timestamp") >= sf.col("window.start")) &
(sf.col("timestamp") < sf.col("window.end")),
"left"
).groupBy("window").agg(
sf.sum(sf.coalesce(sf.col("value"), sf.lit(0))).alias("sum")
)
df2.show(truncate=False)
#+------------------------------------------+---+
#|window |sum|
#+------------------------------------------+---+
#|[2015-01-01 04:00:00, 2015-01-01 05:00:00]|0.0|
#|[2015-01-01 03:00:00, 2015-01-01 04:00:00]|5.0|
#|[2015-01-01 05:00:00, 2015-01-01 06:00:00]|1.0|
#+------------------------------------------+---+
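For the per-name case from the edit, the same idea extends naturally: cross-join the buckets with the distinct names so that every name is paired with every window, then left-join the values and aggregate per (name, window). A sketch building on the code above (the rename to name_ only avoids column ambiguity in the join condition, and the orderBy is just to make the output readable):

# Sketch: equal number of windows for every name
names = df1.select("name").distinct()

df3 = buckets.crossJoin(names).join(
    df1.withColumnRenamed("name", "name_"),
    (sf.col("name_") == sf.col("name")) &
    (sf.col("timestamp") >= sf.col("window.start")) &
    (sf.col("timestamp") < sf.col("window.end")),
    "left"
).groupBy("name", "window").agg(
    sf.sum(sf.coalesce(sf.col("value"), sf.lit(0))).alias("sum(value)")
).orderBy("name", "window")

df3.show(truncate=False)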