
Calculating Cumulative sum in PySpark using Window Functions

I have the following sample DataFrame:

rdd = sc.parallelize([(1,20), (2,30), (3,30)])
df2 = spark.createDataFrame(rdd, ["id", "duration"])
df2.show()

+---+--------+
| id|duration|
+---+--------+
|  1|      20|
|  2|      30|
|  3|      30|
+---+--------+

I want to sort this DataFrame in descending order of duration and add a new column holding the cumulative sum of the duration. So I did the following:

from pyspark.sql import Window
from pyspark.sql.functions import sum

windowSpec = Window.orderBy(df2['duration'].desc())

df_cum_sum = df2.withColumn("duration_cum_sum", sum('duration').over(windowSpec))

df_cum_sum.show()

+---+--------+----------------+
| id|duration|duration_cum_sum|
+---+--------+----------------+
|  2|      30|              60|
|  3|      30|              60|
|  1|      20|              80|
+---+--------+----------------+

The output I want is:

+---+--------+----------------+
| id|duration|duration_cum_sum|
+---+--------+----------------+
|  2|      30|              30| 
|  3|      30|              60| 
|  1|      20|              80|
+---+--------+----------------+

How can I get this?

Here is the breakdown:

+--------+----------------+
|duration|duration_cum_sum|
+--------+----------------+
|      30|              30| #First value
|      30|              60| #Current duration + previous cum sum value
|      20|              80| #Current duration + previous cum sum value     
+--------+----------------+

You can introduce row_number to break the ties. Written in SQL:

df2.selectExpr(
    "id", "duration",
    "sum(duration) over (order by row_number() over (order by duration desc)) as duration_cum_sum"
).show()

+---+--------+----------------+
| id|duration|duration_cum_sum|
+---+--------+----------------+
|  2|      30|              30|
|  3|      30|              60|
|  1|      20|              80|
+---+--------+----------------+
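The same row_number idea can also be written with the DataFrame API. A minimal sketch (the intermediate column name rn is just an illustration):

import pyspark.sql.functions as F
from pyspark.sql import Window

# row_number() assigns a unique rank to every row, so rows that tie on
# duration no longer share a window frame when the sum is ordered by it.
w_rn = Window.orderBy(F.col('duration').desc())
w_sum = Window.orderBy('rn')

df2.withColumn('rn', F.row_number().over(w_rn)) \
   .withColumn('duration_cum_sum', F.sum('duration').over(w_sum)) \
   .drop('rn') \
   .show()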

Here you can check this:

import pyspark.sql.functions as F
from pyspark.sql import Window

df2.withColumn('cumu', F.sum('duration').over(
    Window.orderBy(F.col('duration').desc())
          .rowsBetween(Window.unboundedPreceding, 0)
)).show()
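
This works because when a window spec has an orderBy but no explicit frame, Spark defaults to a RANGE frame (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), which includes every peer row with the same duration value; that is why both 30s got 60 in the first attempt. rowsBetween(Window.unboundedPreceding, 0) switches to a ROWS frame that counts physical rows, so each tied row adds only itself. Note the relative order of tied rows (ids 2 and 3 here) is still nondeterministic; a sketch that pins it down by also ordering on id (an assumed tie-breaker, substitute whatever suits your data):

import pyspark.sql.functions as F
from pyspark.sql import Window

# ROWS frame: accumulate physical rows from the partition start to the
# current row; 'id' is an assumed tie-breaker to make ties deterministic.
w = (Window.orderBy(F.col('duration').desc(), 'id')
           .rowsBetween(Window.unboundedPreceding, 0))

df2.withColumn('duration_cum_sum', F.sum('duration').over(w)).show()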
