Spark DataFrames: CASE statement while using Window PARTITION function syntax
I need to check a condition: if ReasonCode is "YES", then use ProcessDate as one of the PARTITION columns; otherwise, do not.
The equivalent SQL query is:
SELECT PNum, SUM(SIAmt) OVER (PARTITION BY PNum,
ReasonCode ,
CASE WHEN ReasonCode = 'YES' THEN ProcessDate ELSE NULL END
ORDER BY ProcessDate RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) SumAmt
from TABLE1
So far I have tried the query below, but I am unable to incorporate the condition
CASE WHEN ReasonCode = 'YES' THEN ProcessDate ELSE NULL END in the Spark DataFrame:
val df = inputDF.select("PNum")
.withColumn("SumAmt", sum("SIAmt").over(Window.partitionBy("PNum","ReasonCode").orderBy("ProcessDate")))
Input data:
---------------------------------------
Pnum ReasonCode ProcessDate SIAmt
---------------------------------------
1 No 1/01/2016 200
1 No 2/01/2016 300
1 Yes 3/01/2016 -200
1 Yes 4/01/2016 200
---------------------------------------
Expected output:
---------------------------------------------
Pnum ReasonCode ProcessDate SIAmt SumAmt
---------------------------------------------
1 No 1/01/2016 200 200
1 No 2/01/2016 300 500
1 Yes 3/01/2016 -200 -200
1 Yes 4/01/2016 200 200
---------------------------------------------
Any suggestion/help on doing this with Spark DataFrames rather than a spark-sql query?
You can write the exact equivalent of that SQL in the DataFrame API:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions._
val df = inputDF
  .withColumn("SumAmt", sum("SIAmt").over(
    Window.partitionBy(
      col("PNum"),
      col("ReasonCode"),
      when(col("ReasonCode") === "Yes", col("ProcessDate")).otherwise(null))
    .orderBy("ProcessDate")))
You can also add the .rowsBetween(Long.MinValue, 0) part to make the running-sum frame explicit. This should give you:
+----+----------+-----------+-----+------+
|Pnum|ReasonCode|ProcessDate|SIAmt|SumAmt|
+----+----------+-----------+-----+------+
| 1| Yes| 4/01/2016| 200| 200|
| 1| No| 1/01/2016| 200| 200|
| 1| No| 2/01/2016| 300| 500|
| 1| Yes| 3/01/2016| -200| -200|
+----+----------+-----------+-----+------+
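For reference, the answer above can be put together as a single self-contained sketch. This assumes a local SparkSession (the session setup and the sample rows are mine; the column names and logic come from the question), and uses Window.unboundedPreceding/Window.currentRow as the frame bounds, which is equivalent to the RANGE/rowsBetween clause discussed above:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window

// Minimal sketch, assuming a local SparkSession; the sample rows
// reproduce the question's input data.
val spark = SparkSession.builder().master("local[*]").appName("demo").getOrCreate()
import spark.implicits._

val inputDF = Seq(
  (1, "No",  "1/01/2016",  200),
  (1, "No",  "2/01/2016",  300),
  (1, "Yes", "3/01/2016", -200),
  (1, "Yes", "4/01/2016",  200)
).toDF("PNum", "ReasonCode", "ProcessDate", "SIAmt")

// Conditional partition column: ProcessDate participates only when
// ReasonCode is "Yes" (a `when` without `otherwise` yields NULL, which
// matches the SQL CASE ... ELSE NULL END). The frame makes it a running sum.
val w = Window
  .partitionBy(
    col("PNum"),
    col("ReasonCode"),
    when(col("ReasonCode") === "Yes", col("ProcessDate")))
  .orderBy("ProcessDate")
  .rowsBetween(Window.unboundedPreceding, Window.currentRow)

inputDF.withColumn("SumAmt", sum("SIAmt").over(w)).show()
```

Because the "Yes" rows each get their own partition (their ProcessDate is part of the partition key), their running sums restart at every row, which is exactly the expected output shown above.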