How to fix the error "mismatched input 'partition'" for window functions in Spark SQL?
I'd like to run a window function in Spark SQL. I have Zeppelin running on top of a Spark cluster with Hadoop.
I'd like to add a row number to a table, grouped by a combination of two IDs.
This is my data:
food aisle item date_added
pear 1 1234 '2020-01-01 10:12'
banana 2 1233 '2020-01-02 10:12'
banana 2 1211 '2020-01-03 10:12'
banana 2 1412 '2020-01-04 10:12'
apple 1 1452 '2020-01-05 10:12'
apple 1 1334 '2020-01-06 10:12'
I'd like to turn the data into this:
food aisle item date_added rn
pear 1 1234 '2020-01-01 10:12' 1
banana 2 1233 '2020-01-02 10:12' 3
banana 2 1211 '2020-01-03 10:12' 2
banana 2 1412 '2020-01-04 10:12' 1
apple 1 1452 '2020-01-05 10:12' 2
apple 1 1334 '2020-01-06 10:12' 1
This is my query:
%sql
select
food,
aisle,
item,
row_number() over (order by date_added desc
partition by food, aisle
rows between unbounded preceeding and current row) as rn
from fruits
This is the error:
mismatched input 'partition' expecting {')', ',', 'RANGE', 'ROWS'}(line 5, pos 28)
How do I solve this error with Spark SQL?
The correct syntax is:
row_number() over (partition by food, aisle order by date_added desc) as rn
Inside `over(...)`, the `partition by` clause must come before `order by`. Also, a window frame specification (`rows between ... and ...`) is not needed for the ranking functions (`row_number()`, `rank()`, and `dense_rank()`); they rank every row in the partition regardless of the frame.
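To see the corrected clause order in action without a Spark cluster, here is a minimal sketch that runs the same query against an in-memory SQLite database (SQLite 3.25+, bundled with modern Python, supports the same standard window-function syntax as Spark SQL). The table and data mirror the question; only the engine is swapped.

```python
import sqlite3

# In-memory stand-in for the Spark table `fruits` from the question.
# NOTE: SQLite is used here only to demonstrate the syntax; in Zeppelin
# you would run the SELECT below as-is under %sql.
conn = sqlite3.connect(":memory:")
conn.execute(
    "create table fruits (food text, aisle int, item int, date_added text)"
)
conn.executemany(
    "insert into fruits values (?, ?, ?, ?)",
    [
        ("pear",   1, 1234, "2020-01-01 10:12"),
        ("banana", 2, 1233, "2020-01-02 10:12"),
        ("banana", 2, 1211, "2020-01-03 10:12"),
        ("banana", 2, 1412, "2020-01-04 10:12"),
        ("apple",  1, 1452, "2020-01-05 10:12"),
        ("apple",  1, 1334, "2020-01-06 10:12"),
    ],
)

# Corrected query: partition by precedes order by, and no frame clause.
rows = conn.execute("""
    select food, aisle, item, date_added,
           row_number() over (partition by food, aisle
                              order by date_added desc) as rn
    from fruits
""").fetchall()

for r in rows:
    print(r)
```

The oldest banana row (`1233`) gets `rn = 3` and the newest (`1412`) gets `rn = 1`, matching the desired output in the question.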