Spark - Non-time-based windows are not supported on streaming DataFrames/Datasets

I need to write a Spark SQL query with an inner select and partition by. The problem is that I get an AnalysisException. I have already spent a few hours on this, but other approaches have not worked either.

Exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Non-time-based windows are not supported on streaming DataFrames/Datasets;;
Window [sum(cast(_w0#41 as bigint)) windowspecdefinition(deviceId#28, timestamp#30 ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS grp#34L], [deviceId#28], [timestamp#30 ASC NULLS FIRST]
+- Project [currentTemperature#27, deviceId#28, status#29, timestamp#30, wantedTemperature#31, CASE WHEN (status#29 = cast(false as boolean)) THEN 1 ELSE 0 END AS _w0#41]

I assume this query is too complicated to implement like this, but I don't know how to fix it.

    SparkSession spark = SparkUtils.getSparkSession("RawModel");

    Dataset<RawModel> datasetMap = readFromKafka(spark);

    // Register the streaming Dataset as a temporary view so it can be queried with SQL.
    // (registerTempTable is deprecated since Spark 2.0; createOrReplaceTempView replaces it.)
    datasetMap.createOrReplaceTempView("test");

    // The inner select assigns a running group id (grp) per device that increments
    // on every row with status = 'false'; the outer query aggregates each group.
    Dataset<Row> res = datasetMap.sqlContext().sql("" +
            " select deviceId, grp, avg(currentTemperature) as averageT, min(timestamp) as minTime, max(timestamp) as maxTime, count(*) as countFrame " +
            " from (select test.*, sum(case when status = 'false' then 1 else 0 end) over (partition by deviceId order by timestamp) as grp " +
            "       from test " +
            "      ) test " +
            " group by deviceId, grp ");

Any suggestion would be greatly appreciated. Thank you.

I believe the issue is in the windowing specification:

over (partition by deviceId order by timestamp) 

The partition would need to be over a time-based column, in your case timestamp. The following should work:

over (partition by timestamp order by timestamp) 

That will of course not address the intent of your query. The following might be attempted, though it is unclear whether Spark would support it:

over (partition by timestamp, deviceId order by timestamp) 

Even if Spark does support it, that would still change the semantics of your query.
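
If the goal is simply to get an aggregation running on the stream, one workaround is to drop the analytic window entirely and group by one of Spark's built-in time-based windows, which streaming does support. A minimal sketch, assuming timestamp is a TimestampType event-time column; the 10-minute tumbling window is an arbitrary choice, and this replaces the status-based grp grouping with fixed time buckets, so the semantics differ from the original query:

    import static org.apache.spark.sql.functions.*;

    // Time-based windows ARE supported on streaming Datasets:
    // aggregate per device over a 10-minute tumbling window on the event-time column.
    Dataset<Row> windowed = datasetMap
            .groupBy(
                    window(col("timestamp"), "10 minutes"),  // tumbling event-time window (assumed duration)
                    col("deviceId"))
            .agg(
                    avg("currentTemperature").alias("averageT"),
                    min("timestamp").alias("minTime"),
                    max("timestamp").alias("maxTime"),
                    count(lit(1)).alias("countFrame"));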

Update

Here is a definitive source, from Tathagata Das, who is a key/core committer on Spark Streaming: http://apache-spark-user-list.1001560.n3.nabble.com/Does-partition-by-and-order-by-works-only-in-stateful-case-td31816.html
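
Following that thread, the supported way to express this kind of per-device, order-dependent grouping on a stream is arbitrary stateful processing (mapGroupsWithState / flatMapGroupsWithState) rather than SQL analytic windows. Below is a rough sketch of the shape such code takes; the RawModel accessors (getDeviceId(), isStatus()) are assumptions, and a real implementation would also have to handle event ordering within each micro-batch and emit the per-group aggregates (avg, min, max, count) instead of a bare group counter:

    import org.apache.spark.api.java.function.MapFunction;
    import org.apache.spark.api.java.function.MapGroupsWithStateFunction;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.streaming.GroupStateTimeout;
    import scala.Tuple2;

    // Keep one integer of state per device: how many status=false rows have been seen.
    // Each false row starts a new group, mirroring the sum(case when ...) over (...) trick.
    Dataset<Tuple2<String, Integer>> groups = datasetMap
            .groupByKey((MapFunction<RawModel, String>) RawModel::getDeviceId, Encoders.STRING())
            .mapGroupsWithState(
                    (MapGroupsWithStateFunction<String, RawModel, Integer, Tuple2<String, Integer>>)
                            (deviceId, events, state) -> {
                                int grp = state.exists() ? state.get() : 0;
                                while (events.hasNext()) {
                                    if (!events.next().isStatus()) {  // isStatus() is an assumed accessor
                                        grp++;                        // status=false starts a new group
                                    }
                                }
                                state.update(grp);
                                return new Tuple2<>(deviceId, grp);
                            },
                    Encoders.INT(),                                    // state encoder
                    Encoders.tuple(Encoders.STRING(), Encoders.INT()), // output encoder
                    GroupStateTimeout.NoTimeout());

Note that mapGroupsWithState on a streaming Dataset requires the update output mode when the query is started.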
