
Window overloaded method cannot resolve in Spark Structured Streaming (Scala)

The code below throws an overload-resolution error in Spark Structured Streaming (Scala).

Error:

Cannot resolve overloaded method window

Code
package Stream
import org.apache.spark.sql._
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.streaming.Trigger
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkContext
import org.apache.spark.sql.types._
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming._
import org.apache.spark.sql.functions.window




object SparkRestApi {
  def main(args: Array[String]): Unit = {

    val logger = Logger.getLogger("Datapipeline")
    Logger.getLogger("org").setLevel(Level.WARN)
    Logger.getLogger("akka").setLevel(Level.WARN)

    val spark = SparkSession.builder()
      .appName("StreamTest")
      .config("spark.driver.memory", "2g")
      .master("local[*]")
      //.enableHiveSupport()
      .getOrCreate()

    import spark.implicits._

    val userSchema = new StructType()
      .add("id", "string")
      .add("Faulttime", "timestamp")
      .add("name", "string")
      .add("Parentgroup", "string")
      .add("childgroup", "string")
      .add("MountStyle", "string")


    val JSONDF = spark
      .readStream
      .option("header", true)
      .option("sep", ",")
      .schema(userSchema)      // Specify the schema of the input files
      .json("D:/TEST")

    val windowColumn = window($"timestamp", "10 minutes", "5 minutes")

    val df2 = JSONDF
      .withWatermark("timestamp", "1 minutes")
      .groupBy("Parentgroup", "childgroup", "MountStyle", window("timestamp", "5 minutes", "1 minutes"))
      .agg(countDistinct("id"))

    df2.writeStream
      .outputMode("Append")
      .format("csv")
      .option("checkpointLocation", "D:/TEST/chkdir")
      .option("path", "D:/TEST/OutDir")
      .option("truncate", false)
      .start()
      .awaitTermination()

    spark.stop()


  }

}

All valuable suggestions are much appreciated. Even after adding all the libraries, this still throws the error.

An example from the manual:

val windowedCounts = words
    .withWatermark("timestamp", "10 minutes")
    .groupBy(
        window($"timestamp", "10 minutes", "5 minutes"),
        $"word")
    .count()

Try putting your window clause first, I would guess. And use $ for the field names, as in the example.
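A minimal sketch of what that might look like applied to the question's code, assuming the intent is to window on the Faulttime column defined in the schema (the DataFrame has no timestamp column). Every argument to groupBy must be a Column here, because the groupBy(String, String*) overload cannot accept the Column that window returns, and window itself only accepts a Column as its first argument:

    val df2 = JSONDF
      .withWatermark("Faulttime", "1 minutes")
      .groupBy(
        window($"Faulttime", "5 minutes", "1 minutes"), // window requires a Column, not a String
        $"Parentgroup", $"childgroup", $"MountStyle")
      // note: exact distinct counts may not be supported on streaming DataFrames;
      // approx_count_distinct("id") is the usual substitute
      .agg(countDistinct("id"))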

val JSONDF = explodedf.withWatermark("timestamp", "1 minutes")

val aggDF = JSONDF
  .groupBy(functions.window(JSONDF.col("timestamp"), "30 seconds", "30 seconds"), JSONDF.col("jsonData.name"))
  .avg("jsonData.price")
  .alias("average price")

Try this, and thank me.
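For reference, the root cause of the original error can be seen in the functions API: every overload of window takes a Column as its first argument, so a bare String column name cannot resolve. A hypothetical minimal reproduction:

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{col, window}

val ok: Column = window(col("Faulttime"), "10 minutes", "5 minutes") // compiles: Column first argument
// val bad = window("Faulttime", "10 minutes", "5 minutes")          // Cannot resolve overloaded method window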
