
Cannot resolve overloaded method window in Spark Structured Streaming (Scala)

The code below throws an overloaded-method error in Spark Structured Streaming (Scala).

Error:

Cannot resolve overloaded method window

Code
package Stream
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.StructType


object SparkRestApi {
  def main(args: Array[String]): Unit = {

    val logger = Logger.getLogger("Datapipeline")
    Logger.getLogger("org").setLevel(Level.WARN)
    Logger.getLogger("akka").setLevel(Level.WARN)

    val spark = SparkSession.builder()
      .appName("StreamTest")
      .config("spark.driver.memory", "2g")
      .master("local[*]")
      //.enableHiveSupport()
      .getOrCreate()

    import spark.implicits._

    val userSchema = new StructType()
      .add("id", "string")
      .add("Faulttime", "timestamp")
      .add("name", "string")
      .add("Parentgroup", "string")
      .add("childgroup", "string")
      .add("MountStyle", "string")


    val JSONDF = spark
      .readStream
      .option("header", true)
      .option("sep", ",")
      .schema(userSchema)      // specify the schema of the input JSON files
      .json("D:/TEST")

    val windowColumn = window($"timestamp", "10 minutes", "5 minutes")

    val df2 = JSONDF.withWatermark("timestamp", "1 minutes")
      .groupBy("Parentgroup", "childgroup", "MountStyle", window("timestamp", "5 minutes", "1 minutes"))
      .agg(countDistinct("id"))

    df2.writeStream
      .outputMode("Append")
      .format("csv")
      .option("checkpointLocation", "D:/TEST/chkdir")
      .option("path", "D:/TEST/OutDir")
      .option("truncate", false)
      .start()
      .awaitTermination()

    spark.stop()


  }

}

I would appreciate any suggestions very much. The error is thrown even though all the libraries are added.

An example from the manuals:

val windowedCounts = words
    .withWatermark("timestamp", "10 minutes")
    .groupBy(
        window($"timestamp", "10 minutes", "5 minutes"),
        $"word")
    .count()

I would hazard a guess: put your window clause up front in the groupBy, and use $ for field names as in the examples. The root of the "cannot resolve overloaded method" error is that every overload of functions.window takes a Column as its first argument, so window("timestamp", ...) with a plain String matches none of them; $"timestamp" or col("timestamp") does. Likewise, groupBy("Parentgroup", ..., window(...)) mixes Strings with a Column, which no groupBy overload accepts, so pass Columns throughout.
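A minimal sketch of the corrected aggregation, assuming the SparkSession, imports, and JSONDF from the question. Note that the schema in the question defines Faulttime rather than timestamp, so window on whichever column actually carries the event time:

// assumes: import spark.implicits._ and import org.apache.spark.sql.functions._
val windowed = JSONDF
  .withWatermark("Faulttime", "1 minutes")
  .groupBy(
    window($"Faulttime", "5 minutes", "1 minutes"),  // first argument must be a Column
    $"Parentgroup",
    $"childgroup",
    $"MountStyle")
  .agg(countDistinct("id"))   // if Spark rejects countDistinct on a stream,
                              // approx_count_distinct("id") is the usual substitute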

val JSONDF = explodedf.withWatermark("timestamp", "1 minutes")

val aggDF = JSONDF
  .groupBy(
    functions.window(JSONDF.col("timestamp"), "30 seconds", "30 seconds"),
    JSONDF.col("jsonData.name"))
  .avg("jsonData.price")
  .alias("AveragePrice")

Try this, and thank me later.
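To actually run it, aggDF still needs a sink and a start; a minimal sketch, assuming a console sink for debugging (the sink and output mode here are illustrative, not part of the original answer):

aggDF.writeStream
  .outputMode("update")       // windowed averages are revised as new rows arrive
  .format("console")          // console sink for debugging; swap for csv/parquet plus a path in production
  .option("truncate", false)
  .start()
  .awaitTermination()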
