
Spark Kafka Streaming multi partition CommitAsync issue

I am reading messages from a Kafka topic that has multiple partitions. Reading the messages works fine, but when I commit the offset ranges back to Kafka I get an error. I have tried my best to debug this but could not resolve it.

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object ParallelStreamJob {

  def main(args: Array[String]): Unit = {
    val spark = SparkHelper.getOrCreateSparkSession()
    val ssc = new StreamingContext(spark.sparkContext, Seconds(10))
    spark.sparkContext.setLogLevel("WARN")
    val kafkaStream = {
      val kafkaParams = Map[String, Object](
        "bootstrap.servers" -> "localhost:9092",
        "key.deserializer" -> classOf[StringDeserializer],
        "value.deserializer" -> classOf[StringDeserializer],
        "group.id" -> "welcome3",

        "auto.offset.reset" -> "latest",
        "enable.auto.commit" -> (false: java.lang.Boolean)
      )

      val topics = Array("test2")
      val numPartitionsOfInputTopic = 2
      val streams = (1 to numPartitionsOfInputTopic) map {
        _ => KafkaUtils.createDirectStream[String, String]( ssc, PreferConsistent, Subscribe[String, String](topics, kafkaParams) )
      }
      streams
    }

   // var offsetRanges = Array[OffsetRange]()
    kafkaStream.foreach(rdd=> {
      rdd.foreachRDD(conRec=> {
        val offsetRanges = conRec.asInstanceOf[HasOffsetRanges].offsetRanges
        conRec.foreach(str=> {
          println(str.value())
          for (o <- offsetRanges) {
            println(s"${o.topic} ${o.partition} ${o.fromOffset} ${o.untilOffset}")
          }
        })

          kafkaStream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
      })

    })

    println(" Spark parallel reader is ready !!!")


    ssc.start()
    ssc.awaitTermination()
  }
}

Error

18/03/19 21:21:30 ERROR JobScheduler: Error running job streaming job 1521512490000 ms.0
java.lang.ClassCastException: scala.collection.immutable.Vector cannot be cast to org.apache.spark.streaming.kafka010.CanCommitOffsets
    at com.cts.ignite.inventory.core.ParallelStreamJob$$anonfun$main$1$$anonfun$apply$1.apply(ParallelStreamJob.scala:48)
    at com.cts.ignite.inventory.core.ParallelStreamJob$$anonfun$main$1$$anonfun$apply$1.apply(ParallelStreamJob.scala:39)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
    at org.a

You can commit the offsets like this:

stream.foreachRDD { rdd =>
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  // some time later, after outputs have completed
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}

In your case, kafkaStream is a Seq of streams, so change your commit line. Reference: https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html

Change the line kafkaStream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges) to rdd.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
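Applied to the question's code, the corrected loop would look roughly like the sketch below. Note that the outer loop variable in the question is named rdd even though it is actually a DStream; it is renamed to stream here for clarity, but the variable names are otherwise taken from the question:

```scala
// Sketch of the corrected commit logic (names adapted from the question).
kafkaStream.foreach(stream => {          // stream: an individual DStream
  stream.foreachRDD(conRec => {          // conRec: the RDD for this batch
    val offsetRanges = conRec.asInstanceOf[HasOffsetRanges].offsetRanges
    conRec.foreach(str => println(str.value()))
    // Commit on the individual stream, which implements CanCommitOffsets.
    // The Seq returned by the map over partitions does not, hence the
    // ClassCastException in the original code.
    stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
  })
})
```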
