
Got NullPointerException when using Spark Streaming to consume Kafka messages

I'm working on some code that uses Kafka and Spark Streaming. When I deploy it to a Yarn cluster, it reports a NullPointerException.

But it works well on my computer (standalone mode).

So what's wrong with it?

// Here is the code

import java.util.Properties

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.log4j.Logger
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DealLog extends App {

  val spark=SparkSession.builder().appName(" DealLog").getOrCreate()
  val sc = spark.sparkContext
  val ssc: StreamingContext= new StreamingContext(sc, Seconds(3))

  val log = Logger.getLogger(this.getClass)
  val pro = new Properties()
  val in = Thread.currentThread().getContextClassLoader.getResourceAsStream("config.properties")
  pro.load(in)
  //  ssc.checkpoint("hdfs://192.168.0.240:8022/bigdata/checkpoint2")
  val bootstrap=pro.getProperty("kafka.brokers")
  val kafkaParams = Map[String, Object]("bootstrap.servers" -> bootstrap,
    "key.deserializer" -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id" -> "userlabel",
    "auto.offset.reset" -> "latest",
    "enable.auto.commit" -> (true: java.lang.Boolean)
  )
  val topicsSet = Array(pro.getProperty("kafkaconsume.topic"))
  val ds = KafkaUtils.createDirectStream[String,String](
    ssc,
    PreferConsistent,
    Subscribe[String,String](topicsSet,kafkaParams)
  ).map(s=>{(s.value())})

  ds.foreachRDD(p=>{
    log.info("ds.foreachRdd p=="+ p)
    p.foreachPartition(per=>{
      log.info("per-------"+per)
      per.foreach(rdd=> {
        log.info("rdd---------"+ rdd)
        if(rdd.isEmpty){
          log.info("null ")
        }
        else{
          log.info("not null..")
        }
        log.info("complete")

      })
    })
  })
  ssc.start()
  ssc.awaitTermination()
}

------------------------ Exception here ------------------------

19/07/26 18:21:56 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, cdh102, executor 2): java.lang.NullPointerException
    at Recommend.DealLog$$anonfun$2$$anonfun$apply$1.apply(DealLog.scala:42)
    at Recommend.DealLog$$anonfun$2$$anonfun$apply$1.apply(DealLog.scala:41)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

19/07/26 18:21:56 INFO scheduler.TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1, cdh102, executor 2, partition 0, PROCESS_LOCAL, 4706 bytes)
19/07/26 18:21:56 INFO scheduler.TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on cdh102, executor 2: java.lang.NullPointerException (null) [duplicate 1]
19/07/26 18:21:56 INFO scheduler.TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2, cdh102, executor 2, partition 0, PROCESS_LOCAL, 4706 bytes)
19/07/26 18:21:56 INFO scheduler.TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on cdh102, executor 2: java.lang.NullPointerException (null) [duplicate 2]
19/07/26 18:21:56 INFO scheduler.TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3, cdh102, executor 2, partition 0, PROCESS_LOCAL, 4706 bytes)
19/07/26 18:21:56 INFO scheduler.TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3) on cdh102, executor 2: java.lang.NullPointerException (null) [duplicate 3]
19/07/26 18:21:56 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
19/07/26 18:21:56 INFO cluster.YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/07/26 18:21:56 INFO cluster.YarnClusterScheduler: Cancelling stage 0
19/07/26 18:21:56 INFO scheduler.DAGScheduler: ResultStage 0 (foreachPartition at DealLog.scala:41) failed in 1.092 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, cdh102, executor 2): java.lang.NullPointerException
    at Recommend.DealLog$$anonfun$2$$anonfun$apply$1.apply(DealLog.scala:42)
    at Recommend.DealLog$$anonfun$2$$anonfun$apply$1.apply(DealLog.scala:41)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

I think your issue might be coming from this line:

if(rdd.isEmpty)

because of the way you wrote your code, that isn't actually an RDD. After you call foreachPartition you get the iterator for that partition, and when you call foreach on that iterator you are accessing the actual records in that partition. So what you're dealing with on that line is a single record coming from the DStream, and you might be calling .isEmpty on a null string/value, which throws that exception.
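As a minimal sketch (using the same ds stream and the question's own parameter names, with explicit types added only for illustration), this is what each closure parameter really is:

ds.foreachRDD((p: org.apache.spark.rdd.RDD[String]) => {   // p is the RDD for one micro-batch
  p.foreachPartition((per: Iterator[String]) => {          // per is that partition's iterator
    per.foreach((rdd: String) => {                         // "rdd" here is just one message value,
      // which may be null, so calling rdd.isEmpty on it throws the NullPointerException
    })
  })
})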

You could replace .isEmpty with

if(record == null)

but you don't have to do that. You can just check whether the RDD itself is empty. Can you try the below instead?

import java.util.Properties

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.log4j.Logger
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DealLog extends App {

  val spark = SparkSession.builder().appName(" DealLog").getOrCreate()
  val sc = spark.sparkContext
  val ssc: StreamingContext = new StreamingContext(sc, Seconds(3))

  val log = Logger.getLogger(this.getClass)
  val pro = new Properties()
  val in = Thread.currentThread().getContextClassLoader.getResourceAsStream("config.properties")
  pro.load(in)
  //  ssc.checkpoint("hdfs://192.168.0.240:8022/bigdata/checkpoint2")
  val bootstrap = pro.getProperty("kafka.brokers")
  val kafkaParams = Map[String, Object]("bootstrap.servers" -> bootstrap,
    "key.deserializer" -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id" -> "userlabel",
    "auto.offset.reset" -> "latest",
    "enable.auto.commit" -> (true: java.lang.Boolean)
  )
  val topicsSet = Array(pro.getProperty("kafkaconsume.topic"))
  val ds = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](topicsSet, kafkaParams)
  ).map(s => {
    (s.value())
  })

  ds.foreachRDD(rdd => {
    log.info("ds.foreachRdd p==" + rdd)
    if (!rdd.isEmpty) {
      rdd.foreachPartition(partition => {
        log.info("per-------" + partition)
        partition.foreach(record => {
          log.info("record---------" + record)
        })
      })
    } else log.info("rdd was empty")

    log.info("complete")
  })
  ssc.start()
  ssc.awaitTermination()
  ssc.stop()
}
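If you'd rather keep the per-record handling from the original loop, a minimal sketch of the null guard mentioned above (assuming the same ds stream and log as before) could look like this:

ds.foreachRDD(rdd => {
  rdd.foreachPartition(partition => {
    partition.foreach(record => {
      // record is a single Kafka message value and may be null,
      // so guard before calling any method on it
      if (record == null || record.isEmpty) log.info("null or empty record")
      else log.info("record---------" + record)
    })
  })
})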
