
Spark Structured Streaming does not process data after Kafka offsets expire

We have a Spark Structured Streaming app that pushes data from Kafka to S3.
The Spark job runs fine for a few days, then it starts accumulating lag. Our Kafka topic has a retention of 6 hours. If the lag grows and some of the offsets expire, Spark can no longer find those offsets and starts logging the warning below. On the surface the Spark job appears to be running, but it is NOT processing any data. When I try to manually restart the system I get GC issues (seen in the screenshot below). I have set "failOnDataLoss" to "false"; we want the system to keep processing even when offsets are not found. I do not see any error in the logs except the warning mentioned below.

[screenshot]
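For reference, the Kafka-to-S3 wiring looks roughly like the sketch below. This is a minimal outline only: the broker addresses, S3 paths, checkpoint location, and trigger interval are placeholder values, not the real ones; the topic name DataPipelineCopy is taken from the logs.

    // Minimal sketch of the Kafka -> S3 job (brokers, paths, and checkpoint are placeholders).
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger

    val spark = SparkSession.builder().appName("KafkaToS3").getOrCreate()

    // Read from the Kafka topic seen in the logs; failOnDataLoss=false so the
    // query should keep running even if offsets have already been aged out.
    val kafkaDf = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092") // placeholder brokers
      .option("subscribe", "DataPipelineCopy")
      .option("failOnDataLoss", "false")
      .load()

    // Write the raw key/value payload to S3 as Parquet (placeholder bucket/paths).
    kafkaDf.selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")
      .writeStream
      .format("parquet")
      .option("path", "s3a://my-bucket/data/")
      .option("checkpointLocation", "s3a://my-bucket/checkpoints/")
      .trigger(Trigger.ProcessingTime("1 minute"))
      .start()
      .awaitTermination()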

The only warning we see is this:

The current available offset range is AvailableOffsetRange(34066048,34444327).
 Offset 34005119 is out of range, and records in [34005119, 34006993) will be
 skipped (GroupId: spark-kafka-source-6b17001a-01ff-4c10-8877-7677cdbbecfc--1295174908-executor, TopicPartition: DataPipelineCopy-46). 
Some data may have been lost because they are not available in Kafka any more; either the
 data was aged out by Kafka or the topic may have been deleted before all the data in the
 topic was processed. If you want your streaming query to fail on such cases, set the source
 option "failOnDataLoss" to "true".
    
        
20/05/17 17:16:30 INFO Fetcher: [Consumer clientId=consumer-7, groupId=spark-kafka-source-6b17001a-01ff-4c10-8877-7677cdbbecfc--1295174908-executor] Resetting offset for partition DataPipelineCopy-1 to offset 34444906.
20/05/17 17:16:30 WARN InternalKafkaConsumer: Some data may be lost. Recovering from the earliest offset: 34068782
20/05/17 17:16:30 WARN InternalKafkaConsumer: 
The current available offset range is AvailableOffsetRange(34068782,34444906).
 Offset 34005698 is out of range, and records in [34005698, 34007572) will be
 skipped (GroupId: spark-kafka-source-6b17001a-01ff-4c10-8877-7677cdbbecfc--1295174908-executor, TopicPartition: DataPipelineCopy-1). 
Some data may have been lost because they are not available in Kafka any more; either the
 data was aged out by Kafka or the topic may have been deleted before all the data in the
 topic was processed. If you want your streaming query to fail on such cases, set the source
 option "failOnDataLoss" to "true".

    
org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {DataPipelineCopy-1=34005698}
    at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:970)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:490)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1259)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.fetchData(KafkaDataConsumer.scala:470)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.org$apache$spark$sql$kafka010$InternalKafkaConsumer$$fetchRecord(KafkaDataConsumer.scala:361)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer$$anonfun$get$1.apply(KafkaDataConsumer.scala:251)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer$$anonfun$get$1.apply(KafkaDataConsumer.scala:234)
    at org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:77)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.runUninterruptiblyIfPossible(KafkaDataConsumer.scala:209)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.get(KafkaDataConsumer.scala:234)
    at org.apache.spark.sql.kafka010.KafkaDataConsumer$class.get(KafkaDataConsumer.scala:64)
    at org.apache.spark.sql.kafka010.KafkaDataConsumer$CachedKafkaDataConsumer.get(KafkaDataConsumer.scala:500)
    at org.apache.spark.sql.kafka010.KafkaMicroBatchInputPartitionReader.next(KafkaMicroBatchReader.scala:357)
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:49)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:216)
    at org.apache.spark.sql.execution.SortExec$$anonfun$1.apply(SortExec.scala:108)
    at org.apache.spark.sql.execution.SortExec$$anonfun$1.apply(SortExec.scala:101)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/05/17 17:16:30 WARN ConsumerConfig: The configuration 'consumer.commit.groupid' was supplied but isn't a known config.
20/05/17 17:16:30 INFO AppInfoParser: Kafka version : 2.0.0

Before the failure above, the system seems to be working just fine, BUT IT IS NOT PROCESSING ANY NEW DATA FROM KAFKA.

[screenshot]


It seems the records were made "invisible" (aged out by retention) before your app (the Kafka consumer) could process them, as discussed in What determines Kafka consumer offset?

My solutions:

1. Create a new consumer group and restart your app (with the Kafka consumer offset policy set to earliest at first); see the sketch below.
2. If step 1 doesn't work, increase the Kafka log retention window (broker parameters: log.retention.hours, log.retention.ms, or log.cleaner.delete.retention.ms, depending on your production environment).

Step 2 worked fine for me.
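For step 1, note that Structured Streaming derives its own consumer group id per query (the spark-kafka-source-... group visible in the logs), so in practice a "new consumer group" means restarting the query against a fresh checkpoint location with startingOffsets set to earliest. A minimal sketch under those assumptions (broker addresses, paths, and the new checkpoint directory are placeholders):

    // Sketch of step 1: point the query at a NEW checkpoint location so Spark
    // creates a fresh query (and consumer group) that begins from the earliest retained offsets.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("KafkaToS3Restart").getOrCreate()

    val kafkaDf = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")   // placeholder brokers
      .option("subscribe", "DataPipelineCopy")
      .option("startingOffsets", "earliest")               // only honored for a brand-new query/checkpoint
      .option("failOnDataLoss", "false")
      .load()

    kafkaDf.writeStream
      .format("parquet")
      .option("path", "s3a://my-bucket/data/")
      .option("checkpointLocation", "s3a://my-bucket/checkpoints-v2/") // NEW checkpoint directory
      .start()
      .awaitTermination()

Step 2 (raising log.retention.hours / log.retention.ms on the brokers) is a Kafka configuration change rather than application code, so it is not shown here.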
