Spark Structured Streaming not processing data after Kafka offsets expire

We have a Spark Structured Streaming application that pushes data from Kafka to S3.

The Spark job runs fine for a few days and then starts to accumulate lag. Our Kafka topic has a retention period of 6 hours. If the lag grows and some offsets start to expire, Spark cannot find those offsets and starts logging the warning below. On the surface the Spark job appears to be running, but it is not processing any data. When I try to restart the system manually, I run into GC issues (see the screenshot below). I have set "failOnDataLoss" to "false". We want the system to keep processing even when offsets are not found. Apart from the warning mentioned below, I do not see any errors in the logs.

[screenshot]
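For context, here is a minimal sketch of the kind of query described above. The broker address, S3 paths, and overall structure are illustrative assumptions; only the topic name and the failOnDataLoss setting come from the question and the logs.

    import org.apache.spark.sql.SparkSession

    object KafkaToS3 {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("kafka-to-s3").getOrCreate()

        val stream = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")   // assumed broker address
          .option("subscribe", "DataPipelineCopy")              // topic name seen in the logs
          .option("startingOffsets", "latest")
          // Keep the query running instead of failing when offsets have aged out of the topic.
          .option("failOnDataLoss", "false")
          .load()

        stream
          .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
          .writeStream
          .format("parquet")
          .option("path", "s3a://my-bucket/data/")                       // assumed S3 output path
          .option("checkpointLocation", "s3a://my-bucket/checkpoints/")  // assumed checkpoint path
          .start()
          .awaitTermination()
      }
    }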

The only warning we see is:

The current available offset range is AvailableOffsetRange(34066048,34444327).
 Offset 34005119 is out of range, and records in [34005119, 34006993) will be
 skipped (GroupId: spark-kafka-source-6b17001a-01ff-4c10-8877-7677cdbbecfc--1295174908-executor, TopicPartition: DataPipelineCopy-46). 
Some data may have been lost because they are not available in Kafka any more; either the
 data was aged out by Kafka or the topic may have been deleted before all the data in the
 topic was processed. If you want your streaming query to fail on such cases, set the source
 option "failOnDataLoss" to "true".
    
        
20/05/17 17:16:30 INFO Fetcher: [Consumer clientId=consumer-7, groupId=spark-kafka-source-6b17001a-01ff-4c10-8877-7677cdbbecfc--1295174908-executor] Resetting offset for partition DataPipelineCopy-1 to offset 34444906.
20/05/17 17:16:30 WARN InternalKafkaConsumer: Some data may be lost. Recovering from the earliest offset: 34068782
20/05/17 17:16:30 WARN InternalKafkaConsumer: 
The current available offset range is AvailableOffsetRange(34068782,34444906).
 Offset 34005698 is out of range, and records in [34005698, 34007572) will be
 skipped (GroupId: spark-kafka-source-6b17001a-01ff-4c10-8877-7677cdbbecfc--1295174908-executor, TopicPartition: DataPipelineCopy-1). 
Some data may have been lost because they are not available in Kafka any more; either the
 data was aged out by Kafka or the topic may have been deleted before all the data in the
 topic was processed. If you want your streaming query to fail on such cases, set the source
 option "failOnDataLoss" to "true".

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {DataPipelineCopy-1=34005698}
    at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:970)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:490)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1259)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.fetchData(KafkaDataConsumer.scala:470)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.org$apache$spark$sql$kafka010$InternalKafkaConsumer$$fetchRecord(KafkaDataConsumer.scala:361)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer$$anonfun$get$1.apply(KafkaDataConsumer.scala:251)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer$$anonfun$get$1.apply(KafkaDataConsumer.scala:234)
    at org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:77)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.runUninterruptiblyIfPossible(KafkaDataConsumer.scala:209)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.get(KafkaDataConsumer.scala:234)
    at org.apache.spark.sql.kafka010.KafkaDataConsumer$class.get(KafkaDataConsumer.scala:64)
    at org.apache.spark.sql.kafka010.KafkaDataConsumer$CachedKafkaDataConsumer.get(KafkaDataConsumer.scala:500)
    at org.apache.spark.sql.kafka010.KafkaMicroBatchInputPartitionReader.next(KafkaMicroBatchReader.scala:357)
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:49)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:216)
    at org.apache.spark.sql.execution.SortExec$$anonfun$1.apply(SortExec.scala:108)
    at org.apache.spark.sql.execution.SortExec$$anonfun$1.apply(SortExec.scala:101)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/05/17 17:16:30 WARN ConsumerConfig: The configuration 'consumer.commit.groupid' was supplied but isn't a known config.
20/05/17 17:16:30 INFO AppInfoParser: Kafka version : 2.0.0

Before the failure above, the system appeared to be working fine, but it was not processing any new data from Kafka.

[screenshot]


It seems those records were aged out (made "invisible") before your application (the Kafka consumer) had processed them. As mentioned in "What determines Kafka consumer offset?".

My solution:

1. Create a new consumer group and restart your application (with the Kafka consumer offset reset policy set to earliest). A sketch of what this looks like for a Structured Streaming query follows below.
2. If step 1 does not work, increase the Kafka log retention window (broker parameters: log.retention.hours, log.retention.ms, or log.cleaner.delete.retention.ms, depending on your production environment).
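A hedged sketch of step 1 in Structured Streaming terms (spark-shell style). The Kafka source manages its own consumer group and tracks progress in the query checkpoint, so the practical equivalent of "a new consumer group reading from earliest" is a fresh checkpoint location plus startingOffsets = "earliest". The broker address and the s3a paths are assumptions.

    import org.apache.spark.sql.SparkSession

    // Assumed session; in the real job this already exists.
    val spark = SparkSession.builder().appName("kafka-to-s3-restart").getOrCreate()

    val restarted = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")  // assumed broker address
      .option("subscribe", "DataPipelineCopy")
      .option("startingOffsets", "earliest")              // begin at the earliest offsets still retained
      .option("failOnDataLoss", "false")
      .load()

    // Same sink as before, but with a *new* checkpoint location so the query does not
    // try to resume from offsets that have already aged out of the topic.
    restarted.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .writeStream
      .format("parquet")
      .option("path", "s3a://my-bucket/data/")                         // assumed output path
      .option("checkpointLocation", "s3a://my-bucket/checkpoints-v2/") // fresh checkpoint
      .start()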

Step 2 worked fine for me.
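For step 2, the answer refers to the broker-level retention settings. As a hedged, related alternative scoped to just this topic, the topic-level retention.ms can be raised through the Kafka AdminClient; the broker address and the 24-hour value below are assumptions, and the same change can also be made with the kafka-configs tool.

    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, Config, ConfigEntry}
    import org.apache.kafka.common.config.ConfigResource

    object RaiseTopicRetention {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092") // assumed broker address

        val admin = AdminClient.create(props)
        try {
          val topic = new ConfigResource(ConfigResource.Type.TOPIC, "DataPipelineCopy")
          // Keep records for 24h instead of 6h so a lagging consumer can catch up.
          val retention = new Config(
            Collections.singletonList(new ConfigEntry("retention.ms", "86400000")))
          // Note: the legacy alterConfigs call replaces the topic's existing overrides;
          // on Kafka 2.3+ clients prefer incrementalAlterConfigs.
          admin.alterConfigs(Collections.singletonMap(topic, retention)).all().get()
        } finally {
          admin.close()
        }
      }
    }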
