
Strimzi Kafka + Structured Streaming - Offsets not available (is this because of segment.bytes)?

I have a long-running Apache Spark Structured Streaming job in GCP Dataproc that reads data from Kafka every 10 minutes and does some processing. The Kafka topic has 3 partitions and a retention period of 3 days.

The issue I'm facing is that after a few hours the program stops reading data from Kafka. If I delete the GCS bucket (which is the checkpoint directory) and then restart the streaming job, it starts consuming data again.

Here is the code where I read data from Kafka and use foreachBatch to call the function where the processing happens:


df_stream = spark.readStream.format('kafka') \
        .option("kafka.security.protocol", "SSL") \
        .option("kafka.ssl.truststore.location", ssl_truststore_location) \
        .option("kafka.ssl.truststore.password", ssl_truststore_password) \
        .option("kafka.ssl.keystore.location", ssl_keystore_location) \
        .option("kafka.ssl.keystore.password", ssl_keystore_password) \
        .option("kafka.bootstrap.servers", kafkaBrokers) \
        .option("subscribe", topic) \
        .option("startingOffsets", "latest") \
        .option("failOnDataLoss", "false") \
        .option("kafka.metadata.max.age.ms", "1000") \
        .option("kafka.ssl.keystore.type", "PKCS12") \
        .option("kafka.ssl.truststore.type", "PKCS12") \
        .option("maxOffsetsPerTrigger", 100000) \
        .option("max.poll.records", 500) \
        .option("max.poll.interval.ms", 1000000) \
        .load()



query = df_stream.selectExpr("CAST(value AS STRING)", "timestamp", "topic").writeStream \
        .outputMode("append") \
        .trigger(processingTime='3 minutes') \
        .option("truncate", "false") \
        .option("checkpointLocation", checkpoint) \
        .foreachBatch(convertToDictForEachBatch) \
        .start()
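
The convertToDictForEachBatch function is not shown above; the sketch below is a hypothetical reconstruction based only on the (df, batch_id) signature that foreachBatch passes to the callback and on the print output that appears in the log further down. The real function does additional processing.

from datetime import datetime

def convertToDictForEachBatch(df, batch_id):
    # df is the micro-batch DataFrame for this trigger, batch_id its sequence number
    print(" IN CONVERT TO DICT ", batch_id, " currentTime ", datetime.now(), " df -> ", df)
    # a count of 0 here matches the empty batch shown in the log below
    print(" before adding topic, count in batch ", df.count())
    df.show(truncate=False)
    # ... actual processing of the batch happens here ...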

Here is a snippet of the log, where the consumer leaves the group because of a poll timeout and the next micro-batch comes back empty:

 total time take, convertToDict :  0:00:00.006695
22/08/26 17:45:03 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Member consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1-26755c4c-93d6-4ab6-8799-411439e310bc sending LeaveGroup request to coordinator 35.185.24.226:9094 (id: 2147483646 rack: null) due to consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
22/08/26 17:50:00 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Giving away all assigned partitions as lost since generation has been reset,indicating that consumer is no longer part of the group
22/08/26 17:50:00 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Lost previously assigned partitions syslog.ueba-us4.v1.versa.demo3-0, syslog.ueba-us4.v1.versa.demo3-1, syslog.ueba-us4.v1.versa.demo3-2
22/08/26 17:50:00 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] (Re-)joining group
22/08/26 17:50:00 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group.
22/08/26 17:50:00 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] (Re-)joining group
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Finished assignment for group at generation 1: {consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1-307570e0-ca20-42d5-b4d1-255da4fca485=Assignment(partitions=[syslog.ueba-us4.v1.versa.demo3-0, syslog.ueba-us4.v1.versa.demo3-1, syslog.ueba-us4.v1.versa.demo3-2])}
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Successfully joined group with generation 1
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Notifying assignor about the new Assignment(partitions=[syslog.ueba-us4.v1.versa.demo3-0, syslog.ueba-us4.v1.versa.demo3-1, syslog.ueba-us4.v1.versa.demo3-2])
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Adding newly assigned partitions: syslog.ueba-us4.v1.versa.demo3-0, syslog.ueba-us4.v1.versa.demo3-1, syslog.ueba-us4.v1.versa.demo3-2
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Found no committed offset for partition syslog.ueba-us4.v1.versa.demo3-0
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Found no committed offset for partition syslog.ueba-us4.v1.versa.demo3-1
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Found no committed offset for partition syslog.ueba-us4.v1.versa.demo3-2
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Resetting offset for partition syslog.ueba-us4.v1.versa.demo3-1 to offset 247969068.
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Resetting offset for partition syslog.ueba-us4.v1.versa.demo3-2 to offset 246383018.
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Resetting offset for partition syslog.ueba-us4.v1.versa.demo3-0 to offset 248913006.
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Seeking to LATEST offset of partition syslog.ueba-us4.v1.versa.demo3-0
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Seeking to LATEST offset of partition syslog.ueba-us4.v1.versa.demo3-1
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Seeking to LATEST offset of partition syslog.ueba-us4.v1.versa.demo3-2
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Resetting offset for partition syslog.ueba-us4.v1.versa.demo3-2 to offset 248534038.
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Resetting offset for partition syslog.ueba-us4.v1.versa.demo3-1 to offset 248185455.
22/08/26 17:50:03 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState: [Consumer clientId=consumer-spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0-1, groupId=spark-kafka-source-a1532573-fad4-4127-ac20-ab9878913643-594190416-driver-0] Resetting offset for partition syslog.ueba-us4.v1.versa.demo3-0 to offset 248990456.
 IN CONVERT TO DICT  84  currentTime  2022-08-26 17:50:03.995993  df ->  DataFrame[value: string, timestamp: timestamp, topic: string]
 before adding topic, count in batch  0
+-----+---------+-----+
|value|timestamp|topic|
+-----+---------+-----+
+-----+---------+-----+

Here are the oldest and latest offsets in the Kafka topic (from Prometheus):

Current Offset (metric : kafka_topic_partition_current_offset)
partition 0 : 249185343
partition 1 : 248380971
partition 2 : 248728475

Oldest Offset (metric : kafka_topic_partition_oldest_offset)
partition 0 : 248913006
partition 1 : 247969068
partition 2 : 248541752

What I see in the checkpoint bucket:

(base) Karans-MacBook-Pro:prometheus-yamls karanalang$ gsutil cat gs://ss-checkpoint-10m-noconsumergrp/offsets/96 
v1
{"batchWatermarkMs":0,"batchTimestampMs":1661538780007,"conf":{"spark.sql.streaming.stateStore.providerClass":"org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider","spark.sql.streaming.join.stateFormatVersion":"2","spark.sql.streaming.stateStore.compression.codec":"lz4","spark.sql.streaming.flatMapGroupsWithState.stateFormatVersion":"2","spark.sql.streaming.multipleWatermarkPolicy":"min","spark.sql.streaming.aggregation.stateFormatVersion":"2","spark.sql.shuffle.partitions":"200"}}
{"syslog.ueba-us4.v1.versa.demo3":{"2":246124174,"1":245765547,"0":246582707}}

The offsets in the checkpoint are lower than the oldest offsets in the topic; is that the reason the data is not being read? What needs to be done to resolve this issue?
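
Working out the gap from the numbers above (committed offset in the checkpoint vs. the oldest offset still available on the broker):

partition 0 : 246582707 vs 248913006 (checkpoint is ~2,330,299 records behind)
partition 1 : 245765547 vs 247969068 (~2,203,521 records behind)
partition 2 : 246124174 vs 248541752 (~2,417,578 records behind)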

TIA!

Also, here is the YAML used to create the Strimzi Kafka topic:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1beta2","kind":"KafkaTopic","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"versa-kafka-gke"},"name":"syslog.ueba-us4.v1.versa.demo3","namespace":"kafka"},"spec":{"config":{"retention.ms":259200,"segment.bytes":1073741824},"partitions":3,"replicas":3}}
  creationTimestamp: "2022-07-24T18:11:32Z"
  generation: 3
  labels:
    strimzi.io/cluster: versa-kafka-gke
  name: syslog.ueba-us4.v1.versa.demo3
  namespace: kafka
  resourceVersion: "140557580"
  uid: 26c32f6f-5e4f-48ae-8026-007ab59624ec
spec:
  config:
    retention.ms: 259200
    segment.bytes: 1073741824
  partitions: 3
  replicas: 3
status:
  conditions:
  - lastTransitionTime: "2022-08-21T17:25:44.833287Z"
    status: "True"
    type: Ready
  observedGeneration: 3
  topicName: syslog.ueba-us4.v1.versa.demo3

segment.bytes = 1073741824 (i.e. ~1 GB); is this causing the older data to become unavailable in Kafka?

The root cause turned out to be a configuration issue: retention.ms was set to a very low value (259200, i.e. about 4.3 minutes, instead of 259200000), so records were deleted from the topic almost immediately. Setting it to the millisecond equivalent of 3 days resolved the issue.
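
For reference, the fix in the KafkaTopic spec (assuming the same resource shown above) only changes the retention.ms value; 3 days = 3 × 24 × 60 × 60 × 1000 = 259200000 ms:

spec:
  config:
    retention.ms: 259200000   # 3 days in milliseconds (was 259200, i.e. ~4.3 minutes)
    segment.bytes: 1073741824
  partitions: 3
  replicas: 3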
