
Spark Streaming Kafka offset management

I have been doing Spark Streaming jobs which consume and produce data through Kafka. I used a direct DStream, so I have to manage offsets myself; we adopted Redis to write and read offsets. Now there is one problem: when I launch my client, it needs to get the offsets from Redis, not the offsets that exist in Kafka itself. How should I write my code? I have written my code below:

kafka_stream = KafkaUtils.createDirectStream(
    ssc,
    topics=[config.CONSUME_TOPIC],
    kafkaParams={"bootstrap.servers": config.CONSUME_BROKERS,
                 "auto.offset.reset": "largest"},
    fromOffsets=read_offset_range(config.OFFSET_KEY))

But I think fromOffsets is only the value (from Redis) at the moment the Spark Streaming client launches, not during its run. Thank you for helping.
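For reference, here is roughly what a Redis-backed read_offset_range could look like. The helper name and config.OFFSET_KEY come from the question above; the Redis hash layout ("topic:partition" fields mapping to saved offsets) and the connection details are assumptions for illustration, in Python 2 style to match the rest of this page:

import redis
from pyspark.streaming.kafka import TopicAndPartition

r = redis.StrictRedis(host="localhost", port=6379)

def read_offset_range(offset_key):
    # Assumed layout: a Redis hash whose fields are "topic:partition"
    # and whose values are the last saved offsets for that partition.
    from_offsets = {}
    for field, value in r.hgetall(offset_key).items():
        topic, partition = field.rsplit(":", 1)
        from_offsets[TopicAndPartition(topic, int(partition))] = long(value)
    return from_offsets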

If I understand you correctly, you need to set your offsets manually. This is how I do it:

from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from pyspark.streaming.kafka import TopicAndPartition

stream = StreamingContext(sc, 120)  # 120 second batch interval

kafkaParams = {"metadata.broker.list": "1:6667,2:6667,3:6667"}
kafkaParams["auto.offset.reset"] = "smallest"
kafkaParams["enable.auto.commit"] = "false"

topic = "xyz"
topicPartition = TopicAndPartition(topic, 0)
fromOffset = {topicPartition: long(PUT NUMERIC OFFSET HERE)}

kafka_stream = KafkaUtils.createDirectStream(stream, [topic], kafkaParams, fromOffsets=fromOffset)
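Note that fromOffsets is only read once, when the streaming context starts. To keep Redis current while the job runs, capture each batch's offset ranges and write them back after processing, following the transform/foreachRDD pattern from the Spark Streaming + Kafka integration guide. A minimal sketch; the Redis key name "my_offset_key" and the hash layout mirror the assumptions above:

import redis

r = redis.StrictRedis(host="localhost", port=6379)
offset_ranges = []

def store_offset_ranges(rdd):
    # offsetRanges() is only available on the RDDs produced directly by
    # the direct stream, so capture them before any other transformation.
    global offset_ranges
    offset_ranges = rdd.offsetRanges()
    return rdd

def save_offsets(rdd):
    # Process the batch first, then record how far we consumed so a
    # restart can resume from here via read_offset_range.
    for o in offset_ranges:
        r.hset("my_offset_key", "%s:%d" % (o.topic, o.partition), o.untilOffset)

kafka_stream.transform(store_offset_ranges).foreachRDD(save_offsets)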
