
Read from Kafka in a Spark batch job (fromOffset, untilOffset)

I saw on this question that we can read messages from Kafka in a Spark batch job using org.apache.spark.streaming.kafka.KafkaUtils#createRDD, but this method requires an offset range, which needs a "from offset" and an "until offset". I'm getting the "from offset" from the org.apache.spark.streaming.kafka.KafkaCluster#getLatestLeaderOffsets method, but how can I get the "until offset"? I'm using kafka-2.1.1-0.9.0.1.
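To make the setup concrete, here is a minimal Scala sketch of such a batch read with the spark-streaming-kafka 0.8-style API, assuming KafkaCluster and its offset lookups are accessible in your build (as in the question). readWholeTopic is a hypothetical helper name, getEarliestLeaderOffsets / getLatestLeaderOffsets are used for the two ends of the range, and error handling is collapsed to .right.get:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkContext
import org.apache.spark.streaming.kafka.{KafkaCluster, KafkaUtils, OffsetRange}

// Build OffsetRanges covering everything currently in the topic and read them as one RDD.
def readWholeTopic(sc: SparkContext, brokers: String, topic: String) = {
  val kafkaParams = Map("metadata.broker.list" -> brokers)
  val kc = new KafkaCluster(kafkaParams)

  val partitions = kc.getPartitions(Set(topic)).right.get
  val from  = kc.getEarliestLeaderOffsets(partitions).right.get  // "from offset" per partition
  val until = kc.getLatestLeaderOffsets(partitions).right.get    // "until offset" per partition

  val offsetRanges = partitions.toArray.map { tp =>
    OffsetRange(tp.topic, tp.partition, from(tp).offset, until(tp).offset)
  }

  KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](sc, kafkaParams, offsetRanges)
}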

You can use GetOffsetShell to fetch the latest offset for any topic:

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic myTopic --time -1

This will return:

myTopic:12341:47841

which means 47841 is the latest offset for the topic myTopic.
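The output format is topic:partition:offset, so the middle field (12341 in the example) is the partition, and passing --time -2 instead of -1 prints the earliest offsets, which can serve as the "from" side. As a rough sketch of how the printed number could then be fed into createRDD in the spark-shell (assuming here, for illustration only, that you want the whole partition starting from offset 0):

import kafka.serializer.StringDecoder
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.kafka.{KafkaUtils, OffsetRange}

val sc = new SparkContext(new SparkConf().setAppName("kafka-batch-read"))

// topic myTopic, partition 12341, messages with offsets in [0, 47841)
val offsetRanges = Array(OffsetRange("myTopic", 12341, 0L, 47841L))

val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
  sc, Map("metadata.broker.list" -> "localhost:9092"), offsetRanges)

println(rdd.count())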
