Creating a JavaPairRDD using KafkaUtils.createRDD (Spark and Kafka)
I am writing a batch job to replay events from Kafka, using Kafka v0.10.1.0 and Spark 1.6.
I am trying to use the JavaPairRDD javaPairRDD = KafkaUtils.createRDD(...) call:
Properties configProperties = new Properties();
configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.4.1.194:9092");
configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
configProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
Producer<String, String> producer = new KafkaProducer<>(configProperties);

// Only one SparkContext may exist per JVM, so create it (and the shared
// Kafka params) once, outside the loops.
JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);
Map<String, String> kafkaParams = new HashMap<>();
kafkaParams.put("metadata.broker.list", "10.4.1.194:9092");
kafkaParams.put("zookeeper.connect", "10.4.1.194:2181");
kafkaParams.put("group.id", "kafka-replay");

for (String topic : topicNames) {
    List<PartitionInfo> partitionInfos = producer.partitionsFor(topic);
    for (PartitionInfo partitionInfo : partitionInfos) {
        log.debug("partition leader id: {}", partitionInfo.leader().id());
        OffsetRange[] offsetRanges = new OffsetRange[]{
                OffsetRange.create(topic, partitionInfo.partition(), 0, Long.MAX_VALUE)};
        JavaPairRDD<String, String> javaPairRDD = KafkaUtils.createRDD(
                sparkContext,
                String.class,
                String.class,
                StringDecoder.class,
                StringDecoder.class,
                kafkaParams,
                offsetRanges);
        javaPairRDD
                .map(t -> getInstrEvent(t._2))
                .filter(ie -> startTimestamp <= ie.getTimestamp() && ie.getTimestamp() <= endTimestamp)
                .foreach(s -> System.out.println(s));
    }
}
However, it fails with the following error:
2016-12-14 15:45:44,700 [main] ERROR com.goldenrat.analytics.KafkaToHdfsReplayMain - error
org.apache.spark.SparkException: Offsets not available on leader: OffsetRange(topic: 'sfs_create_room', partition: 0, range: [1 -> 100])
at org.apache.spark.streaming.kafka.KafkaUtils$.org$apache$spark$streaming$kafka$KafkaUtils$$checkOffsets(KafkaUtils.scala:200)
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createRDD$1.apply(KafkaUtils.scala:253)
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createRDD$1.apply(KafkaUtils.scala:249)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
at org.apache.spark.streaming.kafka.KafkaUtils$.createRDD(KafkaUtils.scala:249)
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createRDD$3.apply(KafkaUtils.scala:338)
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createRDD$3.apply(KafkaUtils.scala:333)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
at org.apache.spark.streaming.kafka.KafkaUtils$.createRDD(KafkaUtils.scala:333)
at org.apache.spark.streaming.kafka.KafkaUtils.createRDD(KafkaUtils.scala)
at com.goldenrat.analytics.KafkaToHdfsReplayMain$KafkaToHdfsReplayJob.start(KafkaToHdfsReplayMain.java:172)
I can connect to the broker and fetch messages with other clients, so I know the problem is not the broker. Any help?
It looks like you cannot specify offsets for the range that do not exist. I had hoped that specifying 0 to Long.MAX_VALUE would fetch all offsets, but it fails with that error message when the offsets are invalid. If I specify valid (minimum/maximum) offsets for the range, it does work. For anyone who stumbles on this, you can obtain them like this:
Properties configProperties = new Properties();
configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.4.1.194:9092");
configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
configProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
Producer<String, String> producer = new KafkaProducer<>(configProperties);

for (String topic : topicNames) {
    log.debug("doing topic: {}", topic);
    List<PartitionInfo> partitionInfos = producer.partitionsFor(topic);
    for (PartitionInfo partitionInfo : partitionInfos) {
        TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partitionInfo.partition());
        SimpleConsumer consumer = new SimpleConsumer("10.4.1.194", 9092, 10000, 64 * 1024, "kafka-replay");

        // Ask the broker for the earliest available offset of this partition.
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<>();
        requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.EarliestTime(), 1));
        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), "kafka-replay");
        OffsetResponse response = consumer.getOffsetsBefore(request);
        if (response.hasError()) {
            log.error("error, " + response.errorCode(topic, partitionInfo.partition()));
        }
        long[] earliestOffsetsArray = response.offsets(topic, partitionInfo.partition());

        // Ask the broker for the latest available offset of this partition.
        requestInfo = new HashMap<>();
        requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));
        request = new kafka.javaapi.OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), "kafka-replay");
        response = consumer.getOffsetsBefore(request);
        if (response.hasError()) {
            log.error("error, " + response.errorCode(topic, partitionInfo.partition()));
        }
        long[] latestOffsetsArray = response.offsets(topic, partitionInfo.partition());
        ...
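To actually build the OffsetRange from those results, the requested replay window has to be clamped into the broker's valid range first. Here is a minimal sketch of that step; OffsetClamp is a hypothetical helper name (not from the original post), and it assumes earliestOffsetsArray[0] and latestOffsetsArray[0] hold the single offsets requested above:

```java
// Hypothetical helper: clamp a requested replay window into the offsets
// that actually exist on the broker, so KafkaUtils.createRDD does not
// reject the range with "Offsets not available on leader".
public class OffsetClamp {

    // Returns {fromOffset, untilOffset} clamped into [earliest, latest].
    // A window entirely outside the valid range collapses to an empty range.
    public static long[] clamp(long requestedFrom, long requestedUntil,
                               long earliest, long latest) {
        long from = Math.min(Math.max(requestedFrom, earliest), latest);
        long until = Math.max(Math.min(requestedUntil, latest), from);
        return new long[]{from, until};
    }
}
```

The clamped pair can then be passed to OffsetRange.create(topic, partitionInfo.partition(), from, until) in place of the 0/Long.MAX_VALUE range that triggered the error.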