
Why do I get kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker?

When I run this code I get the following error. I checked the other answer, but it didn't work for me.

Does anybody know how to solve this? I checked the dependencies.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import java.util.*;


/**
 * Created by jonas on 10/10/16.
 */
public class SparkStream {

    public static void main(String[] args) throws InterruptedException {

        SparkConf conf = new SparkConf()
                .setAppName("kafka-sandbox")
                .setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(2000));

        // Kafka broker(s) to bootstrap from and the topic to subscribe to
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "localhost:9092");
        Set<String> topics = Collections.singleton("Test");

        JavaPairInputDStream<String, String> directKafkaStream = KafkaUtils.createDirectStream(
                ssc,
                String.class,
                String.class,
                kafka.serializer.StringDecoder.class,
                kafka.serializer.StringDecoder.class,
                kafkaParams,
                topics);

        directKafkaStream.foreachRDD(rdd -> {
            System.out.println("--- New RDD with " + rdd.partitions().size()
                    + " partitions and " + rdd.count() + " records");
            rdd.foreach(record -> System.out.println(record._2));
        });

        // TODO: processing pipeline

        ssc.start();
        // Keep the driver running until the streaming job is stopped or fails
        ssc.awaitTermination();
    }


}

I previously started ZooKeeper on port 2181 and a Kafka 0.9.0.0 server on port 9092, but I get the following error in the Spark driver:

Exception in thread "main" java.lang.ClassCastException: kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6$$anonfun$apply$7.apply(KafkaCluster.scala:97)
        at scala.Option.map(Option.scala:146)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:97)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:94)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:252)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:252)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:252)
        at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3.apply(KafkaCluster.scala:94)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3.apply(KafkaCluster.scala:93)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:252)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:252)
        at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:252)
        at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2.apply(KafkaCluster.scala:93)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2.apply(KafkaCluster.scala:92)
        at scala.util.Either$RightProjection.flatMap(Either.scala:522)
        at org.apache.spark.streaming.kafka.KafkaCluster.findLeaders(KafkaCluster.scala:92)
        at org.apache.spark.streaming.kafka.KafkaCluster.getLeaderOffsets(KafkaCluster.scala:186)
        at org.apache.spark.streaming.kafka.KafkaCluster.getLeaderOffsets(KafkaCluster.scala:168)
        at org.apache.spark.streaming.kafka.KafkaCluster.getLatestLeaderOffsets(KafkaCluster.scala:157)
        at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$5.apply(KafkaUtils.scala:215)
        at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$5.apply(KafkaUtils.scala:211)
        at scala.util.Either$RightProjection.flatMap(Either.scala:522)
        at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:211)
        at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
        at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:607)
        at org.apache.spark.streaming.kafka.KafkaUtils.createDirectStream(KafkaUtils.scala)
        at SparkStream.main(SparkStream.java:28)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

This seems like a library issue that I am debugging as well. I am running a Kafka 0.10.0.0 server with Scala 2.11. My Spark core/streaming dependencies are the _2.11 builds at version 2.0.1, the Spark streaming Kafka library is spark-streaming-kafka-0-8_2.11:2.0.1, and the Kafka client and streams dependencies are 0.10.0.1. When I use the kafka_2.11:0.10.0.1 library I get this error, but when I use kafka_2.10:0.10.0.1 it works fine.
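
For reference, here is a minimal POM sketch of the combination described above. The exact coordinates are an assumption reconstructed from this report, not a verified fix:

<!-- Spark Streaming Kafka 0.8 connector, Scala 2.11 build, matching Spark 2.0.1 -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
  <version>2.0.1</version>
</dependency>

<!-- Explicit Kafka client: kafka_2.10:0.10.0.1 reportedly works with the setup above,
     while kafka_2.11:0.10.0.1 triggers the BrokerEndPoint/Broker ClassCastException -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.10.0.1</version>
</dependency>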

Be sure your dependencies are compatible with one another. Here's a set that works together:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.10</artifactId>
  <version>1.6.2</version>
</dependency>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.6.2</version>
</dependency>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka_2.10</artifactId>
  <version>1.6.2</version>
</dependency>
