
Spark with Kafka: NoSuchMethodError: org.apache.kafka.clients.consumer.KafkaConsumer.subscribe(Ljava/util/Collection;)

I am trying to run a Java Kafka consumer on Spark, and no matter what I do I get the exception below. In the exception I see (ConsumerStrategy.scala:85). Why does it say Scala here? Does this mean it is using Scala methods instead of Java ones? Are any of my libraries conflicting?

My pom:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.3.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.3.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>2.3.0</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
       <groupId>org.apache.spark</groupId>
       <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
       <version>2.4.5</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector_2.11</artifactId>
        <version>2.3.0</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector-java_2.11</artifactId>
        <version>1.5.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.13</artifactId>
        <version>2.4.1</version>
    </dependency>
</dependencies>
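
Note the version skew in this pom: the Spark artifacts are 2.3.0 built for Scala 2.11, while spark-streaming-kafka-0-10 is 2.4.5 and kafka_2.13 is the broker/server artifact built for Scala 2.13. A minimal sketch of an aligned Kafka block, assuming you stay on Spark 2.3.0 / Scala 2.11 (the kafka_2.13 dependency is usually unnecessary for a consumer, because the integration already pulls in a compatible kafka-clients transitively):

<!-- match the Kafka integration to Spark's own version and Scala suffix -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.3.0</version>
</dependency>
<!-- and drop kafka_2.13: it is the Scala 2.13 server artifact, not needed by a client -->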

My code:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.ConsumerStrategy;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
public class Main {
    public static void main(String[] args) throws InterruptedException {
        SparkConf sparkConf = new SparkConf();
        sparkConf.setAppName("kafkaTest");
       // sparkConf.set("spark.cassandra.connection.host", "127.0.0.1");

        JavaStreamingContext streamingContext = new JavaStreamingContext(
                sparkConf, Durations.seconds(1));

        Map<String, Object> kafkaParams = new HashMap<String, Object>();
        kafkaParams.put("bootstrap.servers", "kafka.kafka:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "spark_group1");
        kafkaParams.put("auto.offset.reset", "latest");
        kafkaParams.put("enable.auto.commit", false);
        kafkaParams.put("partition.assignment.strategy", "range");

        System.out.println("Hello1");
        Collection<String> topics = Arrays.asList("spark");
        System.out.println("Hello2");
        ConsumerStrategy<String, String> cons = ConsumerStrategies.Subscribe(topics, kafkaParams);

        JavaInputDStream<ConsumerRecord<String, String>> messages =
                KafkaUtils.createDirectStream(
                        streamingContext,
                        LocationStrategies.PreferConsistent(),
                        cons);

        messages.foreachRDD(rdd -> {
            // printf uses %s, not the SLF4J-style {} placeholder
            System.out.printf("Mssg received %s%n", rdd);
        });

        // the stack trace below is thrown from this start() call
        streamingContext.start();
        streamingContext.awaitTermination();
    }
}

I ran it with:

spark-submit --jars spark-streaming-kafka-0-10_2.11-2.3.0.jar --class Main spark-kafka-1.0-SNAPSHOT-jar-with-dependencies.jar

(I also tried it without --jars spark-streaming-kafka-0-10_2.11-2.3.0.jar, and with version 2.4.5 of that library.)

and got this exception:

Exception in thread "streaming-start" java.lang.NoSuchMethodError: org.apache.kafka.clients.consumer.KafkaConsumer.subscribe(Ljava/util/Collection;)V
        at org.apache.spark.streaming.kafka010.Subscribe.onStart(ConsumerStrategy.scala:85)
        at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.consumer(DirectKafkaInputDStream.scala:73)
        at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:259)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$start$7.apply(DStreamGraph.scala:54)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$start$7.apply(DStreamGraph.scala:54)
        at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach_quick(ParArray.scala:143)
        at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:136)
        at scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:972)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
        at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
        at scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:969)
        at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
        at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
        at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
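
A NoSuchMethodError here means the KafkaConsumer class actually loaded at runtime is older than the one the code was compiled against: subscribe(Ljava/util/Collection;) only exists in kafka-clients 0.10 and later (0.9 had subscribe(List), 0.8 subscribe(String...)). The frame says ConsumerStrategy.scala simply because spark-streaming-kafka-0-10 is itself written in Scala; it still calls the Java client. A throwaway class like this (WhichJar is just an illustrative name) can show which jar the class really came from when run on the same classpath:

import org.apache.kafka.clients.consumer.KafkaConsumer;

public class WhichJar {
    public static void main(String[] args) {
        // Prints the jar that actually provided KafkaConsumer, e.g.
        // file:/opt/hadoop/share/hadoop/tools/lib/kafka-clients-0.8.2.1.jar
        // (getCodeSource() can be null, but only for bootstrap classes)
        System.out.println(KafkaConsumer.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}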

I tried export SPARK_KAFKA_VERSION=0.10 and also tried adding kafka-clients 0.10.2.1, but I still get the same result.

Update: the problem was that there is another Kafka library on the Spark node's classpath: /opt/hadoop/share/hadoop/tools/lib/kafka-clients-0.8.2.1.jar. To override it I used the Maven Shade plugin; nothing else worked for me. See this link for details: https://medium.com/@minyodev/relocating-classes-using-apache-maven-shade-plugin-6957a1a8666d
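
For reference, the relocation described in that article boils down to a shade-plugin section along these lines (the plugin version and the shaded.* prefix are illustrative choices, not fixed values); it rewrites org.apache.kafka inside the fat jar so those classes no longer collide with the cluster's old kafka-clients:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.4</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <relocations>
                    <!-- rewrite org.apache.kafka.* in the fat jar (and all
                         references to it) to a private package name -->
                    <relocation>
                        <pattern>org.apache.kafka</pattern>
                        <shadedPattern>shaded.org.apache.kafka</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>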
