
java.lang.AbstractMethodError, org.apache.spark.internal.Logging$class.initializeLogIfNecessary


I'm running Kafka producer and consumer code for testing purposes on CDH 5.12. While doing so, I hit the error below when running the consumer code.

dataSet: org.apache.spark.sql.Dataset[(String, String)] = [key: string, value: string]
query: org.apache.spark.sql.streaming.StreamingQuery = org.apache.spark.sql.execution.streaming.StreamingQueryWrapper@109a5573
2018-10-25 10:08:37 ERROR MicroBatchExecution:91 - Query [id = 70bc4f7a-cc41-470d-afd0-d46e5aebf3db, runId = 4d974468-6c6b-47e5-976b-8b9aa98114e2] terminated with error
java.lang.AbstractMethodError
        at org.apache.spark.internal.Logging$class.initializeLogIfNecessary(Logging.scala:99)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.initializeLogIfNecessary(KafkaSourceProvider.scala:369)
        at org.apache.spark.internal.Logging$class.log(Logging.scala:46)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.log(KafkaSourceProvider.scala:369)
        at org.apache.spark.internal.Logging$class.logDebug(Logging.scala:58)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.logDebug(KafkaSourceProvider.scala:369)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$ConfigUpdater.set(KafkaSourceProvider.scala:439)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.kafkaParamsForDriver(KafkaSourceProvider.scala:394)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider.createSource(KafkaSourceProvider.scala:90)
        at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:277)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1$$anonfun$applyOrElse$1.apply(MicroBatchExecution.scala:80)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1$$anonfun$applyOrElse$1.apply(MicroBatchExecution.scala:77)
        at scala.collection.mutable.MapLike$class.getOrElseUpdate(MapLike.scala:194)
        at scala.collection.mutable.AbstractMap.getOrElseUpdate(Map.scala:80)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:77)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:75)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:266)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:272)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:272)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:272)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:256)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan$lzycompute(MicroBatchExecution.scala:75)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan(MicroBatchExecution.scala:61)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:265)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Exception in thread "stream execution thread for [id = 70bc4f7a-cc41-470d-afd0-d46e5aebf3db, runId = 4d974468-6c6b-47e5-976b-8b9aa98114e2]" java.lang.AbstractMethodError
        ... (same stack frames as in the trace above) ...
org.apache.spark.sql.streaming.StreamingQueryException: Query [id = 70bc4f7a-cc41-470d-afd0-d46e5aebf3db, runId = 4d974468-6c6b-47e5-976b-8b9aa98114e2] terminated with exception: null
  at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:295)
  at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: java.lang.AbstractMethodError
  ... (same stack frames as in the first trace above) ...

Below is the Scala code I'm running:

import org.apache.kafka.clients.consumer.KafkaConsumer

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}



val dataFrame = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host:9093,host:9093,host:9093")
  .option("kafka.security.protocol", "SASL_SSL")
  .option("kafka.sasl.kerberos.service.name", "kafka")
  .option("kafka.ssl.truststore.location", "/opt/cloudera/security/jks/truststore.jks")
  .option("kafka.ssl.truststore.password", "password")
  .option("subscribe", "SampleTopic")
  .load()

// dataFrame.writeStream.format("console").option("truncate","false").start().awaitTermination()

dataFrame.printSchema()

val dataSet = dataFrame.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)").as[(String, String)]
val query = dataSet.writeStream.outputMode("append").format("console").start()

query.awaitTermination()

Below is the command I'm running to execute the code above:

spark2-shell --files /tmp/jaas.conf,/path/to/.keytab  --conf spark.executor.extraJavaOptions=-Djava.security.auth.login.config=/tmp/jaas.conf --conf spark.driver.extraJavaOptions=-Djava.security.auth.login.config=/tmp/jaas.conf --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0  -i /path/to/file.scala
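
To confirm which Spark and Scala versions the shell is actually running (and therefore which connector build --packages must match), the following checks can be run inside spark2-shell. This snippet is illustrative and not part of the original post; spark is the SparkSession instance that the Spark 2.x shell provides.

println(spark.version)                        // e.g. 2.2.0.cloudera1; the --packages version should match this
println(scala.util.Properties.versionString)  // e.g. "version 2.11.8"; the _2.11 artifact suffix should match this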

Thanks.

I ran into a similar issue, and it turned out that the problem was an incompatibility between the Spark version and the versions of the packages being used. (An AbstractMethodError at run time generally means a class was compiled against one version of a trait or interface but is being loaded alongside a different, binary-incompatible version of it.)

For your case: according to the Cloudera documentation, CDH 5.12 ships with Spark 1.6, which in turn requires Scala 2.10, while the package being used, org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0, is compiled against Scala 2.11. You can try org.apache.spark:spark-streaming-kafka_2.10:1.6.1 instead.
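
As a rule of thumb, the Maven coordinate encodes both versions: org.apache.spark:spark-sql-kafka-0-10_<scalaVersion>:<sparkVersion>. Whatever Spark and Scala versions the cluster reports should be mirrored there. A hedged sketch, assuming a hypothetical cluster running Spark 2.3.0 with Scala 2.11 (substitute the versions your cluster actually reports):

spark2-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 -i /path/to/file.scala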

Credits: https://community.hortonworks.com/articles/197922/spark-23-structured-streaming-integration-with-apa.html
