
Spark Scala 2.11.8 Spark HbaseConnector error

I am trying to save data from a Kafka stream as JSON to HBase using the Spark HBase connector on Scala 2.11.8. However, when I try to save, I get the error below. I am using the shc connector from Hortonworks. My SBT settings are as follows.

Is this connector still supported?

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.11" % "2.0.1" % "provided",
  "org.apache.spark" % "spark-sql_2.11" % "2.0.1" % "provided",
  "org.apache.spark" % "spark-streaming_2.11" % "2.0.1" % "provided",
  ("org.apache.spark" % "spark-streaming-kafka-0-8_2.11" % "2.0.1").exclude("org.spark-project.spark", "unused"),
  "org.json4s" % "json4s-native_2.11" % "3.2.10",
  "joda-time" % "joda-time" % "2.9.9",
   "com.hortonworks" % "shc" % "1.1.1-2.1-s_2.11"
)

The error is as follows:

Exception in thread "streaming-job-executor-1" java.lang.NoClassDefFoundError: org/apache/spark/Logging
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
    at java.lang.Class.getConstructor0(Class.java:3075)
    at java.lang.Class.newInstance(Class.java:412)
    at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:427)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194)
    at $line20.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$CTUMhbaseingest$.saveHbase$1(<console>:193)
    at $line20.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$CTUMhbaseingest$.runBusinessLogicAndProduceOutput(<console>:295)
    at $line20.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$CTUMhbaseingest$$anonfun$run$1.apply(<console>:312)
    at $line20.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$CTUMhbaseingest$$anonfun$run$1.apply(<console>:311)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:627)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:627)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:245)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:245)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:245)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:244)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.ClassNotFoundException: org.apache.spark.Logging
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)

There is a problem with your executor or driver classpath. org.apache.spark.Logging exists only in Spark 1.5.2 and earlier; it is not part of the public API in Spark 2.x, so a library compiled against Spark 1.x (such as an old shc build) fails with NoClassDefFoundError on a 2.x runtime. However, I can see from your libraryDependencies that you are using Spark 2.0.1. You can check the Environment tab of the Spark application UI to see the classpath the driver and executors are actually using.
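
If the root cause turns out to be an shc build compiled against Spark 1.x ending up on the classpath, one way forward is to align the connector and Spark versions explicitly in build.sbt. The snippet below is only a sketch, not a verified fix: the shc-core artifact name, the 1.1.1-2.1-s_2.11 version (which targets Spark 2.1 / Scala 2.11), and the Hortonworks repository URL are assumptions you should check against the shc project's releases page before use.

resolvers += "Hortonworks Repository" at "http://repo.hortonworks.com/content/groups/public/"

libraryDependencies ++= Seq(
  // Spark bumped to 2.1.x here to match the "-2.1-" in the shc version tag (assumption)
  "org.apache.spark" %% "spark-core"      % "2.1.0" % "provided",
  "org.apache.spark" %% "spark-sql"       % "2.1.0" % "provided",
  "org.apache.spark" %% "spark-streaming" % "2.1.0" % "provided",
  // shc-core rather than shc: the core artifact is the one that ships the HBase DataSource
  "com.hortonworks" % "shc-core" % "1.1.1-2.1-s_2.11"
)

Whatever exact versions you pick, the point is the same as above: the shc build and the Spark runtime must both come from the 2.x line, otherwise the connector will look for org.apache.spark.Logging and fail.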
