
Spark streaming 2.4.0 getting org.apache.spark.sql.AnalysisException: Failed to find data source: kafka

I am getting the following error when trying to read data from Kafka. I am using docker-compose to run Kafka and Spark.

Exception in thread "main" org.apache.spark.sql.AnalysisException: Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".

Here is my reading code:

import com.typesafe.scalalogging.LazyLogging
import org.apache.spark.sql.SparkSession

object Livedata extends App with LazyLogging {
  logger.info("starting livedata...")

  val spark = SparkSession.builder().appName("livedata").master("local[*]").getOrCreate()

  // Read a stream from the "topic" Kafka topic, starting from the latest offsets.
  val df = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")
    .option("subscribe", "topic")
    .option("startingOffsets", "latest")
    .load()

  df.printSchema()

  val hadoopConfig = spark.sparkContext.hadoopConfiguration
  hadoopConfig.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
  hadoopConfig.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
}
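(Side note: the snippet above only prints the schema; even once the Kafka source resolves, a streaming query still needs a sink and a call to start() before Spark reads anything. A minimal sketch, assuming a console sink is acceptable for local testing:)

// Minimal sketch (assumption: console output is fine for local testing).
// printSchema() alone never starts the stream; the query needs a sink and start().
val query = df
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .writeStream
  .format("console")
  .option("truncate", "false")
  .outputMode("append")
  .start()

query.awaitTermination()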

After reading several answers, I added all of the packages to the sbt build.

Here is the build.sbt file:

lazy val root = (project in file(".")).
  settings(
    inThisBuild(List(
      organization := "com.live.data",
      version := "0.1.0",
      scalaVersion := "2.12.2",
      assemblyJarName in assembly := "livedata.jar"
)),
    name := "livedata",
    libraryDependencies ++= List(
      "org.scalatest" %% "scalatest" % "3.0.5",
      "com.typesafe.scala-logging" %% "scala-logging" % "3.9.0",
      "org.apache.spark" %% "spark-sql" % "2.4.0",
      "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.4.0" % "provided",
      "org.apache.kafka"           % "kafka-clients"            % "2.5.0",
      "org.apache.kafka"           % "kafka-streams"            % "2.5.0",
      "org.apache.kafka"           %% "kafka-streams-scala"     % "2.5.0"
)
)
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs@_*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

不知道這里的主要問題是什么。

Update:

Finally I got the solution from here: Error connecting spark structured streaming + kafka

The main issue was this org.apache.spark.sql.AnalysisException: Failed to find data source: kafka exception, which is thrown because the spark-sql-kafka library is not on the classpath, so Spark cannot find org.apache.spark.sql.sources.DataSourceRegister in the META-INF/services folder.

The following block needs to be added to build.sbt. It makes sure the org.apache.spark.sql.sources.DataSourceRegister file ends up in the final jar.

// META-INF discarding
assemblyMergeStrategy in assembly := {
  case PathList("META-INF","services",xs @ _*) => MergeStrategy.filterDistinctLines
  case PathList("META-INF",xs @ _*) => MergeStrategy.discard
  case "application.conf" => MergeStrategy.concat
  case _ => MergeStrategy.first
}
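(For context on why filterDistinctLines is used for META-INF/services instead of discard: several jars ship a file at the same META-INF/services/org.apache.spark.sql.sources.DataSourceRegister path, and the Kafka source is registered by a single line inside it; in spark-sql-kafka-0-10 that entry should be the KafkaSourceProvider class. An illustrative sketch of the file contents:)

# META-INF/services/org.apache.spark.sql.sources.DataSourceRegister
# as contributed by spark-sql-kafka-0-10 (other jars add their own provider lines)
org.apache.spark.sql.kafka010.KafkaSourceProvider

filterDistinctLines merges those lines across jars, whereas discard would drop the file entirely and Spark would again fail to find the kafka source.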

spark-sql-kafka-0-10 is not provided, so remove that part from the dependency. (spark-sql is provided, though, so you could add it to that one instead.)

You also shouldn't pull in Kafka Streams (since Spark doesn't use it), and kafka-clients comes in transitively via sql-kafka, so it isn't needed either. A corrected dependency list is sketched below.
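Putting those two comments together, the dependency block would look roughly like this (a sketch, assuming the app is assembled into a fat jar with sbt-assembly and submitted with spark-submit; the version numbers are the ones from the question):

// Sketch of the corrected dependencies (assumption: fat jar run via spark-submit).
// spark-sql can be "provided" because spark-submit puts it on the classpath, but
// spark-sql-kafka-0-10 must NOT be "provided", or its DataSourceRegister entry
// (and the Kafka source itself) never makes it into the assembled jar.
libraryDependencies ++= List(
  "org.scalatest"              %% "scalatest"            % "3.0.5",
  "com.typesafe.scala-logging" %% "scala-logging"        % "3.9.0",
  "org.apache.spark"           %% "spark-sql"            % "2.4.0" % "provided",
  "org.apache.spark"           %% "spark-sql-kafka-0-10" % "2.4.0"
  // kafka-clients comes in transitively via spark-sql-kafka-0-10;
  // kafka-streams / kafka-streams-scala are not used by Spark, so they are dropped.
)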
