
Unable to read Kafka topic data using Spark

I have data like the below in one of the topics I created, named "sampleTopic":

sid,Believer  
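For reference, a record like this can be entered from CMD with Kafka's bundled console producer (a sketch; the path assumes a Windows Kafka installation and the broker address matches the code below):

bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic sampleTopic
>sid,Believer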

Here the first argument is the username and the second is the name of the song that the user frequently listens to. I have started zookeeper, the Kafka server, and a producer with the topic name mentioned above, and I have entered the above data for that topic using CMD. Now I want to read the topic in Spark, perform some aggregation, and write the result back to the stream. Below is my code:

package com.sparkKafka
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
object SparkKafkaTopic {
  def main(args: Array[String]) {
    val spark = SparkSession.builder().appName("SparkKafka").master("local[*]").getOrCreate()
    println("hey")
    val df = spark
      .readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "sampleTopic1")
      .load()
    val query = df.writeStream
      .outputMode("append")
      .format("console")
      .start().awaitTermination()


  }
}

However, when I execute the above code it gives:

+----+--------------------+------------+---------+------+--------------------+-------------+
| key|               value|       topic|partition|offset|           timestamp|timestampType|
+----+--------------------+------------+---------+------+--------------------+-------------+
|null|[73 69 64 64 68 6...|sampleTopic1|        0|     4|2020-05-31 12:12:...|            0|
+----+--------------------+------------+---------+------+--------------------+-------------+

along with the following messages looping infinitely:

20/05/31 11:56:12 INFO Fetcher: [Consumer clientId=consumer-1, groupId=spark-kafka-source-0d6807b9-fcc9-4847-abeb-f0b81ab25187--264582860-driver-0] Resetting offset for partition sampleTopic1-0 to offset 4.
20/05/31 11:56:12 INFO Fetcher: [Consumer clientId=consumer-1, groupId=spark-kafka-source-0d6807b9-fcc9-4847-abeb-f0b81ab25187--264582860-driver-0] Resetting offset for partition sampleTopic1-0 to offset 4.
20/05/31 11:56:12 INFO Fetcher: [Consumer clientId=consumer-1, groupId=spark-kafka-source-0d6807b9-fcc9-4847-abeb-f0b81ab25187--264582860-driver-0] Resetting offset for partition sampleTopic1-0 to offset 4.
20/05/31 11:56:12 INFO Fetcher: [Consumer clientId=consumer-1, groupId=spark-kafka-source-0d6807b9-fcc9-4847-abeb-f0b81ab25187--264582860-driver-0] Resetting offset for partition sampleTopic1-0 to offset 4.
20/05/31 11:56:12 INFO Fetcher: [Consumer clientId=consumer-1, groupId=spark-kafka-source-0d6807b9-fcc9-4847-abeb-f0b81ab25187--264582860-driver-0] Resetting offset for partition sampleTopic1-0 to offset 4.
20/05/31 11:56:12 INFO Fetcher: [Consumer clientId=consumer-1, groupId=spark-kafka-source-0d6807b9-fcc9-4847-abeb-f0b81ab25187--264582860-driver-0] Resetting offset for partition sampleTopic1-0 to offset 4.

I need output something like the below:

[image: expected output]

After modifying the code as suggested by Srinivas, I got the following output:

[image: output after the suggested change]

I am not sure what exactly is wrong here. Please guide me through it.

The spark-sql-kafka jar is missing; it contains the implementation of the 'kafka' data source.

You can add the jar using a config option, or build a fat jar that includes the spark-sql-kafka jar. Please use the version of the jar that matches your Spark version:

val spark = SparkSession.builder()
  .appName("SparkKafka").master("local[*]")
  .config("spark.jars","/path/to/spark-sql-kafka-xxxxxx.jar")
  .getOrCreate()
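If you run the job with spark-submit instead, the --packages flag resolves the same dependency from Maven automatically (a sketch; the coordinates below assume Spark 2.3.0 built against Scala 2.11, and the application jar name is a placeholder):

spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 --class com.sparkKafka.SparkKafkaTopic your-app.jar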

Or add the spark-sql-kafka library to your build file, as below.

build.sbt:

libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.3.0" // change to your Spark version

pom.xml:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql-kafka-0-10_2.11</artifactId>
    <version>2.3.0</version> <!-- change to your Spark version -->
</dependency>
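Note that the _2.11 suffix on the Maven artifact is the Scala binary version and must match the Scala version your Spark build uses (for example, _2.12 for Spark built against Scala 2.12). In sbt, the %% operator appends this suffix automatically, which is why the build.sbt entry omits it.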

Then change your code like below:

package com.sparkKafka

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Typed view of the records produced by the Kafka source
case class KafkaMessage(key: String, value: String, topic: String, partition: Int, offset: Long, timestamp: String)

object SparkKafkaTopic {

  def main(args: Array[String]) {
    val spark = SparkSession.builder().appName("SparkKafka").master("local[*]").getOrCreate()
    import spark.implicits._

    val df = spark
      .readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "sampleTopic1")
      .load()

    // View each record as KafkaMessage so value becomes a String, then
    // split the comma-separated value into userName and songName columns.
    df.as[KafkaMessage]
      .select(split($"value", ",")(0).as("userName"), split($"value", ",")(1).as("songName"))
      .writeStream
      .outputMode("append")
      .format("console")
      .start()
      .awaitTermination()
  }
}

/*
 Console output:

 +--------+--------+
 |userName|songName|
 +--------+--------+
 |     sid|Believer|
 +--------+--------+
*/
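Since the original goal included aggregation, here is a minimal sketch building on the parsed columns above that counts how many times each song appears; a streaming aggregation on the console sink needs outputMode "complete" (or "update") rather than "append":

df.as[KafkaMessage]
  .select(split($"value", ",")(1).as("songName"))
  .groupBy($"songName")
  .count()
  .writeStream
  .outputMode("complete") // aggregations cannot use "append" without a watermark
  .format("console")
  .start()
  .awaitTermination()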
