Unable to find encoder for type stored in a Dataset for streaming mongo db data through Kafka

I want to tail the Mongo oplog and stream it through Kafka. So I found the Debezium Kafka CDC connector, which tails the Mongo oplog using its built-in serialization technique.

The Schema Registry serializes the records with the following converters,

'key.converter=io.confluent.connect.avro.AvroConverter' and
'value.converter=io.confluent.connect.avro.AvroConverter'
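These converter settings normally live in the Kafka Connect worker configuration together with the Schema Registry URL. A minimal sketch, assuming a local standalone worker and the same hosts/ports used later in the question:

# Connect worker properties (sketch; hosts and ports are assumptions)
bootstrap.servers=127.0.0.1:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://127.0.0.1:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://127.0.0.1:8081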

Below are the library dependencies I am using in the project

libraryDependencies += "io.confluent" % "kafka-avro-serializer" % "3.1.2"

libraryDependencies += "org.apache.kafka" % "kafka-streams" % "0.10.2.0

Below is the streaming code that deserializes the Avro data

import org.apache.spark.sql.{Dataset, SparkSession}
import io.confluent.kafka.schemaregistry.client.rest.RestService
import io.confluent.kafka.serializers.KafkaAvroDeserializer
import org.apache.avro.Schema

import scala.collection.JavaConverters._

object KafkaStream{
  def main(args: Array[String]): Unit = {

    val sparkSession = SparkSession
      .builder
      .master("local")
      .appName("kafka")
      .getOrCreate()
    sparkSession.sparkContext.setLogLevel("ERROR")

    import sparkSession.implicits._

    case class DeserializedFromKafkaRecord(key: String, value: String)

    val schemaRegistryURL = "http://127.0.0.1:8081"

    val topicName = "productCollection.inventory.Product"
    val subjectValueName = topicName + "-value"

    //create RestService object
    val restService = new RestService(schemaRegistryURL)

    //.getLatestVersion returns io.confluent.kafka.schemaregistry.client.rest.entities.Schema object.
    val valueRestResponseSchema = restService.getLatestVersion(subjectValueName)

    //Use Avro parsing classes to get Avro Schema
    val parser = new Schema.Parser
    val topicValueAvroSchema: Schema = parser.parse(valueRestResponseSchema.getSchema)

    //key schema is typically just string but you can do the same process for the key as the value
    val keySchemaString = "\"string\""
    val keySchema = parser.parse(keySchemaString)

    //Create a map with the Schema registry url.
    //This is the only Required configuration for Confluent's KafkaAvroDeserializer.
    val props = Map("schema.registry.url" -> schemaRegistryURL)

    //Declare SerDe vars before using Spark structured streaming map. Avoids non serializable class exception.
    var keyDeserializer: KafkaAvroDeserializer = null
    var valueDeserializer: KafkaAvroDeserializer = null

    //Create structured streaming DF to read from the topic.
    val rawTopicMessageDF = sparkSession.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "127.0.0.1:9092")
      .option("subscribe", topicName)
      .option("startingOffsets", "earliest")
      .option("maxOffsetsPerTrigger", 20)  //remove for prod
      .load()
    rawTopicMessageDF.printSchema()

    //instantiate the SerDe classes if not already, then deserialize!
    val deserializedTopicMessageDS = rawTopicMessageDF.map{
      row =>
        if (keyDeserializer == null) {
          keyDeserializer = new KafkaAvroDeserializer
          keyDeserializer.configure(props.asJava, true)  //isKey = true
        }
        if (valueDeserializer == null) {
          valueDeserializer = new KafkaAvroDeserializer
          valueDeserializer.configure(props.asJava, false) //isKey = false
        }

        //Pass the Avro schema.
        val deserializedKeyString = keyDeserializer.deserialize(topicName, row.getAs[Array[Byte]]("key"), keySchema).toString //topic name is actually unused in the source code, just required by the signature. Weird right?
        val deserializedValueJsonString = valueDeserializer.deserialize(topicName, row.getAs[Array[Byte]]("value"), topicValueAvroSchema).toString

        DeserializedFromKafkaRecord(deserializedKeyString, deserializedValueJsonString)
    }

    val deserializedDSOutputStream = deserializedTopicMessageDS.writeStream
      .outputMode("append")
      .format("console")
      .option("truncate", false)
      .start()

    deserializedDSOutputStream.awaitTermination()  //block until the streaming query terminates
  }
}

The Kafka consumer runs fine and I can see the data being tailed from the oplog, but when I run the code above I get the following errors,

Error:(70, 59) Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases.
    val deserializedTopicMessageDS = rawTopicMessageDF.map{

Error:(70, 59) not enough arguments for method map: (implicit evidence$7: org.apache.spark.sql.Encoder[DeserializedFromKafkaRecord])org.apache.spark.sql.Dataset[DeserializedFromKafkaRecord].
Unspecified value parameter evidence$7.
    val deserializedTopicMessageDS = rawTopicMessageDF.map{

Please suggest what I am missing here.

Thanks in advance.

Just declare your case class DeserializedFromKafkaRecord outside of the main method.

I imagine the Spark magic with implicit encoders does not work properly when the case class is defined inside main, because the case class does not exist until the main method is executed.
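Applied to the code in the question, that just means moving the case class up to the object level; a minimal skeleton (the body of main stays unchanged):

object KafkaStream {

  // declared at object level so spark.implicits._ can derive an Encoder for it
  case class DeserializedFromKafkaRecord(key: String, value: String)

  def main(args: Array[String]): Unit = {
    // ... the rest of the streaming code from the question, unchanged ...
  }
}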

The problem can be reproduced with a simple example (without Kafka):

import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}

object SimpleTest {

  // declare CaseClass outside of main method
  case class CaseClass(value: Int)

  def main(args: Array[String]): Unit = {

    // when case class is declared here instead
    // of outside main, the program does not compile
    // case class CaseClass(value: Int)

    val sparkSession = SparkSession
      .builder
      .master("local")
      .appName("simpletest")
      .getOrCreate()

    import sparkSession.implicits._

    val df: DataFrame = sparkSession.sparkContext.parallelize(1 to 10).toDF()
    val ds: Dataset[CaseClass] = df.map { row =>
      CaseClass(row.getInt(0))
    }

    ds.show()
  }
}
