
Kafka producer hangs on send

The setup is a streaming job that fetches data from a custom source and has to write it to both Kafka and HDFS.

I wrote a (very) basic Kafka producer for this, but the whole streaming job hangs on the send method.

import java.util.Properties

// The Kafka client class is aliased because this wrapper class shares its name.
import org.apache.kafka.clients.producer.{ProducerConfig, ProducerRecord, KafkaProducer => ApacheKafkaProducer}

class KafkaProducer(val kafkaBootstrapServers: String, val kafkaTopic: String, val sslCertificatePath: String, val sslCertificatePassword: String) {

  val kafkaProps: Properties = new Properties()
  kafkaProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBootstrapServers)
  kafkaProps.put("acks", "1")
  kafkaProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
  kafkaProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
  kafkaProps.put("ssl.truststore.location", sslCertificatePath)
  kafkaProps.put("ssl.truststore.password", sslCertificatePassword)

  val kafkaProducer: ApacheKafkaProducer[Long, Array[String]] = new ApacheKafkaProducer(kafkaProps)

  def sendKafkaMessage(message: Message): Unit = {
    message.data.foreach(list => {
      val producerRecord: ProducerRecord[Long, Array[String]] = new ProducerRecord[Long, Array[String]](kafkaTopic, message.timeStamp.getTime, list.toArray)
      kafkaProducer.send(producerRecord)
    })
  }
}

And the code that calls the producer:

receiverStream.foreachRDD(rdd => {
  val messageRowRDD: RDD[Row] = rdd.mapPartitions(partition => {
    val parser: Parser = new Parser
    // One producer instance per partition, created on the executor.
    val kafkaProducer: KafkaProducer = new KafkaProducer(kafkaBootstrapServers, kafkaTopic, kafkaSslCertificatePath, kafkaSslCertificatePass)
    val newPartition = partition.map(message => {
      Logger.getLogger("importer").error("Writing Message to Kafka...")
      kafkaProducer.sendKafkaMessage(message)
      Logger.getLogger("importer").error("Finished writing Message to Kafka")
      message.data.map(singleMessage => parser.parseMessage(message.timeStamp.getTime, singleMessage))
    })
    newPartition.flatten
  })

  val df = sqlContext.createDataFrame(messageRowRDD, Schema.messageSchema)

  Logger.getLogger("importer").info("Entries-count: " + df.count())
  val row = Try(df.first)

  row match {
    case Success(s) => Persister.writeDataframeToDisk(df, outputFolder)
    case Failure(e) => Logger.getLogger("importer").warn("Resulting DataFrame is empty. Nothing can be written")
  }
})

From the logs, I can tell that every executor reaches the "Writing Message to Kafka..." log line, but never gets past it. All executors stay stuck at that point, and no exception is thrown.
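One way to make this kind of hang diagnosable (a sketch, not part of the original post): cap how long send may block with max.block.ms and pass a Callback so broker-side failures show up in the executor logs instead of blocking silently. The 10-second cap and the log message are illustrative assumptions; kafkaProps, kafkaProducer, and producerRecord refer to the producer class above.

import org.apache.kafka.clients.producer.{Callback, RecordMetadata}

// Sketch: bound blocking time on send. The 10-second value is an
// illustrative assumption, not taken from the original post.
kafkaProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000")

// send() returns immediately; the callback fires when the broker responds
// or the request fails, so a stuck metadata fetch or a missing acknowledgement
// surfaces as a TimeoutException instead of a silent hang.
kafkaProducer.send(producerRecord, new Callback {
  override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit = {
    if (exception != null)
      Logger.getLogger("importer").error("Kafka send failed", exception)
  }
})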

The Message class is a very simple case class with two fields, a timestamp and an array of strings.
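For reference, a shape consistent with how the snippets above use it (message.timeStamp.getTime, and message.data iterated as lists of strings); this reconstruction is an assumption, since the post does not show the class itself:

// Hypothetical reconstruction of the case class; the post only says
// "a timestamp and an array of strings", so the types are inferred from usage.
case class Message(timeStamp: java.util.Date, data: Seq[Seq[String]])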

This turned out to be due to Kafka's acks field.

With acks set to 1, sends go through much faster.
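For context, acks controls how many broker acknowledgements the producer waits for before a send is considered complete. A sketch of the standard values (general Kafka behaviour, not part of the original answer):

// acks controls how many broker acknowledgements a send waits for:
//   "0"   - fire and forget, no acknowledgement
//   "1"   - leader only (what the producer above uses)
//   "all" - leader plus all in-sync replicas; safest, but a send can
//           block for a long time if the replicas never catch up
kafkaProps.put(ProducerConfig.ACKS_CONFIG, "1")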

