
Read Kafka messages in a Spark batch job

What is the best option for reading the latest messages from a Kafka topic each day in a Spark batch job (running on EMR)? I don't want to use Spark Streaming, because I don't have a 24/7 cluster. I saw the kafka-utils option: https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka_2.11 but its last release was in 2016. Is it still the best option?

Thanks!

---------------------- edit ----------------------

Thanks for the response. I tried this JAR:

    group: 'org.apache.spark', name: 'spark-sql-kafka-0-10_2.12', version: '2.4.4'

Running it on EMR with: scalaVersion = '2.12.11', sparkVersion = '2.4.4'
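
For reference, a sketch of how that dependency might sit in a full build.gradle consistent with those versions. The compileOnly scope assumes Spark itself is provided by the EMR cluster at runtime; that scope is an assumption, not something stated in the post:

    // Hypothetical build.gradle fragment; 'compileOnly' assumes EMR
    // provides Spark at runtime, so it is not bundled into the job jar.
    dependencies {
        compileOnly group: 'org.apache.spark', name: 'spark-sql_2.12', version: '2.4.4'
        implementation group: 'org.apache.spark', name: 'spark-sql-kafka-0-10_2.12', version: '2.4.4'
    }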

With the following code:

    val df = spark
      .read
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka-utl")
      .option("subscribe", "mytopic")
      .option("startingOffsets", "earliest")
      .option("kafka.partition.assignment.strategy", "range") // added due to an error about a missing default value for this param
      .load()

    df.show()

In each batch, I want to read all the messages available in Kafka. The program failed with the following error:

21/08/18 16:29:50 WARN ConsumerConfig: The configuration auto.offset.reset = earliest was supplied but isn't a known config.

    Exception in thread "Kafka Offset Reader" java.lang.NoSuchMethodError: org.apache.kafka.clients.consumer.KafkaConsumer.subscribe(Ljava/util/Collection;)V
        at org.apache.spark.sql.kafka010.SubscribeStrategy.createConsumer(ConsumerStrategy.scala:63)
        at org.apache.spark.sql.kafka010.KafkaOffsetReader.consumer(KafkaOffsetReader.scala:86)
        at org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$fetchTopicPartitions$1(KafkaOffsetReader.scala:119)
        at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
        at scala.util.Success.$anonfun$map$1(Try.scala:255)
        at scala.util.Success.map(Try.scala:213)
        at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
        at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
        at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anon$1$$anon$2.run(KafkaOffsetReader.scala:59)

What did I do wrong? Thanks.

You're looking at the old spark-kafka package.

Try this one: https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-10

Alternatively, spark-sql-kafka-0-10.
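
For what it's worth, here is a minimal self-contained sketch of a bounded batch read against spark-sql-kafka-0-10 (the broker host/port and topic name are placeholders). A NoSuchMethodError on KafkaConsumer.subscribe like the one above usually means an older kafka-clients jar (0.8/0.9) on the classpath is shadowing the one the connector was compiled against, so make sure only one Kafka connector and one kafka-clients version end up on the job's classpath:

    import org.apache.spark.sql.SparkSession

    object KafkaDailyBatch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("kafka-daily-batch").getOrCreate()

        // spark.read (not readStream) with the "kafka" source yields a bounded
        // DataFrame: the job reads a fixed offset range and then exits, so no
        // 24/7 streaming cluster is needed.
        val df = spark.read
          .format("kafka")
          .option("kafka.bootstrap.servers", "kafka-utl:9092") // placeholder host:port
          .option("subscribe", "mytopic")
          .option("startingOffsets", "earliest") // or a JSON offsets map saved by the previous run
          .option("endingOffsets", "latest")     // batch-only option: snapshot the log at job start
          .load()

        // Kafka delivers key and value as binary; cast before inspecting.
        df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)").show()

        spark.stop()
      }
    }

To avoid re-reading the whole topic every day, persist the offsets each run reaches (for example to S3) and pass them back as the startingOffsets JSON on the next run.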
