
Read Kafka messages in a Spark batch job

What is the best option for reading, each day, the latest messages from a Kafka topic in a Spark batch job (running on EMR)? I don't want to use Spark Streaming, because I don't have a cluster running 24/7. I saw the kafka-utils option: https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka_2.11 but I see that the last version was released in 2016. Is it still the best option?

Thanks!

----------------------edit-------------

Thanks for the response. I tried this JAR:

    group: 'org.apache.spark', name: 'spark-sql-kafka-0-10_2.12', version: '2.4.4'

Running it on EMR with scalaVersion = '2.12.11' and sparkVersion = '2.4.4'.

With the following code:

    val df = spark
      .read
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka-utl")
      .option("subscribe", "mytopic")
      .option("startingOffsets", "earliest")
      // added due to an error about a missing default value for this param
      .option("kafka.partition.assignment.strategy", "range")
      .load()

    df.show()

I want each batch to read all the messages available in Kafka. The program failed with the following error:

21/08/18 16:29:50 WARN ConsumerConfig: The configuration auto.offset.reset = earliest was supplied but isn't a known config.

    Exception in thread "Kafka Offset Reader" java.lang.NoSuchMethodError: org.apache.kafka.clients.consumer.KafkaConsumer.subscribe(Ljava/util/Collection;)V
        at org.apache.spark.sql.kafka010.SubscribeStrategy.createConsumer(ConsumerStrategy.scala:63)
        at org.apache.spark.sql.kafka010.KafkaOffsetReader.consumer(KafkaOffsetReader.scala:86)
        at org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$fetchTopicPartitions$1(KafkaOffsetReader.scala:119)
        at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
        at scala.util.Success.$anonfun$map$1(Try.scala:255)
        at scala.util.Success.map(Try.scala:213)
        at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
        at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
        at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anon$1$$anon$2.run(KafkaOffsetReader.scala:59)

What did I do wrong? Thanks.

You're looking at the old spark-kafka package.

Try this one: https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-10
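Despite the "streaming" in the name, that package also covers one-shot jobs: KafkaUtils.createRDD is a plain batch read over explicit offset ranges, with no streaming context involved. A minimal sketch, assuming Spark 2.4.4 / Scala 2.12 and reusing the spark session from your code; the broker address, group id, and offset numbers are placeholders:

    import scala.collection.JavaConverters._
    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010.{KafkaUtils, LocationStrategies, OffsetRange}

    val sc = spark.sparkContext

    // Consumer config; the broker address and group id are placeholders.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "kafka-utl:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "daily-batch-job"
    )

    // One OffsetRange per topic partition, covering [fromOffset, untilOffset).
    // In practice you would persist the offsets reached by the previous run
    // and start the next day's run from there; 0 and 100 are placeholders.
    val offsetRanges = Array(OffsetRange("mytopic", 0, 0L, 100L))

    val rdd = KafkaUtils.createRDD[String, String](
      sc, kafkaParams.asJava, offsetRanges, LocationStrategies.PreferConsistent)

    rdd.map(_.value).take(10).foreach(println)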

Alternatively, use spark-sql-kafka-0-10.
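That is the package your edit already uses, and it supports bounded batch reads directly: spark.read (rather than spark.readStream) combined with endingOffsets gives a one-shot job that drains whatever is currently in the topic. A minimal sketch, assuming Spark 2.4.4 / Scala 2.12; the broker and topic names are the placeholders from the question:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("daily-kafka-batch").getOrCreate()

    val df = spark
      .read                                   // batch read, not readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka-utl:9092")
      .option("subscribe", "mytopic")
      .option("startingOffsets", "earliest")  // or a JSON map of offsets saved by the previous run
      .option("endingOffsets", "latest")      // bound the batch at the current log end
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "partition", "offset")

    df.show()

As for the NoSuchMethodError in your edit: together with the warning that auto.offset.reset "isn't a known config" and the need to set kafka.partition.assignment.strategy by hand, it usually means an older kafka-clients jar somewhere on the classpath is shadowing the newer client this package depends on, so it's worth checking the job's dependencies on EMR for a conflicting kafka-clients version.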
