
Spark Streaming + Kafka to HDFS

When I try to consume messages from a Kafka topic using Spark Streaming, I get the error below:

scala> val kafkaStream = KafkaUtils.createStream(ssc, "<ipaddress>:2181","spark-streaming-consumer-group", Map("test1" -> 5))

Error:

missing or invalid dependency detected while loading class file 'KafkaUtils.class'.
Could not access term kafka in package <root>,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the problematic classpath.)
A full rebuild may help if 'KafkaUtils.class' was compiled against an incompatible version of <root>.

Scala version: 2.11.8, Spark version: 2.1.0.2.6.0.3-8

I have tried all kinds of libraries for spark-streaming-kafka, but nothing worked.

I am executing the code from the Spark shell:

./spark-shell --jars /data/home/local/504/spark-streaming-kafka_2.10-1.5.1.jar,/data/home/local/504/spark-streaming_2.10-1.5.1.jar

Code

import org.apache.spark.SparkConf
val conf = new SparkConf().setMaster("local[*]").setAppName("KafkaReceiver")
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.Seconds
val ssc = new StreamingContext(conf, Seconds(10))
import org.apache.spark.streaming.kafka.KafkaUtils
val kafkaStream = KafkaUtils.createStream(ssc, "<ipaddress>:2181","spark-streaming-consumer-group", Map("test1" -> 5))

Any suggestions for this issue?

Since you are using Scala 2.11 and Spark 2.1.0, you should be using these jars:

  • spark-streaming-kafka-0-10_2.11-2.1.0.jar
  • spark-streaming_2.11-2.1.0.jar

This applies if you are using Kafka 0.10+; otherwise change the artifact accordingly.
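For example, the shell could be launched with the matching artifact pulled via --packages, which also resolves transitive dependencies such as kafka-clients (a sketch; the Maven coordinates below match the versions listed above, adjust to your environment):

./spark-shell --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.1.0

Note that spark-streaming_2.11 already ships with the Spark 2.1.0 distribution itself, so it does not need to be passed explicitly.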

A simple program would look like this:

import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.kafka.common.serialization.StringDeserializer

// "sc" is the SparkContext provided by the spark-shell
val streamingContext = new StreamingContext(sc, Seconds(5))

// Kafka consumer parameters
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "servers",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "test-consumer-group",
  "auto.offset.reset" -> "earliest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)
val topics = "topics,separated,by,comma".split(",")

// create the direct DStream
val stream = KafkaUtils.createDirectStream[String, String](
  streamingContext,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams)
)

// print the message values of each batch
stream.map(_.value().toString).print()

// start the computation; nothing runs until start() is called
streamingContext.start()
streamingContext.awaitTermination()
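Since the goal in the title is landing the messages in HDFS, the print() above could be swapped for an output operation that persists each batch, declared before streamingContext.start(). A minimal sketch, assuming a placeholder HDFS path:

// write each batch's values as text files under the given prefix (path is a placeholder)
stream.map(_.value().toString).saveAsTextFiles("hdfs://namenode:8020/user/spark/kafka-out/batch")

saveAsTextFiles creates one output directory per batch interval, named with the prefix plus the batch timestamp.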

Hope this helps!
