Data from Kafka is not printed to the console when I submit the jar file (Spark Streaming + Kafka integration 3.1.1)
There are no errors when I submit the jar file.
But when I send data over the HTTP protocol, no data is printed.
(When I check with kafka-console-consumer.sh, the data prints fine.)
[Image: jar file submitted, data is not printed]
[Image: kafka-console-consumer.sh, data is printed]
Command:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --group test-consumer --topic test01 --from-beginning
[Java file]
2-1. Dependencies
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.12</artifactId>
        <version>3.1.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.12</artifactId>
        <version>3.1.1</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
        <version>3.1.1</version>
    </dependency>
</dependencies>
2-2. Code
package SparkTest.SparkStreaming;

import org.apache.spark.streaming.*;
import org.apache.spark.streaming.api.java.*;
import java.util.*;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.kafka010.*;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;

public final class JavaWordCount {
    public static void main(String[] args) throws Exception {
        // Create a StreamingContext on YARN with a batch interval of 1 second
        SparkConf conf = new SparkConf().setMaster("yarn").setAppName("JavaWordCount");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

        // Kafka consumer configuration
        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "test-consumer");
        kafkaParams.put("auto.offset.reset", "latest");
        kafkaParams.put("enable.auto.commit", false);

        // Subscribe to the topic
        Collection<String> topics = Arrays.asList("test01");
        JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(
                        jssc,
                        LocationStrategies.PreferBrokers(),
                        ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams)
                );

        // Map each ConsumerRecord to its value, yielding a DStream of strings
        JavaDStream<String> data = stream.map(v -> {
            return v.value();
        });
        data.print();

        jssc.start();
        jssc.awaitTermination();
    }
}
You are using --from-beginning in the console consumer, but auto.offset.reset=latest in the Spark code.
Therefore, if you want to see any data, you need to run the producer while the Spark job is running.
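For example (a suggested change, not part of the original post), a new consumer group with no committed offsets can instead read from the oldest available records, mirroring --from-beginning:

// Replaces the existing auto.offset.reset entry in kafkaParams;
// it only takes effect when the group has no committed offsets yet
kafkaParams.put("auto.offset.reset", "earliest");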
You should also consider using the spark-sql-kafka-0-10 Structured Streaming dependency, as you can find in the KafkaWordCount example; a sketch follows below.
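For reference, here is a minimal Structured Streaming sketch of the same pipeline. It assumes the spark-sql-kafka-0-10_2.12 artifact (version 3.1.1) has been added to the pom; the broker and topic come from the question, while the class name and the startingOffsets choice are illustrative, not from the original post.

package SparkTest.SparkStreaming;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public final class StructuredKafkaWordCount {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("StructuredKafkaWordCount")
                .getOrCreate();

        // Subscribe to the same topic as the DStream version
        Dataset<Row> df = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "test01")
                // "earliest" mirrors --from-beginning for a fresh query (assumption)
                .option("startingOffsets", "earliest")
                .load();

        // Kafka values arrive as binary; cast to string before printing
        StreamingQuery query = df.selectExpr("CAST(value AS STRING)")
                .writeStream()
                .outputMode("append")
                .format("console")
                .start();

        query.awaitTermination();
    }
}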