
Log4j not working in Spark Streaming's foreachRDD method

When I use Spark Streaming on Spark 2.4 to consume Kafka, I find that the log statements outside the foreachRDD method are printed, but the ones inside foreachRDD are not. The logging API I use is log4j, version 1.2.

I tried adding … to the spark-defaults.properties configuration file. At first I wrote a wrong path there, and an error message reporting the log level and the log configuration file path was printed, so the spark.executor.extraJavaOptions and spark.driver.extraJavaOptions settings do take effect.
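For reference, settings of that kind usually look roughly like the sketch below; the exact lines are missing from the post, and the log4j.properties paths here are placeholders, not values from the question:

<code>
# Sketch of the assumed form of such settings (paths are placeholders)
spark.driver.extraJavaOptions    -Dlog4j.configuration=file:/path/to/log4j.properties
spark.executor.extraJavaOptions  -Dlog4j.configuration=file:log4j.properties
</code>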

My console output:

<code>
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/vdir/mnt/disk2/hadoop/yarn/local/usercache/root/filecache/494/__spark_libs__3795396964941241866.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
    19/01/10 14:17:16 ERROR KafkaSparkStreamingKafkaTests: receive+++++++++++++++++++++++++++++++
</code>

My code:
<code>
// 1. Driver-side code: this log line is printed.
if (args[3].equals("consumer1")) {
    logger.error("receive+++++++++++++++++++++++++++++++");
    SparkSQLService sparkSQLService = new SparkSQLService();
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "consumer1");
    sparkSQLService.sparkForwardedToKafka(sparkConf,
            CONSUMER_TOPIC,
            PRODUCER_TOPIC,
            new HashMap<String, Object>((Map) consumerProperties));
    // ......
}

// 2. The lambdas passed to foreachRDD/foreachPartition below run on the
// executors, so their log lines never appear in the driver's output.
public void sparkForwardedToKafka(SparkConf sparkConf, String consumerTopic, String producerTopic,
                                  Map<String, Object> kafkaConsumerParamsMap) {
    sparkConf.registerKryoClasses(new Class[]{SparkSQLService.class, FlatMapFunction.class,
            JavaPairInputDStream.class, Logger.class});
    JavaStreamingContext javaStreamingContext =
            new JavaStreamingContext(sparkConf, Durations.milliseconds(DURATION_SECONDS));
    Collection<String> topics = Arrays.asList(consumerTopic);
    JavaInputDStream<ConsumerRecord<String, String>> streams =
            KafkaUtils.createDirectStream(
                    javaStreamingContext,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.Subscribe(topics, kafkaConsumerParamsMap));
    if (producerTopic != null) {
        JavaPairDStream<Long, String> messages =
                streams.mapToPair(record -> new Tuple2<>(record.timestamp(), record.value()));
        messages.foreachRDD(rdd ->
                rdd.foreachPartition(partition ->
                        partition.forEachRemaining(tuple2 -> {
                            // Executed on an executor, not on the driver.
                            LOGGER.error("****" + tuple2._1 + "|" + tuple2._2);
                            KafkaService.getInstance().send(producerTopic,
                                    TaskContext.get().partitionId(), tuple2._1, null, tuple2._2);
                        })));
    }
}
</code>

My logger declaration: private static final Logger LOGGER = LoggerFactory.getLogger(SparkSQLService.class);

The logs inside and outside the foreach block are produced on different machines: the code outside the block runs on the driver, while the code inside it runs on the executors. So if you want to see the logs from inside the foreach block, you can access YARN to get the executor logs.

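A sketch of what that looks like in practice, assuming YARN log aggregation is enabled; the application id, paths, and file names below are placeholders, not values from the question:

<code>
# Ship a custom log4j.properties to every executor and point log4j 1.x at it
# (one common way to control executor logging; names are placeholders):
spark-submit \
  --files /local/path/log4j.properties \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
  ...

# Fetch the aggregated executor logs, where the foreachRDD log lines end up
# (take the real application id from the YARN ResourceManager UI):
yarn logs -applicationId application_1547072236000_0001
</code>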
