
How to make org.apache.commons.logging.Log.info("message") write to a log file

I'm working on an open-source Hadoop project on the Java platform.

I added a class (in the YARN timeline server) that does various things in addition to printing information, and I write log messages using these two imports:

import org.apache.commons.logging.Log;

import org.apache.commons.logging.LogFactory;

example:

private static final Log LOG = LogFactory.getLog(IntermediateHistoryStore.class);
LOG.info("message");
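
(Note: org.apache.commons.logging is only a facade. With log4j on the classpath, as in a stock Hadoop install, it delegates to log4j, so where LOG.info(...) ends up is decided entirely by the log4j configuration, not by this code. For illustration only, a minimal log4j.properties that sends everything to the console looks like this:)

log4j.rootLogger=INFO,console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n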

To see my changes, I run the timeline service via the Hadoop cmd or via Task Manager:

C:\hdp\hadoop-2.7.1.2.3.0.0-2557> C:\Java\jdk1.7.0_79\bin\java -Xmx1000m -Dhadoop.log.dir=c:\hadoop\logs\hadoop -Dyarn.log.dir=c:\hadoop\logs\hadoop -Dhadoop.log.file=yarn-timelineserver-B-YAIF-9020.log -Dyarn.log.file=yarn-timelineserver-B-YAIF-9020.log -Dyarn.home.dir=C:\hdp\hadoop-2.7.1.2.3.0.0-2557 -Dyarn.id.str= -Dhadoop.home.dir=C:\hdp\hadoop-2.7.1.2.3.0.0-2557 -Dhadoop.root.logger=INFO,DRFA -Dyarn.root.logger=INFO,DRFA -Djava.library.path=;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\bin -Dyarn.policy.file=hadoop-policy.xml -Djava.library.path=;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\bin -classpath C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\common\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\common\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\mapreduce\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\mapreduce\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop\timelineserver-config\log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer

After this, I also need to run a Pig script via the Hadoop cmd.

Problem: all the information I print is written directly to the console (cmd) and not to the file (yarn-timelineserver.log).

The output in the Hadoop cmd:

AI: INFO 17-11-2015 11:22, 1: Configuration file has been successfully found as resource
AI: WARN 17-11-2015 11:22, 1: 'MaxTelemetryBufferCapacity': null value is replaced with '500'
AI: WARN 17-11-2015 11:22, 1: 'FlushIntervalInSeconds': null value is replaced with '5'
AI: WARN 17-11-2015 11:22, 1: Found an old version of HttpClient jar, for best performance consider upgrading to version 4.3+
AI: INFO 17-11-2015 11:22, 1: Using Apache HttpClient 4.2
AI: TRACE 17-11-2015 11:22, 1: No back-off container defined, using the default 'EXPONENTIAL'
AI: WARN 17-11-2015 11:22, 1: 'Channel.MaxTransmissionStorageCapacityInMB': null value is replaced with '10'
AI: TRACE 17-11-2015 11:22, 1: C:\Users\b-yaif\AppData\Local\Temp\1\AISDK\native\1.0.2 folder exists
AI: TRACE 17-11-2015 11:22, 1: Java process name is set to 'java#1'
AI: TRACE 17-11-2015 11:22, 1: Successfully loaded library 'applicationinsights-core-native-win64.dll'
AI: TRACE 17-11-2015 11:22, 1: Registering PC 'JSDK_ProcessMemoryPerformanceCounter'
AI: TRACE 17-11-2015 11:22, 1: Registering PC 'JSDK_ProcessCpuPerformanceCounter'
AI: TRACE 17-11-2015 11:22, 1: Registering PC 'JSDK_WindowsPerformanceCounterAsPC'

[INFO] IntermediateHistoryStore - The variable ( telemetry ) is  initialized successfully....!
[INFO] IntermediateHistoryStore - The variable ( originalStorage ) is  initialized successfully....!

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/hdp/hadoop-2.7.1.2.3.0.0-2557/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/hdp/hadoop-2.7.1.2.3.0.0-2557/share/hadoop/yarn/SaveHistoryToFile-1.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

[INFO] MetricsConfig - loaded properties from hadoop-metrics2.properties

[INFO] MetricsSystemImpl - Scheduled snapshot period at 10 second(s).

[INFO] MetricsSystemImpl - ApplicationHistoryServer metrics system started

[INFO] LeveldbTimelineStore - Using leveldb path c:/hadoop/logs/hadoop/timeline/leveldb-timeline-store.ldb

[INFO] LeveldbTimelineStore - Loaded timeline store version info 1.0

[INFO] LeveldbTimelineStore - Starting deletion thread with ttl 604800000 and cycle interval 300000

[INFO] LeveldbTimelineStore - Deleted 2 entities of type MAPREDUCE_JOB

[INFO] LeveldbTimelineStore - Deleted 4 entities of type MAPREDUCE_TASK

[INFO] LeveldbTimelineStateStore - Loading the existing database at th path: c:/hadoop/logs/hadoop/timeline-state/timeline-state-store.ldb

[INFO] LeveldbTimelineStore - Discarded 6 entities for timestamp 1447147360471 and earlier in 0.031 seconds

[INFO] LeveldbTimelineStateStore - Loaded timeline state store version info 1.0

[INFO] LeveldbTimelineStateStore - Loading timeline service state from leveldb

[INFO] LeveldbTimelineStateStore - Loaded 138 master keys and 0 tokens from leveldb, and latest sequence number is 0
[INFO] TimelineDelegationTokenSecretManagerService$TimelineDelegationTokenSecretManager - Recovering TimelineDelegationTokenSecretManager
[INFO] AbstractDelegationTokenSecretManager - Updating the current master key for generating delegation tokens
[INFO] AbstractDelegationTokenSecretManager - Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
[INFO] AbstractDelegationTokenSecretManager - Updating the current master key for generating delegation tokens
[INFO] CallQueueManager - Using callQueue class java.util.concurrent.LinkedBlockingQueue
[INFO] Server - Starting Socket Reader #1 for port 10200

[INFO] Server - Starting Socket Reader #2 for port 10200

[INFO] Server - Starting Socket Reader #3 for port 10200

[INFO] Server - Starting Socket Reader #4 for port 10200

[INFO] Server - Starting Socket Reader #5 for port 10200

[INFO] RpcServerFactoryPBImpl - Adding protocol org.apache.hadoop.yarn.api.ApplicationHistoryProtocolPB to the server
[INFO] Server - IPC Server Responder: starting
[INFO] Server - IPC Server listener on 10200: starting
[INFO] ApplicationHistoryClientService - Instantiated ApplicationHistoryClientService at b-yaif-9020.middleeast.corp.microsoft.com/10.165.224.174:10200
[INFO] ApplicationHistoryServer - Instantiating AHSWebApp at b-yaif-9020.middleeast.corp.microsoft.com:8188
[WARN] HttpRequestLog - Jetty request log can only be enabled using Log4j
[INFO] HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[INFO] HttpServer2 - Added global filter 'Timeline Authentication Filter' (class=org.apache.hadoop.yarn.server.timeline.security.TimelineAuthenticationFilter)
[INFO] HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context applicationhistory
[INFO] HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[INFO] HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
[INFO] HttpServer2 - adding path spec: /applicationhistory/*
[INFO] HttpServer2 - adding path spec: /ws/*
[INFO] HttpServer2 - Jetty bound to port 8188
[INFO] AbstractDelegationTokenSecretManager - Updating the current master key for generating delegation tokens
[INFO] AbstractDelegationTokenSecretManager - Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)

I want all the lines starting with [INFO] to be written to the log file that the YARN timeline server writes (yarn-timeline.log).

I think you should use log4j instead of commons-logging. It is very simple and the most commonly used logging API, and it can log to the console as well as to a file.
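
A minimal sketch of that change, assuming log4j 1.x is already on the classpath (it ships with Hadoop) and reusing the class name from the question:

import org.apache.log4j.Logger;

public class IntermediateHistoryStore {
    // plain log4j logger instead of the commons-logging facade
    private static final Logger LOG = Logger.getLogger(IntermediateHistoryStore.class);

    public static void main(String[] args) {
        LOG.info("message"); // destination is decided by the appenders in log4j.properties, not here
    }
}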

The Daily Rolling File Appender (DRFA) rolls the log once a day; try using RFA (RollingFileAppender) instead:

-Dhadoop.root.logger=INFO,DRFA --> -Dhadoop.root.logger=INFO,RFA 
-Dyarn.root.logger=INFO,DRFA  --> -Dyarn.root.logger=INFO,RFA
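
The RFA appender itself comes from the log4j.properties on the classpath; the stock Hadoop log4j.properties declares it roughly like this (check your timelineserver-config\log4j.properties for the exact names):

log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

With -Dyarn.root.logger=INFO,RFA this appender writes to ${hadoop.log.dir}/${hadoop.log.file}, i.e. c:\hadoop\logs\hadoop\yarn-timelineserver-B-YAIF-9020.log in the command above.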

Run:

C:\Java\jdk1.7.0_79\bin\java -Xmx1000m -Dhadoop.log.dir=c:\hadoop\logs\hadoop -Dyarn.log.dir=c:\hadoop\logs\hadoop -Dhadoop.log.file=yarn-timelineserver-B-YAIF-9020.log -Dyarn.log.file=yarn-timelineserver-B-YAIF-9020.log -Dyarn.home.dir=C:\hdp\hadoop-2.7.1.2.3.0.0-2557 -Dyarn.id.str= -Dhadoop.home.dir=C:\hdp\hadoop-2.7.1.2.3.0.0-2557 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\bin -Dyarn.policy.file=hadoop-policy.xml -Djava.library.path=;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\bin -classpath C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\common\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\common\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\mapreduce\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\mapreduce\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop\timelineserver-config\log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer

If you want to limit the log size, adjust both the hadoop.log.maxfilesize and hadoop.log.maxbackupindex parameters.
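
In the stock Hadoop log4j.properties these have defaults along the lines of:

hadoop.log.maxfilesize=256MB
hadoop.log.maxbackupindex=20

Since log4j resolves ${...} placeholders from system properties first, you should also be able to override them on the java command line, e.g. -Dhadoop.log.maxfilesize=100MB -Dhadoop.log.maxbackupindex=10 (values here are just examples).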
