
How to make org.apache.commons.logging.Log.info("message") write to the log file

I'm working on open-source Hadoop on the Java platform.

I added a class (in the YARN timeline server) that does various things in addition to printing information, and I write that information using these two imports:

import org.apache.commons.logging.Log;

import org.apache.commons.logging.LogFactory;

example:

private static final Log LOG = LogFactory.getLog(IntermediateHistoryStore.class);
LOG.info("message");

To see my changes, I run the timeline service via the Hadoop cmd or via Task Manager:

**C:\hdp\hadoop-2.7.1.2.3.0.0-2557>** C:\Java\jdk1.7.0_79\bin\java -Xmx1000m -Dhadoop.log.dir=c:\hadoop\logs\hadoop -Dyarn.log.dir=c:\hadoop\logs\hadoop -Dhadoop.log.file=yarn-timelineserver-B-YAIF-9020.log -Dyarn.log.file=yarn-timelineserver-B-YAIF-9020.log -Dyarn.home.dir=C:\hdp\hadoop-2.7.1.2.3.0.0-2557 -Dyarn.id.str= -Dhadoop.home.dir=C:\hdp\hadoop-2.7.1.2.3.0.0-2557 -Dhadoop.root.logger=INFO,DRFA -Dyarn.root.logger=INFO,DRFA -Djava.library.path=;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\bin -Dyarn.policy.file=hadoop-policy.xml -Djava.library.path=;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\bin -classpath C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\common\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\common\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\mapreduce\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\mapreduce\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop\timelineserver-config\log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer

After this I need to run a Pig script via the Hadoop cmd as well.

Problem: all the information I print is written directly to the console (cmd) and not to the file (yarn-timelineserver.log).

The output in the Hadoop cmd:

AI: INFO 17-11-2015 11:22, 1: Configuration file has been successfully found as resource
AI: WARN 17-11-2015 11:22, 1: 'MaxTelemetryBufferCapacity': null value is replaced with '500'
AI: WARN 17-11-2015 11:22, 1: 'FlushIntervalInSeconds': null value is replaced with '5'
AI: WARN 17-11-2015 11:22, 1: Found an old version of HttpClient jar, for best performance consider upgrading to version 4.3+
AI: INFO 17-11-2015 11:22, 1: Using Apache HttpClient 4.2
AI: TRACE 17-11-2015 11:22, 1: No back-off container defined, using the default 'EXPONENTIAL'
AI: WARN 17-11-2015 11:22, 1: 'Channel.MaxTransmissionStorageCapacityInMB': null value is replaced with '10'
AI: TRACE 17-11-2015 11:22, 1: C:\Users\b-yaif\AppData\Local\Temp\1\AISDK\native\1.0.2 folder exists
AI: TRACE 17-11-2015 11:22, 1: Java process name is set to 'java#1'
AI: TRACE 17-11-2015 11:22, 1: Successfully loaded library 'applicationinsights-core-native-win64.dll'
AI: TRACE 17-11-2015 11:22, 1: Registering PC 'JSDK_ProcessMemoryPerformanceCounter'
AI: TRACE 17-11-2015 11:22, 1: Registering PC 'JSDK_ProcessCpuPerformanceCounter'
AI: TRACE 17-11-2015 11:22, 1: Registering PC 'JSDK_WindowsPerformanceCounterAsPC'

[INFO] IntermediateHistoryStore - The variable ( telemetry ) is initialized successfully....!
[INFO] IntermediateHistoryStore - The variable ( originalStorage ) is initialized successfully....!

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/hdp/hadoop-2.7.1.2.3.0.0-2557/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/hdp/hadoop-2.7.1.2.3.0.0-2557/share/hadoop/yarn/SaveHistoryToFile-1.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

[INFO] MetricsConfig - loaded properties from hadoop-metrics2.properties

[INFO] MetricsSystemImpl - Scheduled snapshot period at 10 second(s).

[INFO] MetricsSystemImpl - ApplicationHistoryServer metrics system started

[INFO] LeveldbTimelineStore - Using leveldb path c:/hadoop/logs/hadoop/timeline/leveldb-timeline-store.ldb

[INFO] LeveldbTimelineStore - Loaded timeline store version info 1.0

[INFO] LeveldbTimelineStore - Starting deletion thread with ttl 604800000 and cycle interval 300000

[INFO] LeveldbTimelineStore - Deleted 2 entities of type MAPREDUCE_JOB

[INFO] LeveldbTimelineStore - Deleted 4 entities of type MAPREDUCE_TASK

[INFO] LeveldbTimelineStateStore - Loading the existing database at th path: c:/hadoop/logs/hadoop/timeline-state/timeline-state-store.ldb

[INFO] LeveldbTimelineStore - Discarded 6 entities for timestamp 1447147360471 and earlier in 0.031 seconds

[INFO] LeveldbTimelineStateStore - Loaded timeline state store version info 1.0

[INFO] LeveldbTimelineStateStore - Loading timeline service state from leveldb

[INFO] LeveldbTimelineStateStore - Loaded 138 master keys and 0 tokens from leveldb, and latest sequence number is 0
[INFO] TimelineDelegationTokenSecretManagerService$TimelineDelegationTokenSecretManager - Recovering TimelineDelegationTokenSecretManager
[INFO] AbstractDelegationTokenSecretManager - Updating the current master key for generating delegation tokens
[INFO] AbstractDelegationTokenSecretManager - Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
[INFO] AbstractDelegationTokenSecretManager - Updating the current master key for generating delegation tokens
[INFO] CallQueueManager - Using callQueue class java.util.concurrent.LinkedBlockingQueue
[INFO] Server - Starting Socket Reader #1 for port 10200

[INFO] Server - Starting Socket Reader #2 for port 10200

[INFO] Server - Starting Socket Reader #3 for port 10200

[INFO] Server - Starting Socket Reader #4 for port 10200

[INFO] Server - Starting Socket Reader #5 for port 10200

[INFO] RpcServerFactoryPBImpl - Adding protocol org.apache.hadoop.yarn.api.ApplicationHistoryProtocolPB to the server
[INFO] Server - IPC Server Responder: starting
[INFO] Server - IPC Server listener on 10200: starting
[INFO] ApplicationHistoryClientService - Instantiated ApplicationHistoryClientService at b-yaif-9020.middleeast.corp.microsoft.com/10.165.224.174:10200
[INFO] ApplicationHistoryServer - Instantiating AHSWebApp at b-yaif-9020.middleeast.corp.microsoft.com:8188
[WARN] HttpRequestLog - Jetty request log can only be enabled using Log4j
[INFO] HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[INFO] HttpServer2 - Added global filter 'Timeline Authentication Filter' (class=org.apache.hadoop.yarn.server.timeline.security.TimelineAuthenticationFilter)
[INFO] HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context applicationhistory
[INFO] HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[INFO] HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
[INFO] HttpServer2 - adding path spec: /applicationhistory/*
[INFO] HttpServer2 - adding path spec: /ws/*
[INFO] HttpServer2 - Jetty bound to port 8188
[INFO] AbstractDelegationTokenSecretManager - Updating the current master key for generating delegation tokens
[INFO] AbstractDelegationTokenSecretManager - Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)

I want all the lines starting with [INFO] to be written to the log file that the YARN timeline server writes (yarn-timeline.log).

I think you should use log4j instead of commons-logging. It is very simple and the most commonly used logging API, and it can log to the console as well as to a file.
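To illustrate, a minimal log4j.properties that sends every logger's output to both the console and a file might look like the sketch below (the appender names and the file path are illustrative; Hadoop's own configuration uses the hadoop.log.dir/hadoop.log.file system properties shown in the command line above):

```properties
# Root logger at INFO, routed to two appenders
log4j.rootLogger=INFO, console, file

# Console appender
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n

# File appender writing to the log dir/file passed in via -D flags
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
```

With this in place, the same LOG.info(...) call appears on the console and in the file, with no code changes needed.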

The Daily Rolling File Appender (DRFA) rolls the log only once a day; try using RFA (the RollingFileAppender) instead:

-Dhadoop.root.logger=INFO,DRFA --> -Dhadoop.root.logger=INFO,RFA 
-Dyarn.root.logger=INFO,DRFA  --> -Dyarn.root.logger=INFO,RFA
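For this to take effect, the log4j.properties on the classpath must define an RFA appender. Hadoop's stock log4j.properties ships one along these lines (shown for reference; verify against your own file):

```properties
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
```

The File property resolves to the -Dhadoop.log.dir and -Dhadoop.log.file values already passed on the command line, so output lands in yarn-timelineserver-B-YAIF-9020.log.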

Run:

C:\Java\jdk1.7.0_79\bin\java -Xmx1000m -Dhadoop.log.dir=c:\hadoop\logs\hadoop -Dyarn.log.dir=c:\hadoop\logs\hadoop -Dhadoop.log.file=yarn-timelineserver-B-YAIF-9020.log -Dyarn.log.file=yarn-timelineserver-B-YAIF-9020.log -Dyarn.home.dir=C:\hdp\hadoop-2.7.1.2.3.0.0-2557 -Dyarn.id.str= -Dhadoop.home.dir=C:\hdp\hadoop-2.7.1.2.3.0.0-2557 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\bin -Dyarn.policy.file=hadoop-policy.xml -Djava.library.path=;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\bin -classpath C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\common\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\common\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\mapreduce\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\mapreduce\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop\timelineserver-config\log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer

If you want to limit the log size, adjust both the hadoop.log.maxfilesize and hadoop.log.maxbackupindex properties.
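Both are ordinary log4j substitution variables with defaults set near the top of Hadoop's log4j.properties, so you can adjust them there (the values below are examples, not required settings):

```properties
# Roll the file once it reaches 256 MB, keeping at most 20 rolled-over backups
hadoop.log.maxfilesize=256MB
hadoop.log.maxbackupindex=20
```

Alternatively, they can be overridden per launch with -Dhadoop.log.maxfilesize=... and -Dhadoop.log.maxbackupindex=... on the java command line.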
