
hadoop: how to show execution time of put command? Or how to show the duration of loading a file into HDFS?

How can I configure the put command in Hadoop so that it shows the execution time?

Because this command:

hadoop fs -put table.txt /tables/table

just returns this:

16/04/04 01:44:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

The command works, but it does not show any execution time. Do you know of a way to make the command display its execution time? Or is there another way to get that information?

As far as I understand, the hadoop fs command does not provide any debug information such as execution time, but you can get the execution time in two ways:

  1. The Bash way: start=$(date +'%s') && hadoop fs -put visit-sequences.csv /user/hadoop/temp && echo "It took $(($(date +'%s') - $start)) seconds" (see the expanded sketch after the log excerpt below)

  2. From the log files: you can check the namenode log file, which lists all the details related to the executed command, such as the time taken, file size, replication, and so on.

For example, I ran the command hadoop fs -put visit-sequences.csv /user/hadoop/temp and found the entries below, specific to the put operation, in the namenode log file.

2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 38
2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 75 
2016-04-04 20:30:00,118 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 95 
2016-04-04 20:30:00,120 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /data/misc/hadoop/store/hdfs/namenode/current/edits_inprogress_0000000000000000038 -> /data/misc/hadoop/store/hdfs/namenode/current/edits_0000000000000000038-0000000000000000039
2016-04-04 20:30:00,120 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 40
2016-04-04 20:30:01,781 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.06s at 15.63 KB/s
2016-04-04 20:30:01,781 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000039 size 1177 bytes.
2016-04-04 20:30:01,830 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 0
2016-04-04 20:30:56,252 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1b928386-65b9-4438-a781-b154cdb9a579:NORMAL:127.0.0.1:50010|RBW]]} for /user/hadoop/temp/visit-sequences.csv._COPYING_
2016-04-04 20:30:56,532 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1073741829_1005{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1b928386-65b9-4438-a781-b154cdb9a579:NORMAL:127.0.0.1:50010|RBW]]} is not COMPLETE (ucState = COMMITTED, replication# = 0 <  minimum = 1) in file /user/hadoop/temp/visit-sequences.csv._COPYING_
2016-04-04 20:30:56,533 INFO org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream: Nothing to flush
2016-04-04 20:30:56,548 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_1073741829_1005{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1b928386-65b9-4438-a781-b154cdb9a579:NORMAL:127.0.0.1:50010|RBW]]} size 742875
2016-04-04 20:30:56,957 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hadoop/temp/visit-sequences.csv._COPYING_ is closed by DFSClient_NONMAPREDUCE_1242172231_1    
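If you need the timing more than once, a small wrapper script can be handier than the one-liner. This is only a rough sketch of approach 1, assuming a bash shell; the file name visit-sequences.csv and the HDFS path /user/hadoop/temp are just the example values from this answer.

#!/usr/bin/env bash
# Rough sketch: time an HDFS put from the shell.
# Run only ONE of the two variants below, otherwise the second put will fail
# because the file already exists in HDFS.

SRC="visit-sequences.csv"    # local file to upload (example value)
DEST="/user/hadoop/temp"     # HDFS destination directory (example value)

# Variant A: let bash's built-in `time` keyword report the elapsed time.
time hadoop fs -put "$SRC" "$DEST"

# Variant B: record timestamps yourself (same idea as the one-liner above,
# just easier to read and extend, e.g. for logging the duration somewhere).
start=$(date +%s)
hadoop fs -put "$SRC" "$DEST"
end=$(date +%s)
echo "put of $SRC took $((end - start)) seconds"

For approach 2, where the namenode log lives depends on your installation (commonly under the directory that HADOOP_LOG_DIR points to); grepping that log for the uploaded file name is usually enough to find entries like the ones shown above.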


 