
Can distcp be used to copy a directory of files from S3 to HDFS?

I am wondering if hadoop distcp can be used to copy multiple files at once from S3 to HDFS. It appears to only work for individual files with absolute paths. I would like to copy either an entire directory or use a wildcard.

See: Hadoop DistCp using wildcards?

I am aware of s3distcp, but I would prefer to use distcp for simplicity's sake.

Here was my attempt at copying a directory from S3 to HDFS:

[root@ip-10-147-167-56 ~]# /root/ephemeral-hdfs/bin/hadoop distcp s3n://<key>:<secret>@mybucket/dir hdfs:///input/
13/05/23 19:58:27 INFO tools.DistCp: srcPaths=[s3n://<key>:<secret>@mybucket/dir]
13/05/23 19:58:27 INFO tools.DistCp: destPath=hdfs:/input
13/05/23 19:58:29 INFO tools.DistCp: sourcePathsCount=4
13/05/23 19:58:29 INFO tools.DistCp: filesToCopyCount=3
13/05/23 19:58:29 INFO tools.DistCp: bytesToCopyCount=87.0
13/05/23 19:58:29 INFO mapred.JobClient: Running job: job_201305231521_0005
13/05/23 19:58:30 INFO mapred.JobClient:  map 0% reduce 0%
13/05/23 19:58:45 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_0, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:58:55 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_1, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:59:04 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_2, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:59:18 INFO mapred.JobClient: Job complete: job_201305231521_0005
13/05/23 19:59:18 INFO mapred.JobClient: Counters: 6
13/05/23 19:59:18 INFO mapred.JobClient:   Job Counters 
13/05/23 19:59:18 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=38319
13/05/23 19:59:18 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/05/23 19:59:18 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/05/23 19:59:18 INFO mapred.JobClient:     Launched map tasks=4
13/05/23 19:59:18 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/05/23 19:59:18 INFO mapred.JobClient:     Failed map tasks=1
13/05/23 19:59:18 INFO mapred.JobClient: Job Failed: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201305231521_0005_m_000000
With failures, global counters are inaccurate; consider running with -i
Copy failed: java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
    at org.apache.hadoop.tools.DistCp.copy(DistCp.java:667)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)

You cannot use wildcards in s3n:// addresses.
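Since the wildcard cannot appear in the s3n:// URI itself, one workaround is to expand the pattern yourself and hand DistCp an explicit source list via the -f option documented for legacy DistCp. A hedged sketch, where the grep pattern and the /tmp/srclist path are illustrative placeholders, not anything from the original run:

hadoop fs -ls s3n://<key>:<secret>@mybucket/dir/ | awk '{print $NF}' | grep 'part-' > srclist.txt   # expand the "wildcard" locally
hadoop fs -put srclist.txt /tmp/srclist        # the list must live at a URI the job can read
hadoop distcp -f hdfs:///tmp/srclist hdfs:///input/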

However, it is possible to copy an entire directory from S3 to HDFS. The reason for the null pointer exceptions in this case was that the HDFS destination folder already existed.

Fix: delete the HDFS destination folder: ./hadoop fs -rmr /input/
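Putting it together, the working sequence looks like this (a sketch using the same paths as the failed run above; DistCp creates the destination itself):

/root/ephemeral-hdfs/bin/hadoop fs -rmr /input/                                           # remove the pre-existing destination
/root/ephemeral-hdfs/bin/hadoop distcp s3n://<key>:<secret>@mybucket/dir hdfs:///input/   # rerun the directory copy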

Note 1: I also tried passing -update and -overwrite, but I still got the NPE.

Note 2: https://hadoop.apache.org/docs/r1.2.1/distcp.html shows how to copy multiple explicit files.
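For reference, a sketch of that multi-source form adapted to S3; the file names here are hypothetical, but DistCp accepts any number of source URIs before the destination:

hadoop distcp s3n://<key>:<secret>@mybucket/dir/a.txt \
              s3n://<key>:<secret>@mybucket/dir/b.txt \
              hdfs:///input/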
