hadoop fs -ls s3://bucket or s3a://bucket throws "No such file or directory" error
In a newly created EMR cluster, using:
hdfs dfs -ls s3://bucket
hadoop fs -ls s3://bucket
hadoop fs -ls s3a://
...all return the error:

ls: `s3://bucket': No such file or directory
Nothing is specified in core-site.xml. aws s3 ls can correctly list all buckets. Why does this happen?
By default, hadoop fs -ls shows the user home directory, which translates to /user/username.
When calling hadoop fs -ls s3://bucket, the S3 connector will try to find s3://bucket/user/hadoop (substitute the trailing hadoop with your username), which may not exist and will lead to the error.
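For example, on a default EMR cluster where the shell user is hadoop (bucket is a placeholder name), the two commands resolve as follows:

hadoop fs -ls                # resolves to the home directory /user/hadoop on the default filesystem
hadoop fs -ls s3://bucket    # resolves to s3://bucket/user/hadoop, which likely does not exist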
The error message is not clear, but it is different from ls-ing a non-existent bucket; in that case the error would be ls: Bucket bucket_name does not exist.
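To see the two failure modes side by side (both bucket names below are placeholders; existing-bucket exists but contains no user/hadoop key, while missing-bucket does not exist at all):

hadoop fs -ls s3://existing-bucket    # ls: `s3://existing-bucket': No such file or directory
hadoop fs -ls s3://missing-bucket     # ls: Bucket missing-bucket does not exist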
To avoid this, add a / after the bucket name.
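A minimal check (bucket is a placeholder name):

hadoop fs -ls s3://bucket/    # trailing slash: lists the bucket root instead of looking for /user/hadoop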
To debug this:
export HADOOP_ROOT_LOGGER=DEBUG,console
To turn off the debug log:
export HADOOP_ROOT_LOGGER=WARN,console
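Putting the steps together, a debugging session might look like this (bucket is a placeholder; the DEBUG output should reveal the actual s3://bucket/user/hadoop path the connector requests):

export HADOOP_ROOT_LOGGER=DEBUG,console
hadoop fs -ls s3://bucket                  # reproduce the error with verbose logging
export HADOOP_ROOT_LOGGER=WARN,console     # turn the debug log back off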