Mounting of HDFS to local directory failing
I'm currently trying to mount HDFS to a local directory on an Ubuntu machine. I'm using the hadoop-fuse-dfs package.
So, I'm executing the command below:
ubuntu@dev:~$ hadoop-fuse-dfs dfs://localhost:8020 /mnt/hdfs
Output
INFO /var/lib/jenkins/workspace/generic-package-ubuntu64-12-04/CDH4.5.0-Packaging-Hadoop-2013-11-20_14-31-53/hadoop-2.0.0+1518-1.cdh4.5.0.p0.24~precise/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_options.c:164 Adding FUSE arg /mnt/hdfs
But when I try to access the mounted HDFS locally, I see the error message below (please check the attached snapshot):
ls: cannot access /mnt/hdfs: No such file or directory
total 4.0K
d????????? ? ? ? ? ? hdfs
PS: I've already executed the following commands, but I still get the same output.
$ sudo adduser ubuntu fuse
$ sudo addgroup ubuntu fuse
Am I missing something? Please suggest a workaround.
This happens at least when hadoop-fuse-dfs cannot connect to the NameNode's filesystem metadata service (running by default on port 8020), e.g. due to network configuration issues.
You can test from your host that the connection works before running hadoop-fuse-dfs, e.g. with:

telnet your-name-node 8020
GET /
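If telnet isn't available, the same reachability check can be scripted with bash's built-in /dev/tcp pseudo-device; `your-name-node` below is a placeholder for your actual NameNode host, as in the telnet example:

```shell
# Return success if a TCP connection to host $1, port $2 opens within 3 seconds.
check_port() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if check_port your-name-node 8020; then
    echo "NameNode port reachable"
else
    echo "cannot reach NameNode on port 8020"
fi
```

If this fails, fix DNS/firewall/NameNode configuration before retrying the mount.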
You need to use the hostname instead of localhost. I faced the same issue; after changing localhost to the hostname, which is also defined in the hosts file, it got fixed.
hadoop-fuse-dfs dfs://{hostname}:8020 /mnt/hdfs
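A minimal sketch of that fix, assuming the machine's hostname resolves via /etc/hosts (the mount point /mnt/hdfs is taken from the question):

```shell
# Build the mount URI from the machine's hostname instead of localhost.
# The hostname should also resolve locally, e.g. via an /etc/hosts entry like:
#   127.0.1.1   dev
HDFS_HOST="$(hostname)"
MOUNT_URI="dfs://${HDFS_HOST}:8020"
echo "$MOUNT_URI"

# Then mount (requires the hadoop-fuse-dfs package):
#   sudo mkdir -p /mnt/hdfs
#   hadoop-fuse-dfs "$MOUNT_URI" /mnt/hdfs
```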
According to Cloudera:
In an HA deployment, use the HDFS nameservice instead of the NameNode URI; that is, use the value of dfs.nameservices in hdfs-site.xml.
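As a sketch, the nameservice ID can be read out of hdfs-site.xml and used in the mount URI. The config written below is a made-up example (nameservice `mycluster`), not your cluster's actual settings:

```shell
# Write a sample hdfs-site.xml for illustration; on a real cluster you would
# read /etc/hadoop/conf/hdfs-site.xml instead.
cat > /tmp/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
</configuration>
EOF

# Pull out the value that follows the dfs.nameservices property name.
NAMESERVICE=$(grep -A1 '<name>dfs.nameservices</name>' /tmp/hdfs-site.xml |
    sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "dfs://${NAMESERVICE}"

# In an HA deployment the mount command would then use the nameservice
# (no host:port), letting HDFS resolve the active NameNode:
#   hadoop-fuse-dfs "dfs://${NAMESERVICE}" /mnt/hdfs
```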