hadoop fs commands are showing the local filesystem, not HDFS
I installed Hadoop on several laptops in order to form a Hadoop cluster. First we installed in pseudo-distributed mode, and on all but one everything was perfect (i.e., all the services run, and when I test with `hadoop fs` it shows the HDFS). On the aforementioned laptop (the one with problems), the `hadoop fs -ls` command shows the contents of the local directory, not HDFS; the same happens with the `-cat`, `-mkdir`, and `-put` commands. What could I be doing wrong?
Any help would be appreciated.
Here is my core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hduser/hdfs_dir/tmp</value>
    <description></description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>.</description>
  </property>
</configuration>
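As a quick sanity check (my own sketch, not a Hadoop tool), you can pull the fs.default.name value out of core-site.xml with grep to confirm what the client should be using. A sample file is written here so the snippet is self-contained; in practice, point the grep at your real core-site.xml under your Hadoop conf directory.

```shell
# Write a sample core-site.xml so this snippet is self-contained;
# replace the filename with your real conf file in practice.
cat > core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>
EOF
# Print the configured default filesystem URI.
grep -A1 '<name>fs.default.name</name>' core-site.xml | grep -o 'hdfs://[^<]*'
```

If this prints nothing, the property is missing from the file the client is actually reading.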
I must say that this is the same file on all the other laptops, and they work fine.
I had the same problem, and I had to make sure the value of fs.default.name included a trailing / to refer to the path component:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310/</value>
  <description>.</description>
</property>
Check that fs.default.name in core-site.xml points to the correct namenode, e.g.:
<property>
  <name>fs.default.name</name>
  <value>hdfs://target-namenode:54310</value>
</property>
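The symptom in the question matches the client silently falling back to Hadoop's built-in default, file:/// (the local filesystem), when fs.default.name is absent from the config file it actually reads. A small stand-alone illustration, using sample files as stand-ins for a working and a broken machine:

```shell
# Stand-in config fragments for two machines: one with fs.default.name
# set, one where the property is missing (so the client falls back to
# the core-default.xml value, file:///, i.e. the local filesystem).
cat > good-core-site.xml <<'EOF'
<property><name>fs.default.name</name><value>hdfs://localhost:54310</value></property>
EOF
cat > bad-core-site.xml <<'EOF'
<!-- fs.default.name not set here -->
EOF
for f in good-core-site.xml bad-core-site.xml; do
  value=$(grep -o 'hdfs://[^<]*' "$f" || true)
  echo "$f -> ${value:-file:/// (local, the built-in default)}"
done
```

The broken laptop's `hadoop fs -ls` showing the local directory is exactly this fallback in action.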
If fs.default.name in core-site.xml points to hdfs://localhost:54310/ (with or without the trailing /) and you still have the same problem, then you might be looking at the wrong config file. In my case it is Cloudera's CDH4; check the symbolic links:
ls -l /etc/hadoop/conf
** /etc/hadoop/conf -> /etc/alternatives/hadoop-conf
ls -l /etc/alternatives/hadoop-conf
** /etc/alternatives/hadoop-conf -> /etc/hadoop/conf.cloudera.yarn1
Earlier I used MRv1 and migrated to MRv2 (YARN), and the symlinks were broken after the upgrade:
ls -l /etc/hadoop/conf
** /etc/hadoop/conf -> /etc/alternatives/hadoop-conf
ls -l /etc/alternatives/hadoop-conf
** /etc/alternatives/hadoop-conf -> /etc/hadoop/conf.cloudera.mapreduce1
ls -l /etc/hadoop/conf.cloudera.mapreduce1
ls: cannot access /etc/hadoop/conf.cloudera.mapreduce1: No such file or directory
Also, update-alternatives had been run to give high priority to the /etc/hadoop/conf.cloudera.mapreduce1 path:

alternatives --display hadoop-conf
hadoop-conf - status is manual.
 link currently points to /etc/hadoop/conf.cloudera.mapreduce1
/etc/hadoop/conf.cloudera.hdfs1 - priority 90
/etc/hadoop/conf.cloudera.mapreduce1 - priority 92
/etc/hadoop/conf.empty - priority 10
/etc/hadoop/conf.cloudera.yarn1 - priority 91
Current `best' version is /etc/hadoop/conf.cloudera.mapreduce1.

To remove the old link, which has the highest priority, do:
update-alternatives --remove hadoop-conf /etc/hadoop/conf.cloudera.mapreduce1
rm -f /etc/alternatives/hadoop-conf
ln -s /etc/hadoop/conf.cloudera.yarn1 /etc/alternatives/hadoop-conf
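The relink step above can be rehearsed safely before touching /etc/hadoop. The sketch below uses stand-in paths in a temp directory (not the real alternatives system) to reproduce the dangling-link symptom and the `rm`/`ln -s` fix:

```shell
# Rehearse the symlink repair with stand-in paths in a temp directory.
tmp=$(mktemp -d)
mkdir "$tmp/conf.cloudera.yarn1"
# Simulate the broken state: the link points at a directory that the
# MRv1->MRv2 upgrade removed, so it dangles.
ln -s "$tmp/conf.cloudera.mapreduce1" "$tmp/hadoop-conf"
[ -e "$tmp/hadoop-conf" ] || echo "dangling link -> $(readlink "$tmp/hadoop-conf")"
# The fix: remove the stale link and repoint it at the YARN config dir.
rm -f "$tmp/hadoop-conf"
ln -s "$tmp/conf.cloudera.yarn1" "$tmp/hadoop-conf"
readlink "$tmp/hadoop-conf"   # now resolves to .../conf.cloudera.yarn1
```

`ln -sfn` would do the remove-and-relink in one step, but the two-command form mirrors the fix shown above.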