
hadoop fs commands are showing the local filesystem, not HDFS

I installed Hadoop on several laptops in order to form a Hadoop cluster. First we installed it in pseudo-distributed mode, and on all but one of them everything was perfect (i.e. all the services run, and when I test with hadoop fs it shows HDFS). On the aforementioned laptop (the one with problems), the hadoop fs -ls command shows the contents of the local directory rather than HDFS, and the same happens with -cat, -mkdir and -put. What could I be doing wrong?

Any help would be appreciated.
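A quick way to confirm the symptom (a hedged sketch, not part of the original post): compare what hadoop fs lists against a plain ls of the same path. If the client has fallen back to the local filesystem, the two commands list the same entries; against a working HDFS they normally differ, since a fresh HDFS root is close to empty.

# Compare the hadoop fs view of / with the local filesystem view of /.
# Matching entries suggest hadoop fs is resolving file:/// instead of hdfs://.
hadoop fs -ls /
ls /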

Here is my core-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hduser/hdfs_dir/tmp</value>
    <description></description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>.</description>
  </property>
</configuration>

I must say that this is the same file as on all the other laptops, and they work fine.
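One thing worth checking (a hedged suggestion of my own, not from the original question): make the client print the filesystem URI it actually resolves. If it prints file:/// rather than hdfs://localhost:54310, the core-site.xml shown above is not the one being loaded.

# On Hadoop 2.x and later; prints the value the client resolves for the default filesystem.
# fs.default.name is the deprecated alias of fs.defaultFS, so either key should work here.
hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey fs.default.name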

I had the same problem, and I had to make sure the value of fs.default.name included a trailing / to refer to the path component:

<property>
 <name>fs.default.name</name>
 <value>hdfs://localhost:54310/</value>
 <description>.</description>
</property>

Check that fs.default.name in core-site.xml points to the correct namenode, for example:

<property>
     <name>fs.default.name</name>
     <value>hdfs://target-namenode:54310</value>
</property>
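A quick sanity check (my own hedged addition, with target-namenode standing in for whatever host the cluster actually uses): pass the full URI to hadoop fs explicitly. If this lists HDFS correctly while the bare hadoop fs -ls does not, the namenode itself is fine and the problem is purely in which configuration the client picks up.

# Bypass fs.default.name by naming the filesystem explicitly on the command line.
hadoop fs -ls hdfs://target-namenode:54310/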

If fs.default.name in core-site.xml points to hdfs://localhost:54310/, with or without the trailing /, and you still have the same problem, then you might be looking at the wrong config file. In my case it is Cloudera's CDH4; check the symbolic links:

ls -l /etc/hadoop/conf
/etc/hadoop/conf -> /etc/alternatives/hadoop-conf
ls -l /etc/alternatives/hadoop-conf
/etc/alternatives/hadoop-conf -> /etc/hadoop/conf.cloudera.yarn1

Earlier I used MRv1 and migrated to MRv2 (YARN), and the symlinks were broken after the upgrade:

ls -l /etc/hadoop/conf
/etc/hadoop/conf -> /etc/alternatives/hadoop-conf
ls -l /etc/alternatives/hadoop-conf
/etc/alternatives/hadoop-conf -> /etc/hadoop/conf.cloudera.mapreduce1
ls -l /etc/hadoop/conf.cloudera.mapreduce1
ls: cannot access /etc/hadoop/conf.cloudera.mapreduce1: No such file or directory

Also, update-alternatives had been run so that the /etc/hadoop/conf.cloudera.mapreduce1 path had the highest priority:
alternatives --display hadoop-conf
hadoop-conf - status is manual.
link currently points to /etc/hadoop/conf.cloudera.mapreduce1
/etc/hadoop/conf.cloudera.hdfs1 - priority 90
/etc/hadoop/conf.cloudera.mapreduce1 - priority 92
/etc/hadoop/conf.empty - priority 10
/etc/hadoop/conf.cloudera.yarn1 - priority 91
Current `best' version is /etc/hadoop/conf.cloudera.mapreduce1.
To remove the old link, which has the highest priority, do:

update-alternatives --remove hadoop-conf /etc/hadoop/conf.cloudera.mapreduce1
rm -f /etc/alternatives/hadoop-conf
ln -s /etc/hadoop/conf.cloudera.yarn1 /etc/alternatives/hadoop-conf
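After re-pointing the alternatives link (a hedged follow-up of my own, assuming the YARN config directory is the one you want active), it is worth confirming that the shell now resolves HDFS:

# The conf link should now resolve to conf.cloudera.yarn1, and hadoop fs should list HDFS.
ls -l /etc/hadoop/conf
hadoop fs -ls /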
