
hadoop hdfs points to file:/// not hdfs://

So I installed Hadoop via Cloudera Manager cdh3u5 on CentOS 5. When I run the command

hadoop fs -ls /

I expected to see the contents of hdfs://localhost.localdomain:8020/

However, it returned the contents of file:///

That said, I can still access HDFS explicitly through

hadoop fs -ls hdfs://localhost.localdomain:8020/

But when it came to installing other applications such as Accumulo, Accumulo would automatically detect the Hadoop filesystem as file:///

The question is: has anyone run into this issue, and how did you resolve it?

I had a look at HDFS thrift server returns content of local FS, not HDFS, which describes a similar issue, but it did not solve this one. Also, I do not get this issue with Cloudera Manager cdh4.

By default, Hadoop uses local mode. You probably need to set fs.default.name to hdfs://localhost.localdomain:8020/ in $HADOOP_HOME/conf/core-site.xml.

To do this, add the following to core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost.localdomain:8020/</value>
</property>
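
After updating core-site.xml, a quick sanity check is to run the listing from the question both ways; once the property takes effect, the two commands should return the same HDFS contents:

hadoop fs -ls /
hadoop fs -ls hdfs://localhost.localdomain:8020/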

The reason Accumulo is confused is that it uses the same default configuration to figure out where HDFS is... and it's defaulting to file:///
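
As a sketch of how to point Accumulo at the corrected configuration during install (assuming /etc/hadoop/conf is your client-config directory, which is typical for CDH; verify the path on your install):

# Assumption: /etc/hadoop/conf holds the corrected core-site.xml (typical for CDH)
export HADOOP_CONF_DIR=/etc/hadoop/conf
# Run Accumulo's initialization so it detects HDFS instead of file:///
accumulo init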

We should specify the DataNode data directory and the NameNode metadata directory:

dfs.name.dir / dfs.namenode.name.dir (NameNode metadata)

dfs.data.dir / dfs.datanode.data.dir (DataNode data)

These dfs.* properties belong in hdfs-site.xml (dfs.name.dir and dfs.data.dir are the older names, dfs.namenode.name.dir and dfs.datanode.data.dir their newer equivalents), while fs.default.name belongs in core-site.xml. After setting them, format the NameNode; a sketch of both files follows.
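
A minimal sketch of the two files, with hypothetical storage paths /data/dfs/nn and /data/dfs/dn standing in for whatever directories you actually use:

core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost.localdomain:8020/</value>
</property>

hdfs-site.xml:

<property>
  <name>dfs.name.dir</name>
  <value>/data/dfs/nn</value> <!-- hypothetical NameNode metadata directory -->
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/data/dfs/dn</value> <!-- hypothetical DataNode data directory -->
</property>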

To format the HDFS NameNode:

hadoop namenode -format

Enter 'Yes' to confirm formatting the NameNode. Then restart the HDFS service and deploy the client configuration to access HDFS.
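
Once the service is back up, a quick check that clients are actually talking to HDFS (and not the local filesystem) is the standard dfsadmin report:

hadoop dfsadmin -report

It should print the configured capacity and the list of live DataNodes; if the client were still resolving to file:///, this command would error out rather than produce a report.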

If you have already done the above steps, ensure the client configuration is deployed correctly and points to the actual cluster endpoints.
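
One direct way to verify what the deployed client configuration says (again assuming the usual CDH client path /etc/hadoop/conf) is to inspect core-site.xml itself:

# Assumption: /etc/hadoop/conf is where the client configuration was deployed
grep -A 1 'fs.default.name' /etc/hadoop/conf/core-site.xml

The <value> on the following line should be the hdfs:// NameNode address, not file:///.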
