
How to mount HDFS on Cloudera?

I am working on a cluster running Cloudera 5.3, and I've followed all the instructions to create an NFS gateway and it's running fine. My problem is that I still can't see the HDFS directories as part of the Linux file system (this is RHEL 6). I'm not a UNIX admin so I have no experience mounting directories, and the documentation I'm finding online isn't helping with this specific problem. I've tried the simple

mount /

on the machine that is the NFS gateway, but that didn't work. From another cluster machine I then tried to mount using

mount <myNFSgateway>:/ /

but I couldn't see any of the files on the gateway server or in HDFS (though I can easily see the files using hdfs dfs -ls).

How do I actually mount HDFS as a directory now that NFS is set up?

Try the command below to check the mount points the gateway is exporting:

showmount -e <nfs_server_ip_address>

You should see output similar to the following:

Export list for <nfs_server_ip_address>:
/ (everyone)
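If the root export does not appear, the gateway isn't serving yet. A small sketch that scripts this check (the hostname is a placeholder; substitute your own gateway):

```shell
#!/bin/sh
# Placeholder gateway address; replace with your NFS gateway host.
NFS_SERVER=nfsgw.example.com

# showmount -e prints "Export list for <host>:" followed by one export
# per line. The HDFS NFS gateway exports the HDFS root as "/", so look
# for a line beginning with "/".
if showmount -e "$NFS_SERVER" | grep -q '^/'; then
    echo "HDFS root is exported by $NFS_SERVER"
else
    echo "No root export found; check that the NFS gateway role is running" >&2
fi
```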

Mounting HDFS on an NFS client

To import the HDFS file system on an NFS client, use a mount command such as the following on the client:

 mount -t nfs -o vers=3,proto=tcp,nolock <nfs_server_hostname>:/ /hdfs_nfs_mount

(Before mounting, make sure the NFS client libraries are installed and that the mount point directory, /hdfs_nfs_mount above, exists. If the libraries are missing, install them with sudo yum install nfs-utils nfs-utils-lib.)
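Putting the steps together, here is a minimal end-to-end sketch; the gateway hostname and mount point are placeholders, and the commands assume a RHEL/CentOS client with sudo access:

```shell
#!/bin/sh
# Placeholders; substitute your gateway hostname and preferred mount point.
NFS_SERVER=nfsgw.example.com
MOUNT_POINT=/hdfs_nfs_mount

# Install the NFS client utilities if they are missing (RHEL/CentOS).
sudo yum install -y nfs-utils nfs-utils-lib

# The mount point must exist before mounting; never mount over "/".
sudo mkdir -p "$MOUNT_POINT"

# The HDFS NFS gateway supports NFSv3 over TCP, hence these options.
sudo mount -t nfs -o vers=3,proto=tcp,nolock "$NFS_SERVER:/" "$MOUNT_POINT"

# HDFS should now be browsable as ordinary directories.
ls "$MOUNT_POINT"
```

Note that mounting onto / (as in the attempts in the question) cannot work, because that would shadow the root file system; always mount onto a dedicated, empty directory.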
