Hadoop NFS mount issue

I'm trying to mount an NFS share from Windows Server 2012 onto my Hadoop cluster (running Hadoop 2.7.3) so that MapReduce jobs can run on files uploaded to the Windows server. The cluster runs on eight Raspberry Pi 2 boards, and I've already gone through the configuration on the Hadoop wiki.
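
For reference, the Linux-side mount would look roughly like this (a sketch; "winserver" and "/share" are placeholders for the actual Windows host and export name, and Windows Server 2012 needs the "Server for NFS" role enabled):

# Mount the Windows NFS export on a cluster node.
# "winserver" and "/share" are hypothetical placeholders.
sudo mkdir -p /hdfs-nfs
sudo mount -t nfs winserver:/share /hdfs-nfs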

I've tried mounting the NFS share onto the HDFS DataNode directory (/hdfs/tmp/datanode) on the master, but the files aren't accessible on the namenodes.

Am I mounting it in the wrong place?

Never mind: I've changed my solution so that the NFS share is mounted on the NameNode, and a script started from the crontab puts files onto HDFS as they arrive. (Mounting under the DataNode directory couldn't work, since that directory holds HDFS block data managed by the DataNode itself; files placed there directly are never registered with the filesystem.)

#!/bin/bash
# Watch the NFS mount for new files and copy each one into the root
# of HDFS. Requires inotify-tools. Watching the directory itself
# (rather than a glob) also catches files created at the top level;
# -e close_write may be safer than create for files still being written.
inotifywait -mr -e create --format '%w%f' /hdfs-nfs | while read -r NEWFILE; do
    echo "${NEWFILE} detected"
    hadoop fs -put "${NEWFILE}" /
done
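
Because the loop runs continuously rather than on a schedule, one way to keep it alive across reboots (a sketch; /usr/local/bin/hdfs-watch.sh is a hypothetical name for the script above) is a crontab @reboot entry:

# Hypothetical crontab line (added via `crontab -e`); assumes the
# watcher loop above is saved as /usr/local/bin/hdfs-watch.sh and
# is executable.
@reboot /usr/local/bin/hdfs-watch.sh >> /var/log/hdfs-watch.log 2>&1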
