
NoRouteToHostException while hadoop fs -copyFromLocal

I installed Hadoop 2.5.1 on CentOS 7.0.

I'm using 3 computers with the hosts file below, which is identical on all 3 machines.

I'm not using DNS.

XXX.XXX.XXX.65 mccb-com65 #server

XXX.XXX.XXX.66 mccb-com66 #client01

XXX.XXX.XXX.67 mccb-com67 #client02

127.0.0.1 localhost

127.0.1.1 mccb-com65

I execute the command

$hadoop fs -copyFromLocal /home/hadoop/hdfs/hdfs/s_corpus.txt hdfs://XXX.XXX.XXX.65:9000/tmp/

I get the following error message:

INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
15/02/27 16:57:40 INFO hdfs.DFSClient: Abandoning BP-1257634566-XXX.XXX.XXX.65-1425014347197:blk_1073741837_1013
15/02/27 16:57:40 INFO hdfs.DFSClient: Excluding datanode XXX.XXX.XXX.67:50010   <-- the same happens for the other slave node XXX.XXX.XXX.66
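A quick way to confirm that this is a plain network-reachability problem rather than an HDFS problem is to probe the DataNode port directly from the client machine (assuming the telnet client is installed):

$telnet XXX.XXX.XXX.67 50010

If that also fails with "No route to host", the block write failure above is just the same connection error surfacing through the DFS client.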

I turned off the firewall on both mccb-com66 and mccb-com67, as the status below shows.

$systemctl status iptables

iptables.service - IPv4 firewall with iptables
   Loaded: loaded (/usr/lib/systemd/system/iptables.service; disabled)
Active: inactive (dead)

Additionally, I also turned off SELinux.

The DataNode and NodeManager are alive on both machines; I can check their state with jps and at http://mccb-com65:50070 and http://mccb-com65:8088.
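For reference, jps on each slave lists the slave-side daemons, roughly like this (the PIDs will of course differ):

$jps
3472 DataNode
3591 NodeManager
3720 Jps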

What am I missing?

Could anybody help me?

Even though I turned off iptables, that was not a valid solution: on CentOS 7 the active firewall is firewalld, not the iptables service, so stopping iptables did not actually open any ports.

After I opened the required ports one by one with firewall-cmd, it works.

For all slaves (66 and 67):

$firewall-cmd --zone=public --permanent --add-port=8042/tcp
$firewall-cmd --zone=public --permanent --add-port=50010/tcp
$firewall-cmd --zone=public --permanent --add-port=50020/tcp
$firewall-cmd --zone=public --permanent --add-port=50075/tcp
$firewall-cmd --reload
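To double-check which ports are actually open after the reload:

$firewall-cmd --zone=public --list-ports

It should now list 8042/tcp 50010/tcp 50020/tcp 50075/tcp.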

and then it works.

However, since I cannot open every single port that the Hadoop applications need, turning off firewalld altogether is reasonable:

$systemctl stop firewalld
$systemctl disable firewalld

and check the status

$systemctl status firewalld

Your /etc/hosts should contain:

XXX.XXX.XXX.65 mccb-com65 #server

XXX.XXX.XXX.66 mccb-com66 #client01

XXX.XXX.XXX.67 mccb-com67 #client02

Remove:

127.0.0.1 localhost

127.0.1.1 mccb-com65
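You can verify the change with getent; the hostname should now resolve to the machine's real address (your XXX.XXX.XXX.65) instead of 127.0.1.1:

$getent hosts mccb-com65
XXX.XXX.XXX.65   mccb-com65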
