
NoRouteToHostException while hadoop fs -copyFromLocal

I installed Hadoop 2.5.1 on CentOS 7.0,

and I'm using 3 computers with the hosts file below, which is the same on all 3 computers.

I'm not using DNS.

XXX.XXX.XXX.65 mccb-com65 #server

XXX.XXX.XXX.66 mccb-com66 #client01

XXX.XXX.XXX.67 mccb-com67 #client02

127.0.0.1 localhost

127.0.1.1 mccb-com65

I execute the command

$hadoop fs -copyFromLocal /home/hadoop/hdfs/hdfs/s_corpus.txt hdfs://XXX.XXX.XXX.65:9000/tmp/

and I get the error message below:

INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
15/02/27 16:57:40 INFO hdfs.DFSClient: Abandoning BP-1257634566-XXX.XXX.XXX.65-1425014347197:blk_1073741837_1013
15/02/27 16:57:40 INFO hdfs.DFSClient: Excluding datanode XXX.XXX.XXX.67:50010   <-- the same for the other slave node XXX.XXX.XXX.66

I turned off the firewall on both computers mccb-com66 and mccb-com67, as the status below shows.

$systemctl status iptables

iptables.service - IPv4 firewall with iptables
   Loaded: loaded (/usr/lib/systemd/system/iptables.service; disabled)
   Active: inactive (dead)

Additionally, I also turned off SELinux.
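A minimal sketch of how SELinux is commonly turned off on CentOS 7 (the exact commands are an assumption on my side; the question only states that it was disabled):

$setenforce 0                                                    # switch to permissive mode for the current session
$sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # persist the setting across reboots
$getenforce                                                      # verify the current mode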

The datanode and nodemanager are alive on both machines; I can check their state with jps and at http://mccb-com65:50070 and http://mccb-com65:8088
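For reference, a sketch of that check on each slave node (the process IDs shown here are only illustrative):

$jps
2345 DataNode
2412 NodeManager
2598 Jps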

What am I missing?

Could anybody help me?

Even though I turned off iptables, that was not a valid solution.

After I opened the ports one by one with firewall-cmd, it worked.

For all slaves (66 and 67):

$firewall-cmd --zone=public --add-port=8042/tcp
$firewall-cmd --zone=public --add-port=50010/tcp
$firewall-cmd --zone=public --add-port=50020/tcp
$firewall-cmd --zone=public --add-port=50075/tcp
$firewall-cmd --reload
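Note that adding ports without --permanent changes only the runtime firewalld configuration; to keep the ports open across reloads and reboots, the --permanent flag is typically added and the configuration reloaded afterwards (a sketch, not part of the original answer):

$firewall-cmd --zone=public --permanent --add-port=8042/tcp
$firewall-cmd --zone=public --permanent --add-port=50010/tcp
$firewall-cmd --zone=public --permanent --add-port=50020/tcp
$firewall-cmd --zone=public --permanent --add-port=50075/tcp
$firewall-cmd --reload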

and then it works.

However, since I cannot open every port that a Hadoop application needs, turning off firewalld is reasonable, for example:

$systemctl stop firewalld
$systemctl disable firewalld

and check the status

$systemctl status firewalld

Your /etc/hosts should contain:

XXX.XXX.XXX.65 mccb-com65 #server

XXX.XXX.XXX.66 mccb-com66 #client01

XXX.XXX.XXX.67 mccb-com67 #client02

Remove

127.0.0.1 localhost

127.0.1.1 mccb-com65
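After fixing /etc/hosts and the firewall, a quick way to confirm that the datanode port is reachable from the client (an illustrative check, assuming the telnet client is installed; a successful connection means there is a route and the port is open) is:

$telnet mccb-com66 50010
$telnet mccb-com67 50010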
