Why is my datanode running in the Hadoop cluster, but I still can't put a file into HDFS?
When I run jps on the namenode:
stillily@localhost:~$ jps
3669 SecondaryNameNode
3830 ResourceManager
3447 NameNode
4362 Jps
When I run jps on the datanode:
stillily@localhost:~$ jps
3574 Jps
3417 NodeManager
3292 DataNode
But when I put a file:
stillily@localhost:~$ hadoop fs -put txt hdfs://hadoop:9000/txt
15/07/21 22:08:32 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
at
.......
put: File /txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
I also noticed that there is no VERSION file on the datanode machine, yet a VERSION file gets created no matter how many times I run "hadoop namenode -format".
BTW, this is on Ubuntu.
Update: I found the cause. The virtual machine's IP address had changed, and I had updated /etc/hosts on the namenode but not on the datanode.
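For anyone hitting the same error: the fix amounts to keeping /etc/hosts consistent on every node and restarting the datanode so it can re-register with the namenode. A sketch of the steps, assuming a Hadoop 2.x install; the IP addresses and the "datanode1" hostname below are placeholders, not my actual values (only "hadoop:9000" comes from the put command above):

```shell
# /etc/hosts must carry the SAME entries on the namenode and every datanode:
#   192.168.1.10  hadoop       # namenode; must match fs.defaultFS (hdfs://hadoop:9000)
#   192.168.1.11  datanode1    # placeholder datanode hostname

# After fixing /etc/hosts on the datanode, restart it:
hadoop-daemon.sh stop datanode
hadoop-daemon.sh start datanode

# Then verify on the namenode that it registered;
# "Live datanodes" should no longer be 0:
hdfs dfsadmin -report
```

Once the report shows a live datanode, the -put command succeeds, because the "0 datanode(s) running" error simply means no datanode had registered with the namenode.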