
Why does my datanode run on the Hadoop cluster, but I still can't put files into HDFS?

When I run jps on the namenode:

stillily@localhost:~$ jps
3669 SecondaryNameNode
3830 ResourceManager
3447 NameNode
4362 Jps

When I run jps on the datanode:

stillily@localhost:~$ jps
3574 Jps
3417 NodeManager
3292 DataNode

But when I try to put a file:

stillily@localhost:~$ hadoop fs  -put txt hdfs://hadoop:9000/txt
15/07/21 22:08:32 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
    at .......
put: File /txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

I also notice that there is no "VERSION" file on the datanode machine, and no matter how many times I run "hadoop namenode -format", the VERSION file only gets created on the namenode.

BTW, this is on Ubuntu.

Update: now I know the reason. The VM's IP address had changed, and I had updated /etc/hosts on the namenode but not on the datanode.
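Mismatched /etc/hosts entries like this are easy to miss, since each node resolves the cluster hostname on its own. A quick sanity check is to compare the hostname-to-IP mappings from each node's hosts file. The sketch below is illustrative only: the hostname "hadoop" and the IP addresses are hypothetical placeholders, not values from the actual cluster.

```python
def parse_hosts(text):
    """Map hostname -> IP from /etc/hosts-style content."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            mapping[name] = ip
    return mapping

def diff_hosts(a, b):
    """Hostnames whose IPs differ between two parsed hosts files."""
    return {h: (a[h], b[h]) for h in a.keys() & b.keys() if a[h] != b[h]}

# Hypothetical contents of /etc/hosts on each node after the VM's IP changed:
namenode_hosts = parse_hosts("192.168.1.101 hadoop\n127.0.0.1 localhost")
datanode_hosts = parse_hosts("192.168.1.100 hadoop\n127.0.0.1 localhost")

print(diff_hosts(namenode_hosts, datanode_hosts))
# {'hadoop': ('192.168.1.101', '192.168.1.100')}
```

Any hostname reported here resolves differently on the two machines; fixing those entries on every node and restarting HDFS lets the datanode re-register with the namenode.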
