Cloudera: exception when uploading a file to HDFS

I am using Mac OS X Yosemite with the VM cloudera-quickstart-vm-5.4.2-0-virtualbox. When I type "hdfs dfs -put testfile.txt" to put a text file into HDFS, I get a DataStreamer exception. The main problem seems to be that the number of available datanodes is zero. I have copied the complete error message below; how can I solve this?

> [cloudera@quickstart ~]$ hdfs dfs -put testfile.txt
> 15/10/18 03:51:51 WARN hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/cloudera/testfile.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1541)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3286)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:667)
>     at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:212)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:483)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>     at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1544)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:600)
> put: File /user/cloudera/testfile.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
> [cloudera@quickstart ~]$
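
The key line is "There are 0 datanode(s) running": the NameNode has no live DataNodes registered, so HDFS cannot place even a single replica of the block. You can confirm this before starting (and re-check after the fix); this assumes the standard QuickStart VM setup where hdfs is the HDFS superuser:

    $ sudo -u hdfs hdfs dfsadmin -report
    # with no DataNode running, the report shows zero live/available datanodes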

1. Stop the Hadoop services, as described in Stopping Services:

for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x stop ; done
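
The glob picks up every Hadoop init script installed on the VM. To preview which services the loop will touch (the exact list depends on the installed CDH daemons):

ls /etc/init.d/hadoop*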

2. Remove the HDFS storage directory /var/lib/hadoop-hdfs/cache/ and everything under it:

sudo rm -r /var/lib/hadoop-hdfs/cache/

3. Format the NameNode

sudo -u hdfs hdfs namenode -format

Note: answer the confirmation prompt with a capital Y.

Note: all data stored in HDFS is lost when the NameNode is formatted.

4. Start Hadoop Services

for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x start ; done
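
To verify that the daemons actually came up (jps ships with the JDK; the process names are the standard HDFS daemons):

sudo jps                                # should list NameNode, DataNode and SecondaryNameNode
sudo -u hdfs hdfs dfsadmin -report      # should now report at least one live datanode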

5. Ensure that your system is not running low on disk space: the NameNode will not select a DataNode with insufficient free space as a replication target, which produces the same "could only be replicated to 0 nodes" error. A WARNING about low disk space in the log files also confirms this as the cause; a quick check is shown below.
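
A quick check with plain coreutils (the path assumes the default QuickStart VM layout):

    $ df -h /
    $ df -h /var/lib/hadoop-hdfs
    # watch the Use% column; values near 100% keep DataNodes from being chosen as targets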

6. Create the /tmp Directory

Remove the old /tmp if it exists:
    $ sudo -u hdfs hadoop fs -rm -r /tmp

Create a new /tmp directory and set permissions:

    $ sudo -u hdfs hadoop fs -mkdir /tmp 
    $ sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
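
Optionally confirm that the sticky bit took effect; mode 1777 should display as drwxrwxrwt:

    $ sudo -u hdfs hadoop fs -ls -d /tmp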

7. Create User Directories:

$ sudo -u hdfs hadoop fs -mkdir /user/<user> 
$ sudo -u hdfs hadoop fs -chown <user> /user/<user>

where <user> is your Linux username (cloudera on the QuickStart VM, as seen in the prompt above).
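
For example, on the QuickStart VM with its default cloudera user, step 7 plus a retry of the original upload would look like this (a sketch; adjust the username if yours differs):

$ sudo -u hdfs hadoop fs -mkdir /user/cloudera
$ sudo -u hdfs hadoop fs -chown cloudera /user/cloudera
$ hdfs dfs -put testfile.txt
$ hdfs dfs -ls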
