
How to copy files into HDFS?

I am trying to start a Hadoop single-node cluster on my local machine. I have configured the following files according to https://amodernstory.com/2014/09/23/installing-hadoop-on-mac-osx-yosemite/ : hadoop-env.sh, core-site.xml, mapred-site.xml and hdfs-site.xml. When I run the script start-dfs.sh and then the command jps (right after running start-dfs.sh), I see that the datanode is up and running:

15735 Jps
15548 DataNode
15660 SecondaryNameNode
15453 NameNode

A few seconds later, I re-run the command jps and I see that the datanode is no longer running. Why? How do I resolve this?
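A useful first check when the DataNode dies right after startup is its log file. A rough sketch, assuming a default install where logs live under $HADOOP_HOME/logs (the exact file name includes your user and host name, so the glob may need adjusting):

# Show the tail of the DataNode log to see why it shut down.
# On single-node setups a common cause is an "Incompatible clusterIDs"
# error left over from an earlier NameNode format.
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log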

After that I run the script start-yarn.sh and then the command jps. I see this:

15955 NodeManager
16011 Jps
15660 SecondaryNameNode
15453 NameNode
15854 ResourceManager

My ultimate aim is to copy files into HDFS from my local filesystem. To do so, I run the command hdfs dfs -copyFromLocal /source-file-path/filename /destination-file-path/. I get the following error:

17/07/10 17:09:00 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /pay/txnlinking/redshift.yml._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2496)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1481)
    at org.apache.hadoop.ipc.Client.call(Client.java:1427)
    at org.apache.hadoop.ipc.Client.call(Client.java:1337)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
    at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1733)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1536)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:658)
copyFromLocal: File /pay/txnlinking/redshift.yml._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
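The "There are 0 datanode(s) running" part of the message matches the jps output above: with the DataNode gone, the NameNode has nowhere to place the block. One way to confirm what the NameNode currently sees (output format varies slightly between Hadoop versions):

# Report the number of live DataNodes; while the DataNode is down
# this shows 0, which is exactly why the write cannot be replicated.
hdfs dfsadmin -report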

How can I avoid the above error and copy files into HDFS?

PS: I created the destination path folders in HDFS explicitly before doing the copy.
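For reference, that step plus the copy amount to something like the following (the destination path is taken from the error above; the local source path is a placeholder):

# Create the destination directory, including parents, then copy the file in.
hdfs dfs -mkdir -p /pay/txnlinking
hdfs dfs -copyFromLocal /path/to/redshift.yml /pay/txnlinking/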

Do

hadoop namenode -format

then stop all services using

stop-all.sh

then restart all services using

start-all.sh

start-all.sh and stop-all.sh are deprecated; use start-dfs.sh and stop-dfs.sh instead.
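Spelled out, this answer amounts to roughly the following sketch. The daemons are stopped before formatting here, since re-formatting while the NameNode is still running is likely to fail on its lock file, and the non-deprecated per-service scripts are used. Note that hadoop namenode -format itself prints a deprecation warning on current releases; hdfs namenode -format is the preferred form.

# Stop everything, re-format the NameNode, then start everything again.
# Formatting erases all HDFS metadata, so only do this on a cluster
# whose data you can afford to lose.
stop-dfs.sh
stop-yarn.sh
hdfs namenode -format
start-dfs.sh
start-yarn.sh
jps    # the DataNode should now appear and stay up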

First delete the contents of the hadoop.tmp.dir folder that you specified in core-site.xml. Then format the namenode using hdfs namenode -format. Your datanode should then be up and running properly, after which all copy operations will execute successfully.
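A minimal end-to-end sketch of that fix (the tmp path below is a placeholder; use whatever value hadoop.tmp.dir has in your core-site.xml, and remember that re-formatting erases everything previously stored in HDFS, including the destination folders from the question):

# Stop HDFS, wipe the directory hadoop.tmp.dir points to, re-format the
# NameNode, and bring HDFS back up.
stop-dfs.sh
rm -rf /usr/local/hadoop/tmp/*
hdfs namenode -format
start-dfs.sh
jps    # DataNode, NameNode and SecondaryNameNode should all be listed

# Because formatting wipes HDFS, re-create the destination directory
# before retrying the copyFromLocal command from the question.
hdfs dfs -mkdir -p /pay/txnlinking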
