
Failed to connect to server: localhost/127.0.0.1:9000: try once and fail. java.net.ConnectException: Connection refused

I'm trying to put a file into my local HDFS by running hadoop fs -put part-00000 /hbase/, and it gave me this:

17/05/30 16:11:52 WARN ipc.Client: Failed to connect to server: localhost/127.0.0.1:9000: try once and fail.
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:681)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:777)
    at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:409)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1542)
    at org.apache.hadoop.ipc.Client.call(Client.java:1373)
    at org.apache.hadoop.ipc.Client.call(Client.java:1337)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:787)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1700)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1436)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1433)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1433)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:64)
    at org.apache.hadoop.fs.Globber.doGlob(Globber.java:269)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1685)
    at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
    at org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:195)
    at org.apache.hadoop.fs.shell.CopyCommands$Put.processOptions(CopyCommands.java:256)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:164)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
put: Call From steves-macbook-pro.local/172.29.16.117 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
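
"Connection refused" on localhost:9000 means nothing is accepting connections on that port, i.e. no NameNode is listening there. A quick sanity check (assuming lsof is available, as it is on macOS):

lsof -nP -iTCP:9000 -sTCP:LISTEN

If that prints nothing, no process is bound to port 9000.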

Before that I ran $hadoop fs -mkdir /hbase, which completed successfully.

I checked my datanode logs; here's what they show:

2017-05-30 16:21:48,137 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: localhost/127.0.0.1:9000
2017-05-30 16:21:54,147 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-05-30 16:21:55,150 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-05-30 16:21:56,154 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-05-30 16:21:57,158 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-05-30 16:21:58,162 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-05-30 16:21:59,165 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-05-30 16:22:00,168 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-05-30 16:22:01,174 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-05-30 16:22:02,179 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-05-30 16:22:03,183 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-05-30 16:22:03,183 WARN org.apache.hadoop.ipc.Client: Failed to connect to server: localhost/127.0.0.1:9000: retries get failed due to exceeded maximum allowed retries number: 10
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:681)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:777)
        at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:409)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1542)
        at org.apache.hadoop.ipc.Client.call(Client.java:1373)
        at org.apache.hadoop.ipc.Client.call(Client.java:1337)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy15.versionRequest(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.versionRequest(DatanodeProtocolClientSideTranslatorPB.java:274)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.retrieveNamespaceInfo(BPServiceActor.java:215)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:261)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:746)
        at java.lang.Thread.run(Thread.java:745)

I found a couple of very similar questions on Stack Overflow; in sum, here's what I've tried:

/usr/local/Cellar/hadoop/2.8.0/sbin/stop-all.sh

/usr/local/Cellar/hadoop/2.8.0/bin/hdfs namenode -format

/usr/local/Cellar/hadoop/2.8.0/sbin/start-all.sh

/usr/local/Cellar/hadoop/2.8.0/sbin/start-dfs.sh
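
(Side note: on Hadoop 2.x, start-all.sh is deprecated and simply delegates to start-dfs.sh plus start-yarn.sh, so running start-dfs.sh again after start-all.sh is redundant. The explicit equivalent would be:)

/usr/local/Cellar/hadoop/2.8.0/sbin/start-dfs.sh

/usr/local/Cellar/hadoop/2.8.0/sbin/start-yarn.sh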

Then I ran $jps, and this is what I have:

13568 Main
23154 NodeManager
13477 HMaster
21927 DataNode
12696 Launcher
13674 GradleDaemon
22042 SecondaryNameNode
23052 ResourceManager
23502 Jps
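
Notably, that jps listing shows a DataNode, SecondaryNameNode, ResourceManager and NodeManager, but no NameNode process, which is consistent with the connection refusals on port 9000. When the NameNode fails to start, its log usually says why; under a Homebrew install the path should look something like the one below (an assumption; the actual file name includes your user and host names):

tail -n 50 /usr/local/Cellar/hadoop/2.8.0/libexec/logs/hadoop-*-namenode-*.log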

Also, I've checked my /usr/local/Cellar/hadoop/2.8.0/libexec/etc/hadoop/core-site.xml; it's pointing to localhost:9000:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
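
(As an aside, fs.default.name is the old Hadoop 1.x key for this setting; it still works on 2.8 but is deprecated in favor of fs.defaultFS, so an equivalent block would be:)

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>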

So, somehow my Hadoop service is NOT up? Any pointers on where I should go next, please?

Thanks a lot! Really appreciate it!

Edit:

I found something else really interesting/weird (I'm not sure why this is happening, or whether it's related):

  1. When I don't have a datanode running, I'm able to access this web UI: http://localhost:50070/ to see how my local Hadoop is doing.

  2. When I do

/usr/local/Cellar/hadoop/2.8.0/bin/hdfs namenode -format

/usr/local/Cellar/hadoop/2.8.0/sbin/stop-all.sh

/usr/local/Cellar/hadoop/2.8.0/sbin/start-all.sh

and then run jps, I see a running datanode, but I can no longer access the web UI at http://localhost:50070/.
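
A quick way to check whether the UI endpoint is actually down, rather than it being a browser-side issue (assuming curl is installed):

curl -sf http://localhost:50070/ > /dev/null && echo "UI up" || echo "UI down"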

It turns out I was missing some configuration in my hdfs-site.xml.

I added the following to it:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/Users/USERNAME/data/hadoop/hdfs/nn</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>file:/Users/USERNAME/data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>fs.checkpoint.edits.dir</name>
    <value>file:/Users/USERNAME/data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/Users/USERNAME/data/hadoop/hdfs/dn</value>
  </property>
</configuration>

Then I ran hadoop namenode -format -force, followed by stop-all.sh and start-all.sh,

and it works fine.
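
For what it's worth, hdfs namenode -format creates and initializes the dfs.namenode.name.dir directory, and the DataNode creates dfs.datanode.data.dir on startup, but the parent path must be writable by your user; if in doubt, pre-creating the directories is harmless (USERNAME is a placeholder, matching the config above):

mkdir -p /Users/USERNAME/data/hadoop/hdfs/nn /Users/USERNAME/data/hadoop/hdfs/snn /Users/USERNAME/data/hadoop/hdfs/dn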

The first time you run Hadoop, you must run the command hdfs namenode -format, otherwise the namenode won't work!

Why do we have to run this command every time we have to start the namenode? Doesn't this erase the data on HDFS?

hdfs namenode -format 
