
Hadoop: java.net.ConnectException: Connection refused

Hello, I have been trying to follow this tutorial for a very long time now: http://www.tutorialspoint.com/apache_flume/fetching_twitter_data.htm and I am absolutely stuck at Step 3: Create a Directory in HDFS. I have run start-dfs.sh and start-yarn.sh, and both seem to have worked correctly, as I am getting the same output as the tutorial, but when I try to run:

hdfs dfs -mkdir hdfs://localhost:9000/user/Hadoop/twitter_data 

I keep receiving the same error:

mkdir: Call From trz-VirtualBox/10.0.2.15 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

I cannot figure out why, as I have searched everywhere and tried a number of solutions but can't seem to make progress. I am going to list all of the files that I think could cause this, but I could be wrong. My core-site.xml is:

<configuration>
<property>  
<name>hadoop.tmp.dir</name>
<value>/Public/hadoop-2.7.1/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>

My mapred-site.xml is:

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>hdfs://localhost:9001</value>
</property>
</configuration>

My hdfs-site.xml is:

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permission</name>
<value>false</value>
</property>
</configuration>

I am running Ubuntu 14.04.4 LTS in VirtualBox. My ~/.bashrc looks like this:

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop/bin 
export HADOOP_HOME=/usr/local/hadoop/bin
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
#flume
export FLUME_HOME=/usr/local/Flume
export PATH=$PATH:/FLUME_HOME/apache-flume-1.6.0-bin/bin
export CLASSPATH=$CLASSPATH:/FLUME_HOME/apache-flume-1.6.0-bin/lib/*

And finally, my /etc/hosts file is set up like so:

127.0.0.1  localhost
10.0.2.15  trz-VirtualBox
10.0.2.15  hadoopmaster


# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

The added hadoopmaster entry I am currently not using; that was one of my attempts to fix this by trying not to use localhost (it didn't work). trz-VirtualBox was originally 127.0.1.1, but I read that you should use your real IP address? Neither worked, so I am not sure. I posted all of these files because I do not know where the error is. I do not think it is a path issue (I hit a lot of those before I got to this step and was able to resolve them myself), so I am out of ideas. I've been at this for a number of hours now, so any help is appreciated. Thank you.

You have to set permissions on the Hadoop directory:

sudo chown -R user:pass /hadoop_path/hadoop

Then start the cluster and run the jps command to check that the DataNode and NameNode processes are running.
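A minimal sketch of that sequence, assuming the Hadoop sbin scripts are on your PATH (they are in the .bashrc posted above):

start-dfs.sh      # start the HDFS daemons (NameNode, SecondaryNameNode, DataNode)
start-yarn.sh     # start the YARN daemons (ResourceManager, NodeManager)
jps               # NameNode and DataNode should both appear in the output

If NameNode is missing from the jps output, the connection to localhost:9000 will keep being refused, so check the NameNode log before retrying the mkdir.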

I was getting a similar error. Upon checking, I found that my namenode service was in a stopped state. sudo status hadoop-hdfs-namenode - check the status of the namenode.

If it is not in a started/running state: sudo start hadoop-hdfs-namenode - start the namenode service.

Do keep in mind that it takes time for the name node service to become fully functional after a restart. It reads all of the HDFS edits into memory. You can check the progress of this in /var/log/hadoop-hdfs/ using the command tail -f /var/log/hadoop-hdfs/{Latest log file}
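Putting those steps together, a rough sequence for a packaged, Upstart-managed install (note: these service commands assume a package-based install; a plain tarball install like the one in the question is started with start-dfs.sh instead):

sudo status hadoop-hdfs-namenode                 # check whether the namenode service is running
sudo start hadoop-hdfs-namenode                  # start it if it reports stopped
tail -f /var/log/hadoop-hdfs/{Latest log file}   # watch the edits being replayed during startup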
