
Automatic Failover not working in Hadoop

I'm trying to build a 3-node cluster (2 NameNodes: nn1, nn2; 1 DataNode: dn1). Using the NameNode web UI, I can see that nn1 is active and nn2 is standby. However, when I kill the active nn1, the standby nn2 does not become active. Please help me figure out what I am doing wrong or what needs to be modified.
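
For reference, the active/standby state shown in the web UI can also be checked from the command line with hdfs haadmin, using the NameNode IDs (nn1, nn2) defined in hdfs-site.xml below:

    hdfs haadmin -getServiceState nn1   # prints "active" or "standby"
    hdfs haadmin -getServiceState nn2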

nn1 /etc/hosts

127.0.0.1 localhost
192.168.10.153 nn1
192.168.10.154 dn1
192.168.10.155 nn2

nn2 /etc/hosts

127.0.0.1       localhost nn2
127.0.1.1       ubuntu

    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

core-site.xml (nn1, nn2)

<configuration>
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.10.153:8020</value>
 </property>
 <property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/usr/local/hadoop/hdfs/data/jn</value>
 </property>
 <property>
  <name>ha.zookeeper.quorum</name>
  <value>192.168.10.153:2181,192.168.10.155:2181,192.168.10.154:2181</value>
 </property>
</configuration>
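
Note that with dfs.ha.automatic-failover.enabled set to true (below) and this ZooKeeper quorum, the HA state znode has to be initialized once and a ZKFC daemon has to run on each NameNode host; a minimal sketch, following the steps in the QJM HA guide:

    hdfs zkfc -formatZK            # run once from one NameNode, with ZooKeeper up
    hadoop-daemon.sh start zkfc    # then run on each NameNode host (nn1 and nn2)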

hdfs-site.xml (nn1, nn2, dn1)

<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
 </property>
 <property>
  <name>dfs.permissions</name>
  <value>false</value>
 </property>
 <property>
  <name>dfs.nameservices</name>
  <value>ha-cluster</value>
 </property>
 <property>
  <name>dfs.ha.namenodes.ha-cluster</name>
  <value>nn1,nn2</value>
 </property>
 <property>
  <name>dfs.namenode.rpc-address.ha-cluster.nn1</name>
  <value>192.168.10.153:9000</value>
 </property>
 <property>
  <name>dfs.namenode.rpc-address.ha-cluster.nn2</name>
  <value>192.168.10.155:9000</value>
 </property>
 <property>
  <name>dfs.namenode.http-address.ha-cluster.nn1</name>
  <value>192.168.10.153:50070</value>
 </property>
 <property>
  <name>dfs.namenode.http-address.ha-cluster.nn2</name>
  <value>192.168.10.155:50070</value>
 </property>
 <property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://192.168.10.153:8485;192.168.10.155:8485;192.168.10.154:8485/ha-cluster</value>
 </property>
 <property>
  <name>dfs.client.failover.proxy.provider.ha-cluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
 </property>
 <property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
 </property>
 <property>
  <name>ha.zookeeper.quorum</name>
  <value>192.168.10.153:2181,192.168.10.155:2181,192.168.10.154:2181</value>
 </property>
 <property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
 </property>
 <property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/ci/.ssh/id_rsa</value>
 </property>
</configuration>
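
Since dfs.ha.fencing.methods is sshfence with that key, each NameNode's failover controller must be able to SSH to the other NameNode non-interactively with it, or fencing fails and the standby is never promoted. A quick manual check, assuming the ZKFC runs as user ci, as the key path suggests:

    ssh -i /home/ci/.ssh/id_rsa ci@nn2 true && echo "fencing SSH from nn1 OK"   # run on nn1
    ssh -i /home/ci/.ssh/id_rsa ci@nn1 true && echo "fencing SSH from nn2 OK"   # run on nn2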

LOGS (zkfc nn1, nn2; namenode nn1, nn2) on stopping nn1 (the active node): https://pastebin.com/bWvfnanQ

You're specifying <IP>:<port> for fs.defaultFS in core-site.xml for an HA cluster. So when your active NameNode shuts down, clients don't know where to redirect.

Choose a logical name for the nameservice, for example “mycluster”.

Then change hdfs-site.xml as well: dfs.namenode.http-address.[nameservice ID].[name node ID] is the fully-qualified HTTP address for each NameNode to listen on (and likewise dfs.namenode.rpc-address.[nameservice ID].[name node ID] for the RPC address).

In your case, you have to give:

core-site.xml

<property>
 <name>fs.defaultFS</name>
 <value>hdfs://mycluster</value>
</property>

hdfs-site.xml

<property>
 <name>dfs.namenode.rpc-address.mycluster.nn1</name>
 <value>192.168.10.153:9000</value>
</property>
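
Every per-nameservice property has to be renamed to the same ID, not just the rpc-address; a sketch of the other affected entries, keeping the values from the question:

<property>
 <name>dfs.nameservices</name>
 <value>mycluster</value>
</property>
<property>
 <name>dfs.ha.namenodes.mycluster</name>
 <value>nn1,nn2</value>
</property>
<property>
 <name>dfs.client.failover.proxy.provider.mycluster</name>
 <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

The nameservice suffix in dfs.namenode.shared.edits.dir (the part after the final / in the qjournal URI) has to match as well.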

Read the manual carefully: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

Hope this will help you.
