
HBase with YARN throws ERROR

I'm using Hadoop 2.5.1 with HBase 0.98.11 on Ubuntu 14.04

I could run it in pseudo-distributed mode. Now I want to run it in fully-distributed mode. I followed the instructions from several sites and ended up with a RUNTIME error, "Error: org/apache/hadoop/hbase/HBaseConfiguration" (there is no error when I compile the code).

After experimenting, I found that if I comment out mapreduce.framework.name in mapred-site.xml, and the properties in yarn-site.xml as well, I am able to run Hadoop successfully.

But I think it is then running in single-node mode (I'm not sure; I'm just guessing from the running time, which is comparable to what I got in pseudo-distributed mode, and from the fact that jps on the slave node shows no MapReduce process while the job runs on the master).

Here is some of my configuration:

hdfs-site

<property>
<name>dfs.replication</name>
<value>2</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<!-- <property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>-->
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>

<property>
<name>dfs.datanode.use.datanode.hostname</name>
<value>false</value>
</property>

<property>
<name>dfs.permissions</name>
<value>false</value>
</property>

mapred-site

 <property>
   <name>mapred.job.tracker</name>
   <value>localhost:54311</value>
   <description>The host and port that the MapReduce job tracker runs
   at.  If "local", then jobs are run in-process as a single map
   and reduce task.
   </description>
 </property>

 <!--<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
 </property>-->

yarn-site

 <!-- Site specific YARN configuration properties -->

 <!--<property>
     <name>yarn.nodemanager.aux-services</name>
     <value>mapreduce_shuffle</value>
 </property>
 <property>
     <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
 </property>
 <property>
     <name>yarn.resourcemanager.address</name>
     <value>10.1.1.177:8032</value>
 </property>
 <property>
     <name>yarn.resourcemanager.scheduler.address</name>
     <value>10.1.1.177:8030</value>
 </property>
 <property>
     <name>yarn.resourcemanager.resource-tracker.address</name>
     <value>10.1.1.177:8031</value>
 </property>-->

Thank you so much for any help.

UPDATE: I tried making some changes to yarn-site.xml by adding yarn.application.classpath, like this:

https://dl-web.dropbox.com/get/Public/yarn.png?_subject_uid=51053996&w=AABeDJfRp_D31RiVHqBWn0r9naQR_lFVJXIlwvCwjdhCAQ
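(In case the screenshot doesn't load: the property in question is yarn.application.classpath. The value below is the stock Hadoop 2.x default, quoted here only as an approximation of what the screenshot showed:)

<property>
  <name>yarn.application.classpath</name>
  <value>
    $HADOOP_CONF_DIR,
    $HADOOP_COMMON_HOME/share/hadoop/common/*,
    $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
  </value>
</property>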

The error changed to an exit-code error:

https://dl-web.dropbox.com/get/Public/exitcode.jpg?_subject_uid=51053996&w=AAAQ-bYoRSrQV3yFq36vEDPnAB9aIHnyOQfnvt2cUHn5IQ

UPDATE 2: In the syslog of the application logs it says:

2015-04-24 20:34:59,164 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1429792550440_0035_000002
2015-04-24 20:34:59,589 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-04-24 20:34:59,610 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-04-24 20:34:59,616 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.NoSuchMethodError: org.apache.hadoop.http.HttpConfig.setPolicy(Lorg/apache/hadoop/http/HttpConfig$Policy;)V
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1364)
2015-04-24 20:34:59,621 INFO [Thread-1] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster received a signal. Signaling RMCommunicator and JobHistoryEventHandler.

Any suggestions, please?

I guess that you didn't set up your Hadoop cluster correctly. Please follow these steps:

Hadoop Configuration:

Step 1: Edit hadoop-env.sh as follows:

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
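Note that on Ubuntu 14.04 the java-6-sun path above won't exist; with the distribution's OpenJDK 7 package, for example, the line would typically be:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64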

Step 2: Create a directory and set the required ownership and permissions:

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp

Step 3: Edit core-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
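A side note: in Hadoop 2.x, fs.default.name is deprecated in favour of fs.defaultFS. Both still work, but the modern equivalent is:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:54310</value>
</property>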

Step 4: Edit mapred-site.xml:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

Step 5: Edit hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

<property>
  <name>dfs.name.dir</name>
  <value>file:///home/hduser/hadoopdata/hdfs/namenode</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>file:///home/hduser/hadoop/hadoopdata/hdfs/datanode</value>
</property>
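Similarly, dfs.name.dir and dfs.data.dir are the old Hadoop 1.x names; the Hadoop 2.x equivalents (which the question's own hdfs-site.xml already uses) are:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/hduser/hadoopdata/hdfs/namenode</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/hduser/hadoop/hadoopdata/hdfs/datanode</value>
</property>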

Step 6: Edit yarn-site.xml:

<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
</property>
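Since you ultimately want a multi-node cluster, the NodeManagers also need to know where the ResourceManager runs. A minimal sketch, assuming the master is 10.1.1.177 as in your question:

<property>
   <name>yarn.resourcemanager.hostname</name>
   <value>10.1.1.177</value>
</property>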

Finally, format your HDFS (you only need to do this the first time you set up a Hadoop cluster):

$ /usr/local/hadoop/bin/hadoop namenode -format
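After formatting, start the daemons and check them with jps (a sketch, assuming Hadoop is installed under /usr/local/hadoop):

$ /usr/local/hadoop/sbin/start-dfs.sh
$ /usr/local/hadoop/sbin/start-yarn.sh
$ jps
# expect NameNode, DataNode, ResourceManager and NodeManager
# (all on one box in a single-node setup)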

HBase Configuration:

Edit your hbase-site.xml:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:54310/hbase</value>
</property>

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>

<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/usr/local/hbase/zookeeper</value>
</property>
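With that in place you can start HBase and sanity-check it from the shell (assuming HBase is installed under /usr/local/hbase):

$ /usr/local/hbase/bin/start-hbase.sh
$ /usr/local/hbase/bin/hbase shell
hbase(main):001:0> status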

Hope this helps you

After sticking with the problem for more than 3 days (maybe it came from my misunderstanding of the concept), I was able to fix it by adding HADOOP_CLASSPATH (just as I did when setting up pseudo-distributed mode in hadoop-env.sh) to yarn-env.sh.
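For anyone trying the same fix, the addition to yarn-env.sh looks roughly like this (a sketch; /usr/local/hbase is an assumed install path, and "hbase classpath" prints every jar HBase needs, including the one containing HBaseConfiguration):

# assumption: HBase lives at /usr/local/hbase; adjust to your install
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$(/usr/local/hbase/bin/hbase classpath)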

I don't know the details, but, yeah, I hope this helps someone in the future.

Cheers.

I was using Spark on YARN and was getting the same error. Actually, the Spark jar had an internal dependency on the hadoop-client and hadoop-mapreduce-client-* jars, pointing to the older 2.2.0 versions. So I included these entries in my POM with the Hadoop version I was running and did a clean build.
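For reference, the POM overrides look roughly like this (a sketch, assuming the cluster runs Hadoop 2.5.1; adjust the version, and repeat for whichever hadoop-mapreduce-client-* artifacts your build pulls in):

<!-- assumption: cluster runs Hadoop 2.5.1; match this to your cluster -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.5.1</version>
</dependency>

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>2.5.1</version>
</dependency>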

This resolved the issue for me. Hope this helps someone.
