Spark installation on Hadoop YARN

Please, someone help me. I am trying to install Spark on Hadoop YARN and I am getting this error:

org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:113)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:59)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:379)
java.lang.NullPointerException
    at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:141)
    at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

The Hadoop daemons are:

4064 SecondaryNameNode
3478 NameNode
4224 ResourceManager
4480 NodeManager
3727 DataNode
6279 Jps

and my bash profile is:

export JAVA_HOME=/home/user/hadoop-two/jdk1.7.0_71
export HADOOP_INSTALL=/home/user/hadoop-two/hadoop-2.6.0
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
export YARN_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
export SPARK_HOME=/home/user/hadoop-two/spark-1.4.0

Install Spark and set up the environment variables as shown above. Then configure JAVA_HOME and HADOOP_CONF_DIR in the conf/spark-env.sh file (adjust the paths to your own installation; the example below uses different Hadoop and JDK versions than the ones in the question):

export HADOOP_CONF_DIR=/home/user/hadoop-2.7.1/etc/hadoop
export JAVA_HOME=/home/user/jdk1.8.0_60
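
If conf/spark-env.sh does not exist yet, it can be created from the template that ships with the Spark distribution (a standard Spark setup step, not mentioned in the original answer; the path is the SPARK_HOME from the question):

cd /home/user/hadoop-two/spark-1.4.0
cp conf/spark-env.sh.template conf/spark-env.sh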

Then define the slaves (put the DNS names of the slave nodes, one per line) in the Spark conf directory, as sketched below:

conf/slaves
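
For example, a minimal conf/slaves file could look like this (the hostnames below are placeholders; replace them with the DNS names of your own worker nodes):

# conf/slaves: one worker hostname per line
slave1.example.com
slave2.example.com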

Then start Spark on YARN with the command:

bin/spark-shell --master yarn-client
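
If the shell comes up, the application should also be visible on the YARN side. As an extra sanity check (a diagnostic suggestion, not part of the original answer), list the running YARN applications:

yarn application -list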

That's it, you're done!
