
How to submit a Spark Scala job over Yarn, Hadoop

I am new to Spark and I'm trying to run a Scala job on a pseudo-distributed Hadoop system.

Hadoop 2.6 + Yarn + Spark 1.6.1 + Scala 2.10.6 + JVM 8, everything installed from scratch.

My Scala app is the simple WordCount example, and I don't have a clue what the error is.

/usr/local/sparkapps/WordCount/src/main/scala/com/mydomain/spark/wordcount/WordCount.scala

package com.mydomain.spark.wordcount

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._

object ScalaWordCount {
    def main(args: Array[String]) {
        val logFile = "/home/hduser/inputfile.txt"
        val sparkConf = new SparkConf().setAppName("Spark Word Count")
        val sc = new SparkContext(sparkConf)
        // Read the input, split each line into words, and count each word
        val file = sc.textFile(logFile)
        val counts = file.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
        counts.saveAsTextFile("/home/hduser/output")
        sc.stop()
    }
}
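As an aside, the word-count transformation chain itself can be sanity-checked on plain Scala collections, without a cluster or Spark at all (a hypothetical snippet, not part of the job above):

```scala
// The same word-count logic on plain Scala collections, handy for
// verifying the transformations before packaging and submitting the job.
val lines  = Seq("to be or not to be", "to be")
val counts = lines
  .flatMap(_.split(" "))
  .groupBy(identity)
  .map { case (word, occurrences) => (word, occurrences.size) }
// counts("to") == 3, counts("be") == 3, counts("or") == 1, counts("not") == 1
```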

sbt file:

/usr/local/sparkapps/WordCount/WordCount.sbt


name := "ScalaWordCount"

version := "1.0"

scalaVersion := "2.10.6"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1"

Compile:

$ cd /usr/local/sparkapps/WordCount/
$ sbt package

Submit:

spark-submit --class com.mydomain.spark.wordcount.ScalaWordCount --master yarn-cluster  /usr/local/sparkapps/WordCount/target/scala-2.10/scalawordcount_2.10-1.0.jar 

Output:

Exception in thread "main" org.apache.spark.SparkException: Application application_1460107053907_0003 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Spark log file: http://pastebin.com/FnxFXimM

From the logs:

16/04/08 12:24:41 ERROR ApplicationMaster: User class threw exception: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt

If you want to read a local file, use:

val logFile = "file:///home/hduser/inputfile.txt"
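This also explains the error above: a path without a scheme is resolved against Hadoop's `fs.defaultFS` (here `hdfs://localhost:9000`), while an explicit `file://` scheme pins it to the local file system. A small illustration of the difference using `java.net.URI` (not Spark code, just to show the scheme resolution):

```scala
import java.net.URI

// A scheme-less path carries no file-system scheme, so Hadoop resolves it
// against fs.defaultFS (hdfs://localhost:9000 in this setup). An explicit
// file:// prefix selects the local file system instead.
val bare  = new URI("/home/hduser/inputfile.txt")
val local = new URI("file:///home/hduser/inputfile.txt")

println(bare.getScheme)   // null -> falls back to the default file system
println(local.getScheme)  // "file" -> local file system
```

Note that in `yarn-cluster` mode a `file://` path is read on the executors' local file systems; that is fine on a single-node pseudo-distributed setup, but on a real cluster the file would have to exist on every node, so the usual alternative is to upload it to HDFS first (e.g. with `hdfs dfs -put`) and keep the `hdfs://` path.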
