Issue Running Spark Job on YARN Cluster
I want to run my Spark job in Hadoop YARN cluster mode, and I am using the following command:
spark-submit --master yarn-cluster \
  --driver-memory 1g \
  --executor-memory 1g \
  --executor-cores 1 \
  --class com.dc.analysis.jobs.AggregationJob \
  sparkanalitic.jar param1 param2 param3
I am getting the error below. Could you suggest what is going wrong, and whether the command is correct? I am using CDH 5.3.1.
Diagnostics: Application application_1424284032717_0066 failed 2 times due
to AM Container for appattempt_1424284032717_0066_000002 exited with
exitCode: 15 due to: Exception from container-launch.
Container id: container_1424284032717_0066_02_000001
Exit code: 15
Stack trace: ExitCodeException exitCode=15:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:197)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 15
.Failing this attempt.. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: root.hdfs
start time: 1424699723648
final status: FAILED
tracking URL: http://myhostname:8088/cluster/app/application_1424284032717_0066
user: hdfs
2015-02-23 19:26:04 DEBUG Client - stopping client from cache: org.apache.hadoop.ipc.Client@4085f1ac
2015-02-23 19:26:04 DEBUG Utils - Shutdown hook called
2015-02-23 19:26:05 DEBUG Utils - Shutdown hook called
Any help would be greatly appreciated.
It can mean a lot of things. For us, we got a similar error message because of an unsupported Java class version, and we fixed the problem by deleting the referenced Java class from our project.
Use this command to see the detailed error message:
yarn logs -applicationId application_1424284032717_0066
You should remove `.setMaster("local")` from your code.
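As a minimal sketch of what that looks like (assuming a Scala driver; the object name and app name here are illustrative, not taken from the original job), the `SparkConf` should be built without a hard-coded master so that the `--master yarn-cluster` flag passed to `spark-submit` takes effect:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object AggregationJob {
  def main(args: Array[String]): Unit = {
    // No .setMaster("local") here: a master set in code overrides
    // the --master flag given to spark-submit, which can make a
    // cluster-mode submission fail or silently run locally.
    val conf = new SparkConf().setAppName("AggregationJob")
    val sc = new SparkContext(conf)
    try {
      // ... job logic using args(0), args(1), args(2) ...
    } finally {
      sc.stop()
    }
  }
}
```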
The command looks correct.
What I've come across is that "exit code 15" normally indicates a TableNotFound exception, which usually means there's an error in the code you're submitting.
You can check this by visiting the tracking URL.
For me, the exit-code problem was solved by placing hive-site.xml in the spark/conf directory.
Remove the line `"spark.master":"local[*]"` in the Spark configuration file if you are running Spark jobs on a cluster.
If you are running on a local PC, keep it.
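As a sketch, assuming the standard spark-defaults.conf key/value format (the path is the default Spark layout, not confirmed by the original answer), the offending setting would look like this:

```
# conf/spark-defaults.conf
# Comment this out (or delete it) when submitting with --master yarn-cluster;
# keep it only for local runs:
# spark.master    local[*]
```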
Mani