
(No active SparkContext.) Error submitting Job to local Spark master

I have a Spark master and a Spark slave installed and running on my local machine. I want to submit my code to the running Spark master and pass the configuration on the command line, as described in the docs https://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties:

JavaSparkContext sc = new JavaSparkContext(new SparkConf());
JavaStreamingContext jssc = new JavaStreamingContext(sc, BATCH_SIZE);
...
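For reference, here is a minimal, self-contained sketch of what such a driver class might look like. The package and class name logAnalysis.myApp are taken from the spark-submit command below; the 10-second batch interval and the streaming start/await boilerplate are assumptions, since the question only shows the first two lines. With an empty SparkConf, the master URL, app name and any other properties are expected to be supplied by spark-submit at launch time, which is the pattern the linked docs describe:

package logAnalysis;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class myApp {

    // Hypothetical batch interval; the real BATCH_SIZE is not shown in the question.
    private static final Duration BATCH_SIZE = new Duration(10000);

    public static void main(String[] args) throws InterruptedException {
        // Empty SparkConf: master, app name and any --conf properties
        // are expected to come from the spark-submit command line.
        JavaSparkContext sc = new JavaSparkContext(new SparkConf());
        JavaStreamingContext jssc = new JavaStreamingContext(sc, BATCH_SIZE);

        // ... input streams and transformations would be defined here ...

        jssc.start();
        jssc.awaitTermination();
    }
}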

After building the .jar, I submit it via

bin/spark-submit --class logAnalysis.myApp --name "myApp" --master "spark://some.server:7077" /jars/myApp-0.3.jar

Edit: I have also tried setting the master without the quotes before.

After this, I get the following error:

17/03/22 12:23:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/22 12:23:04 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: Master removed our application: FAILED
17/03/22 12:23:04 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:

org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
logAnalysis.myApp.main(myApp.java:48)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The currently active SparkContext was created at:

(No active SparkContext.)

        at org.apache.spark.SparkContext.assertNotStopped(SparkContext.scala:101)
        at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1658)
        at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2162)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:542)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at logAnalysis.myApp.main(myApp.java:48)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Exception in thread "main" java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.

My Spark master shows the job as FAILED in its job list, so I am successfully connecting to the master.

When I submit the job via

bin/spark-submit --class logAnalysis.myApp --name "myApp" --master local[8] /jars/myApp-0.3.jar

it works fine.

As stated in this thread, my Scala version is not the issue since I am using Spark 2.0.2: Why does connecting to Spark Standalone from a Java application throw "Cannot call methods on a stopped SparkContext"?

Everything is set to the defaults. Any suggestions why this happens?

I have now added another node to the cluster. It now runs successfully with a 1x master / 2x worker setup.

I did not change anything in the code, apart from adding the Elasticsearch-Hadoop connector to the configuration:

JavaSparkContext sc = new JavaSparkContext(new SparkConf().set("es.nodes", "node1").set("es.port", "9200"));

I don't know what the problem was; maybe it was caused by the configuration. But as stated before, the job ran successfully when the master was set to local[*].
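One way to check whether the configuration that actually reaches the driver is at fault (a debugging sketch, not part of the original code; the class name ConfDebug is made up for illustration) is to dump the effective SparkConf right after the context is created:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ConfDebug {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().set("es.nodes", "node1").set("es.port", "9200"));

        // toDebugString() lists the merged settings: programmatic values,
        // spark-submit flags and spark-defaults.conf entries.
        System.out.println(sc.getConf().toDebugString());

        sc.stop();
    }
}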
