(No active SparkContext.) Error submitting Job to local Spark master

I have a Spark Master and a Spark Slave up and running on my local machine. I want to submit my code to my running Spark Master via command-line configuration, as described in the docs: https://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties

JavaSparkContext sc = new JavaSparkContext(new SparkConf());
JavaStreamingContext jssc = new JavaStreamingContext(sc, BATCH_SIZE);
...
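
For reference, a minimal, self-contained sketch of this setup is shown below; the class name, the 5-second batch interval, and the streaming boilerplate are illustrative placeholders for the code not shown above (BATCH_SIZE is not defined here).

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class MyAppSketch {
    public static void main(String[] args) throws InterruptedException {
        // Empty SparkConf: master URL and app name are expected to be
        // supplied by spark-submit (--master, --name) at launch time.
        JavaSparkContext sc = new JavaSparkContext(new SparkConf());

        // Placeholder batch interval; the real BATCH_SIZE is not shown above.
        JavaStreamingContext jssc = new JavaStreamingContext(sc, Durations.seconds(5));

        // ... define input DStreams and transformations here ...

        jssc.start();
        jssc.awaitTermination();
    }
}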

After building my .jar, I submit via

bin/spark-submit --class logAnalysis.myApp --name "myApp" --master "spark://some.server:7077" /jars/myApp-0.3.jar

Edit: I previously tried setting the master without quotes as well.

After this, I get the following error:

17/03/22 12:23:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/22 12:23:04 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: Master removed our application: FAILED
17/03/22 12:23:04 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:

org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
logAnalysis.myApp.main(myApp.java:48)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The currently active SparkContext was created at:

(No active SparkContext.)

        at org.apache.spark.SparkContext.assertNotStopped(SparkContext.scala:101)
        at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1658)
        at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2162)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:542)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at logAnalysis.myApp.main(myApp.java:48)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Exception in thread "main" java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.

My Spark Master shows the job as failed in the job list, so I am successfully connecting to my Master.

When I submit my job via

bin/spark-submit --class logAnalysis.myApp --name "myApp" --master local[8] /jars/myApp-0.3.jar

it works just fine.

I am using Spark 2.0.2. My Scala versions are not the problem, as stated in this thread: Why is "Cannot call methods on a stopped SparkContext" thrown when connecting to Spark Standalone from Java application?

Everything is pretty much set up with defaults. Any suggestions as to why this is happening?

I have now added another node to my cluster. It is now running successfully with a 1x Master / 2x Worker setup.

I didn't change anything in the code besides adding the ElasticSearch-Hadoop connector to the configuration:

JavaSparkContext sc = new JavaSparkContext(new SparkConf().set("es.nodes", "node1").set("es.port", "9200"));
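
For context, with es.nodes / es.port set this way, the elasticsearch-hadoop connector can write data out roughly along these lines; this is only a sketch, assuming the elasticsearch-spark dependency is packaged into the job jar, and the index resource "logs/entry" and the sample document are hypothetical.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.elasticsearch.spark.rdd.api.java.JavaEsSpark;

public class EsWriteSketch {
    public static void main(String[] args) {
        // es.nodes / es.port tell the connector where Elasticsearch is reachable.
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().set("es.nodes", "node1").set("es.port", "9200"));

        // Hypothetical sample document; any Map or JavaBean works.
        Map<String, Object> doc = new HashMap<>();
        doc.put("message", "hello");
        doc.put("level", "INFO");
        JavaRDD<Map<String, Object>> rdd = sc.parallelize(Collections.singletonList(doc));

        // "logs/entry" is a hypothetical index/type resource.
        JavaEsSpark.saveToEs(rdd, "logs/entry");

        sc.stop();
    }
}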

I don't know what the problem was; maybe it was caused by the config. But as said before, the job ran successfully when setting the master to local[*].
