
spark submit java.lang.NullPointerException error

I am trying to submit my spark-mongo code jar through Spark on Windows. I am using Spark in standalone mode, with the master and two workers configured on the same machine, and I want to execute my jar with that one master and both workers. I am running the following command:

spark-submit --master spark://localhost:7077 --deploy-mode cluster --executor-memory 5G --class spark.mongohadoop.testing3 G:\sparkmon1.jar

I am getting the following error:

Running Spark using the REST application submission protocol.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/02/28 17:09:13 INFO RestSubmissionClient: Submitting a request to launch an application in spark://192.168.242.1:7077.
17/02/28 17:09:24 WARN RestSubmissionClient: Unable to connect to server spark://192.168.242.1:7077.
Warning: Master endpoint spark://192.168.242.1:7077 was not a REST server. Falling back to legacy submission gateway instead.
17/02/28 17:09:25 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/02/28 17:09:32 ERROR ClientEndpoint: Exception from cluster was: java.lang.NullPointerException
java.lang.NullPointerException
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:873)
        at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:853)
        at org.apache.spark.util.Utils$.fetchFile(Utils.scala:474)
        at org.apache.spark.deploy.worker.DriverRunner.org$apache$spark$deploy$worker$DriverRunner$$downloadUserJar(DriverRunner.scala:154)
        at org.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:83

I have already set the winutils path in the environment variables. Why am I getting this error, and what is the solution?
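For reference, the stack trace goes through org.apache.hadoop.fs.FileUtil.chmod, which on Windows depends on winutils.exe being resolvable via HADOOP_HOME. A minimal setup sketch, assuming winutils.exe has been unpacked under C:\hadoop\bin (the path is an assumption for illustration; adjust to your install):

```
:: Set in the same console (or as system environment variables) before
:: running spark-submit. The C:\hadoop location is an assumed example.
set HADOOP_HOME=C:\hadoop
set PATH=%PATH%;%HADOOP_HOME%\bin

:: winutils.exe must actually exist at %HADOOP_HOME%\bin\winutils.exe;
:: if Hadoop cannot resolve it, the command it builds contains a null
:: element and ProcessBuilder.start() throws the NullPointerException
:: seen in the stack trace above.
```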

I encountered the same error on Linux, but in my case it only appeared when the driver was being launched from one particular machine in my cluster; if the request to launch the driver went to any other machine in the cluster, it worked fine. So in my case it seemed to be an environment issue. I then checked the code in the org.apache.hadoop.util.Shell$ShellCommandExecutor class and found that before running a command it first tries to run "bash" on that machine. I observed that my bash was responding slowly, made some changes in .bashrc, and restarted my cluster. Now it works fine.
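Since Hadoop's Shell$ShellCommandExecutor spawns commands through bash on Unix-like systems, a slow or broken bash startup on one node can surface exactly this way. A quick diagnostic sketch for a suspect worker node (the commands are standard, but the actual root cause inside .bashrc will vary):

```shell
# Time a bare bash startup; it should complete in well under a second.
time bash -c 'exit 0'

# Syntax-check the shell startup file (skip if it does not exist).
if [ -f ~/.bashrc ]; then
    bash -n ~/.bashrc && echo ".bashrc parses cleanly"
fi
```

If the timing is unexpectedly long, compare .bashrc on the slow node against a healthy one for expensive commands that run on every shell startup.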
