
Why does spark-shell fail with "SymbolTable.exitingPhase…java.lang.NullPointerException"?

I have just installed Apache Spark 2.2.0-4 on my Arch Linux box, along with Scala 2.12.4 and Apache Hadoop 3.0, and when I run spark-shell I hit the following exception:

Exception in thread "main" java.lang.NullPointerException
at scala.reflect.internal.SymbolTable.exitingPhase(SymbolTable.scala:256)
at scala.tools.nsc.interpreter.IMain$Request.x$20$lzycompute(IMain.scala:896)
at scala.tools.nsc.interpreter.IMain$Request.x$20(IMain.scala:895)
at scala.tools.nsc.interpreter.IMain$Request.headerPreamble$lzycompute(IMain.scala:895)
at scala.tools.nsc.interpreter.IMain$Request.headerPreamble(IMain.scala:895)
at scala.tools.nsc.interpreter.IMain$Request$Wrapper.preamble(IMain.scala:918)
at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1337)
at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1336)
at scala.tools.nsc.util.package$.stringFromWriter(package.scala:64)
at scala.tools.nsc.interpreter.IMain$CodeAssembler$class.apply(IMain.scala:1336)
at scala.tools.nsc.interpreter.IMain$Request$Wrapper.apply(IMain.scala:908)
at scala.tools.nsc.interpreter.IMain$Request.compile$lzycompute(IMain.scala:1002)
at scala.tools.nsc.interpreter.IMain$Request.compile(IMain.scala:997)
at scala.tools.nsc.interpreter.IMain.compile(IMain.scala:579)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:567)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:38)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:37)
at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:98)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
at org.apache.spark.repl.Main$.doMain(Main.scala:70)
at org.apache.spark.repl.Main$.main(Main.scala:53)
at org.apache.spark.repl.Main.main(Main.scala)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

After checking Spark Shell "Failed to initialize compiler" error on a Mac, I tried using JDK 8, but that solution did not work for me.

Could you suggest anything else I could try?

Edit 2017-12-30:

This is my console output:

[yago@CRISTINA-PC ~]$ java -version
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)

[yago@CRISTINA-PC ~]$ spark-shell 
/usr/bin/hadoop
WARNING: HADOOP_SLAVES has been replaced by HADOOP_WORKERS. Using value of HADOOP_SLAVES.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).

Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.

Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.
Exception in thread "main" java.lang.NullPointerException
at scala.reflect.internal.SymbolTable.exitingPhase(SymbolTable.scala:256)
at scala.tools.nsc.interpreter.IMain$Request.x$20$lzycompute(IMain.scala:896)
at scala.tools.nsc.interpreter.IMain$Request.x$20(IMain.scala:895)
at scala.tools.nsc.interpreter.IMain$Request.headerPreamble$lzycompute(IMain.scala:895)
at scala.tools.nsc.interpreter.IMain$Request.headerPreamble(IMain.scala:895)
at scala.tools.nsc.interpreter.IMain$Request$Wrapper.preamble(IMain.scala:918)
at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1337)
at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1336)
at scala.tools.nsc.util.package$.stringFromWriter(package.scala:64)
at scala.tools.nsc.interpreter.IMain$CodeAssembler$class.apply(IMain.scala:1336)
at scala.tools.nsc.interpreter.IMain$Request$Wrapper.apply(IMain.scala:908)
at scala.tools.nsc.interpreter.IMain$Request.compile$lzycompute(IMain.scala:1002)
at scala.tools.nsc.interpreter.IMain$Request.compile(IMain.scala:997)
at scala.tools.nsc.interpreter.IMain.compile(IMain.scala:579)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:567)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:38)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:37)
at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:98)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
at org.apache.spark.repl.Main$.doMain(Main.scala:70)
at org.apache.spark.repl.Main$.main(Main.scala:53)
at org.apache.spark.repl.Main.main(Main.scala)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Edit 2017-12-31:

[yago@CRISTINA-PC ~]$ export SPARK_PRINT_LAUNCH_COMMAND=1
[yago@CRISTINA-PC ~]$ spark-shell
/usr/bin/hadoop
WARNING: HADOOP_SLAVES has been replaced by HADOOP_WORKERS. Using value of HADOOP_SLAVES.
Spark Command: /usr/lib/jvm/default-runtime/bin/java -cp /opt/apache-spark/conf/:/opt/apache-spark/jars/*:/etc/hadoop/:/usr/lib/hadoop-3.0.0/share/hadoop/common/lib/*:/usr/lib/hadoop-3.0.0/share/hadoop/common/*:/usr/lib/hadoop-3.0.0/share/hadoop/hdfs/:/usr/lib/hadoop-3.0.0/share/hadoop/hdfs/lib/*:/usr/lib/hadoop-3.0.0/share/hadoop/hdfs/*:/usr/lib/hadoop-3.0.0/share/hadoop/mapreduce/*:/usr/lib/hadoop-3.0.0/share/hadoop/yarn/:/usr/lib/hadoop-3.0.0/share/hadoop/yarn/lib/*:/usr/lib/hadoop-3.0.0/share/hadoop/yarn/* -Dscala.usejavacp=true -Xmx1g org.apache.spark.deploy.SparkSubmit --class org.apache.spark.repl.Main --name Spark shell spark-shell
========================================
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).

Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.

Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.
Exception in thread "main" java.lang.NullPointerException
at

...

(same exception as before)

...

[yago@CRISTINA-PC ~]$ /usr/lib/jvm/default-runtime/bin/java -version
java version "9.0.1"
Java(TM) SE Runtime Environment (build 9.0.1+11)
Java HotSpot(TM) 64-Bit Server VM (build 9.0.1+11, mixed mode)

TL;DR Spark supports Java 8 (and does not support Java 9).


Spark, up to and including 2.3.0-SNAPSHOT (built from master today), supports Java 8 only.

With Java 9 on the PATH, you get the very exception you are facing:

$ java -version
java version "9.0.1"
Java(TM) SE Runtime Environment (build 9.0.1+11)
Java HotSpot(TM) 64-Bit Server VM (build 9.0.1+11, mixed mode)

Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.

Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.
Exception in thread "main" java.lang.NullPointerException
    at scala.reflect.internal.SymbolTable.exitingPhase(SymbolTable.scala:256)
    at scala.tools.nsc.interpreter.IMain$Request.x$20$lzycompute(IMain.scala:896)
    at scala.tools.nsc.interpreter.IMain$Request.x$20(IMain.scala:895)
    at scala.tools.nsc.interpreter.IMain$Request.headerPreamble$lzycompute(IMain.scala:895)
    at scala.tools.nsc.interpreter.IMain$Request.headerPreamble(IMain.scala:895)
    at scala.tools.nsc.interpreter.IMain$Request$Wrapper.preamble(IMain.scala:918)
    at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1337)
    at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1336)
    at scala.tools.nsc.util.package$.stringFromWriter(package.scala:64)
    at scala.tools.nsc.interpreter.IMain$CodeAssembler$class.apply(IMain.scala:1336)
    at scala.tools.nsc.interpreter.IMain$Request$Wrapper.apply(IMain.scala:908)
    at scala.tools.nsc.interpreter.IMain$Request.compile$lzycompute(IMain.scala:1002)
    at scala.tools.nsc.interpreter.IMain$Request.compile(IMain.scala:997)
    at scala.tools.nsc.interpreter.IMain.compile(IMain.scala:579)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:567)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
    at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:79)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:79)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SparkILoop.scala:79)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:79)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:79)
    at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:78)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:78)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:78)
    at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:77)
    at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:110)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
    at org.apache.spark.repl.Main$.doMain(Main.scala:76)
    at org.apache.spark.repl.Main$.main(Main.scala:56)
    at org.apache.spark.repl.Main.main(Main.scala)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:878)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

With Java 8 on the PATH, Spark works fine:

$ java -version
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)

$ ./bin/spark-shell
17/12/30 20:15:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/12/30 20:15:12 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
Spark context Web UI available at http://192.168.1.2:4041
Spark context available as 'sc' (master = local[*], app id = local-1514661312813).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0-SNAPSHOT
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152)
Type in expressions to have them evaluated.
Type :help for more information.

scala> spark.version
res0: String = 2.3.0-SNAPSHOT

Since the OP is on Arch Linux and installed the Apache Spark package from AUR, the Sources section of that package lists 7 files customized for the distribution, including spark-env.sh.

spark-env.sh contains a very interesting line that sets JAVA_HOME:

export JAVA_HOME=/usr/lib/jvm/default-runtime

That makes spark-shell pick up Java 9, no matter what the PATH environment variable points to.

PROTIP: You can use the SPARK_PRINT_LAUNCH_COMMAND environment variable to see the exact command (and hence which Java) spark-shell launches, e.g. SPARK_PRINT_LAUNCH_COMMAND=1 spark-shell. You can also review the output of sh -x spark-shell, which is the more Linux-y way of debugging shell scripts (which is what spark-shell is). Both checks are sketched below.
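A minimal sketch of those two checks (the commands come straight from the tip above; only the use of command -v to locate the wrapper script is an addition):

$ SPARK_PRINT_LAUNCH_COMMAND=1 spark-shell       # prints the resolved java binary and classpath before starting the REPL
$ sh -x "$(command -v spark-shell)"              # traces every command the wrapper script runs (plain shell debugging)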

One solution is to make /usr/lib/jvm/default-runtime use Java 8 (not Java 9), but that is... well... left as a home exercise for you. Happy Sparking!
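One possible way to do that, sketched under the assumption that the AUR package's config lives at /opt/apache-spark/conf/spark-env.sh (inferred from the -cp entry in the launch command above) and that a Java 8 JDK is installed at /usr/lib/jvm/java-8-jdk (adjust both paths to your system), is to point JAVA_HOME at Java 8 explicitly:

# In /opt/apache-spark/conf/spark-env.sh, replace the default-runtime line with:
export JAVA_HOME=/usr/lib/jvm/java-8-jdk    # assumed location of an installed JDK 8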

Just to clarify the issue: as you can see in the second edit, Spark is picking up Java 9. spark-env.sh sets JAVA_HOME to:

export JAVA_HOME=/usr/lib/jvm/default-runtime 

To set the default JDK in Arch Linux, check the folder /usr/lib/jvm to see the different JVM distributions installed.

To mark one of them as the default, run:

archlinux-java set java-8-jdk
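To list the installed Java environments and confirm which one is currently the default, the same Arch tooling also offers status and get subcommands:

$ archlinux-java status    # lists installed Java environments and marks the default one
$ archlinux-java get       # prints only the name of the current default environment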

Console output:

[yago@CRISTINA-PC ~]$ ls /usr/lib/jvm/
default/         default-runtime/ java-8-jdk/      java-8-openjdk/  
java-9-jdk/      
[yago@CRISTINA-PC ~]$ archlinux-java set java-8-jdk
This script must be run as root

[yago@CRISTINA-PC ~]$ sudo archlinux-java set java-8-jdk
[sudo] password for yago: 
[yago@CRISTINA-PC ~]$ sudo archlinux-java set java-8-jdk
[yago@CRISTINA-PC ~]$ spark-shell
/usr/bin/hadoop
WARNING: HADOOP_SLAVES has been replaced by HADOOP_WORKERS. Using value of HADOOP_SLAVES.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2017-12-31 10:47:13,050 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark context Web UI available at http://127.0.0.1:4040
Spark context available as 'sc' (master = local[*], app id = local-1514717237307).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 
