
IllegalArgumentException: 'Unsupported class file major version 55'

I was doing the assignment for "Advanced Machine Learning and Signal Processing" on Coursera when I encountered this "Py4JJavaError". This is the first assignment of the course. It is meant to be done in IBM Watson Studio, but doing it there was too complex, so I did it in Google Colab instead. Here is my code:

from IPython.display import Markdown, display
def printmd(string):
   display(Markdown('# <span style="color:red">'+string+'</span>'))

if ('sc' in locals() or 'sc' in globals()):
   printmd('<<<<<!!!!! It seems that you are running in IBM Watson Studio!>>>>>')

!pip install pyspark==2.4.5
try:
    from pyspark import SparkContext, SparkConf
    from pyspark.sql import SparkSession
except ImportError as e:
    printmd('<<<<<!!!!! Please restart your kernel after installing Apache Spark !!!!!>>>>>')


sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession \
           .builder \
           .getOrCreate()

df=spark.read.load('a2.parquet')

df.createOrReplaceTempView("df")
spark.sql("SELECT * from df").show()

The error is:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/pyspark/sql/utils.py in deco(*a, **kw)
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:

4 frames
Py4JJavaError: An error occurred while calling o530.load.
: java.lang.IllegalArgumentException: Unsupported class file major version 55
    at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:166)
    at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:148)
    at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:136)
    at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:237)
    at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:49)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:517)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:500)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:134)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:500)
    at org.apache.xbean.asm6.ClassReader.readCode(ClassReader.java:2175)
    at org.apache.xbean.asm6.ClassReader.readMethod(ClassReader.java:1238)
    at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:631)
    at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:355)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:307)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:306)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:306)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2100)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.mergeSchemasInParallel(ParquetFileFormat.scala:633)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.inferSchema(ParquetFileFormat.scala:241)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$6.apply(DataSource.scala:180)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$6.apply(DataSource.scala:180)
    at scala.Option.orElse(Option.scala:289)
    at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:179)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:373)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:834)


During handling of the above exception, another exception occurred:

IllegalArgumentException                  Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/pyspark/sql/utils.py in deco(*a, **kw)
     77                 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
     78             if s.startswith('java.lang.IllegalArgumentException: '):
---> 79                 raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
     80             raise
     81     return deco

IllegalArgumentException: 'Unsupported class file major version 55'


That is because the runtime version of your Java is 11.
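You can confirm this from inside the notebook. Note that `java -version` writes its report to stderr, not stdout; this is a quick check assuming `java` is on the PATH:

```python
import shutil
import subprocess

# "java -version" prints its report to stderr, not stdout.
# On a stock Colab image this typically shows OpenJDK 11.
if shutil.which("java"):
    report = subprocess.run(["java", "-version"],
                            capture_output=True, text=True).stderr
else:
    report = "java not found on PATH"
print(report)
```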

Spark runs on Java 8, Python 2.7+/3.4+ and R 3.1+. For the Scala API, Spark 2.4.4 uses Scala 2.12. You will need to use a compatible Scala version (2.12.x).
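The "major version 55" in the message maps directly to a Java release: for Java 5 and later, class-file major version N corresponds to Java release N − 44, so 52 is Java 8 and 55 is Java 11:

```python
def java_release(major_version: int) -> int:
    """Class-file major version N was emitted by javac for Java N - 44
    (this simple offset holds for Java 5 and later)."""
    return major_version - 44

print(java_release(52))  # 8  -> Java 8, the release Spark 2.4.x supports
print(java_release(55))  # 11 -> Java 11, the default JDK on Colab
```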

Try installing Java 8 and pointing your JAVA_HOME to the newly installed Java.
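On Colab (Ubuntu-based) that boils down to installing OpenJDK 8 and exporting `JAVA_HOME` before the `SparkContext` is created. A minimal sketch, assuming the standard Ubuntu install path; the `apt-get` line is what you would run in a Colab cell:

```python
import os

# In a Colab cell, first install OpenJDK 8 next to the default JDK:
#   !apt-get install -y openjdk-8-jdk-headless -qq > /dev/null

# Then point Spark's JVM at Java 8 *before* creating the SparkContext.
# This path is the standard location on Ubuntu-based Colab images.
JAVA8_HOME = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["JAVA_HOME"] = JAVA8_HOME
os.environ["PATH"] = os.path.join(JAVA8_HOME, "bin") + os.pathsep + os.environ["PATH"]
```

If a SparkContext has already been started against Java 11, restart the runtime first so the new `JAVA_HOME` takes effect.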


