
Apache Spark Throwing Deserialization Error when using take method on RDD

I am new to Spark, using Scala 2.12.8 with Spark 2.4.0. I am trying to use the random forest classifier in Spark MLlib. I can build and train the classifier, and it predicts correctly if I use the first() function on the resulting RDD, but if I try to use the take(n) function I get a fairly long, ugly stack trace. Does anyone know what I am doing wrong? The error occurs on the ".take(3)" line. I understand that take is the first effective action I perform on the RDD, so if someone could explain why it fails and how to fix it, I would be very grateful.

import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.RandomForest
import org.apache.spark.mllib.tree.model.RandomForestModel
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

object ItsABreeze {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession
      .builder()
      .appName("test")
      .getOrCreate()

    //Do stuff to file
    val data: RDD[LabeledPoint] = MLUtils.loadLibSVMFile(spark.sparkContext, "file.svm")

    // Split the data into training and test sets (30% held out for testing)
    val splits: Array[RDD[LabeledPoint]] = data.randomSplit(Array(0.7, 0.3))
    val (trainingData, testData) = (splits(0), splits(1))

    // Train a RandomForest model.
    // Empty categoricalFeaturesInfo indicates all features are continuous
    val numClasses = 4
    val categoricalFeaturesInfo = Map[Int, Int]()
    val numTrees = 3
    val featureSubsetStrategy = "auto"
    val impurity = "gini"
    val maxDepth = 5
    val maxBins = 32

    val model: RandomForestModel = RandomForest.trainClassifier(
      trainingData,
      numClasses,
      categoricalFeaturesInfo,
      numTrees,
      featureSubsetStrategy,
      impurity,
      maxDepth,
      maxBins
    )

    testData
      .map((point: LabeledPoint) => model.predict(point.features))
      .take(3)
      .foreach(println)

    spark.stop()
  }
}

The top of the stack trace follows:

java.io.IOException: unexpected exception type
    at java.io.ObjectStreamClass.throwMiscException(ObjectStreamClass.java:1736)
    at java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1266)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2078)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:83)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.lang.invoke.SerializedLambda.readResolve(SerializedLambda.java:230)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1260)
    ... 25 more
Caused by: java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: scala/runtime/LambdaDeserialize
    at ItsABreeze$.$deserializeLambda$(ItsABreeze.scala)
    ... 35 more
Caused by: java.lang.NoClassDefFoundError: scala/runtime/LambdaDeserialize
    ... 36 more
Caused by: java.lang.ClassNotFoundException: scala.runtime.LambdaDeserialize
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

The code I am trying to run is a slightly modified version of the classification example on this page of the Spark Machine Learning Library documentation.
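For reference, that documentation example evaluates the trained model by pairing each test label with its prediction. A short sketch of the pattern (same MLlib API as the code above; variable names are mine) exercises the same closure-serialization path that the .take(3) call does:

// Pair each test point's true label with the model's prediction,
// then compute the fraction of misclassified points.
val labelAndPreds = testData.map { point =>
  (point.label, model.predict(point.features))
}
val testErr = labelAndPreds.filter { case (label, pred) => label != pred }.count().toDouble / testData.count()
println(s"Test Error = $testErr")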

Both commenters on my original question were correct: I changed the Scala version I was using from 2.12.8 to 2.11.12, reverted Spark to 2.2.1, and the code ran as-is.
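The stack trace shows why the version change matters: code compiled with Scala 2.12 deserializes lambdas through the generated $deserializeLambda$ method, which bootstraps via scala.runtime.LambdaDeserialize, and that class does not exist on a Scala 2.11 classpath (for example, a Spark distribution built against 2.11, which was the default download for Spark 2.4.0). A minimal build.sbt sketch (project name and version are hypothetical) pinning the combination that worked:

// Hypothetical minimal build.sbt for the working combination reported above:
// Scala 2.11.12 with Spark 2.2.1. The Spark artifacts are "provided" because
// the cluster supplies them at run time.
name := "its-a-breeze"
version := "0.1"
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.2.1" % "provided",
  "org.apache.spark" %% "spark-mllib" % "2.2.1" % "provided"
)

With %%, sbt resolves the _2.11 builds of the Spark artifacts, so the compiled jar and the Spark runtime agree on the Scala binary version, which is exactly what the NoClassDefFoundError above was complaining about.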

As a follow-up question for anyone viewing this who is qualified to answer it: Spark 2.4.0 claims new, experimental support for Scala 2.12.x. Are there many known issues with the 2.12.x support?
