
Spark task not serializable

I have already tried every solution to this problem that I could find on StackOverflow, but I still cannot solve it. I have a "MainObj" object that instantiates a "Recommendation" object. Whenever I call the "recommendationProducts" method I get the error. Here is the code of the method:

def recommendationProducts(item: Int): Unit = {

val aMatrix = new DoubleMatrix(Array(1.0, 2.0, 3.0))

def cosineSimilarity(vec1: DoubleMatrix, vec2: DoubleMatrix): Double = {
  vec1.dot(vec2) / (vec1.norm2() * vec2.norm2())
}

val itemFactor = model.productFeatures.lookup(item).head
val itemVector = new DoubleMatrix(itemFactor)

//Here is where I get the error:
val sims = model.productFeatures.map { case (id, factor) =>
  val factorVector = new DoubleMatrix(factor)
  val sim = cosineSimilarity(factorVector, itemVector)
  (id, sim)
}

val sortedSims = sims.top(10)(Ordering.by[(Int, Double), Double] {
  case (id, similarity) => similarity
})

println("\nTop 10 products:")
sortedSims.map(x => (x._1, x._2)).foreach(println)
}

Here is the error:

Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2094)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:370)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:369)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.map(RDD.scala:369)
at RecommendationObj.recommendationProducts(RecommendationObj.scala:269)
at MainObj$.analisiIUNGO(MainObj.scala:257)
at MainObj$.menu(MainObj.scala:54)
at MainObj$.main(MainObj.scala:37)
at MainObj.main(MainObj.scala)
Caused by: java.io.NotSerializableException: org.apache.spark.SparkContext
Serialization stack:
- object not serializable (class: org.apache.spark.SparkContext, value: org.apache.spark.SparkContext@7c2312fa)
- field (class: RecommendationObj, name: sc, type: class org.apache.spark.SparkContext)
- object (class MainObj$$anon$1, MainObj$$anon$1@615bad16)
- field (class: RecommendationObj$$anonfun$37, name: $outer, type: class RecommendationObj)
- object (class RecommendationObj$$anonfun$37, <function1>)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
... 14 more

I have already tried: 1) adding "extends Serializable" (Scala) to my class; 2) adding "extends java.io.Serializable" to my class; 3) marking some parts with "@transient"; 4) taking the model (and the other features) inside this class (at the moment I get them from another object and pass them to the class as parameters).

How can I solve this? I'm going crazy! Thanks in advance!

The key is here:

 field (class: RecommendationObj, name: sc, type: class org.apache.spark.SparkContext)

So you have a field named sc of type SparkContext. Spark wants to serialize the enclosing class, so it also tries to serialize all of its fields.
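As an illustration, here is a minimal hypothetical sketch (the names are invented, this is not the asker's code) of how the field gets dragged in: the function passed to map calls a method of the enclosing instance, so Spark must serialize this, and this carries the non-serializable SparkContext field with it.

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

class RecommendationLike(val sc: SparkContext) {

  // stand-in for a helper such as cosineSimilarity
  def score(x: Double): Double = x * x

  def run(rdd: RDD[Double]): Unit = {
    // score(x) is really this.score(x): the whole object, including the
    // sc field, ends up inside the closure -> "Task not serializable"
    rdd.map(x => score(x)).foreach(println)
  }
}

This is also why the stack trace shows an $outer field pointing at RecommendationObj even though cosineSimilarity looks local: in Scala 2.11 a def nested inside a method is typically lifted to a method of the enclosing class, so a closure that calls it still keeps a reference to the outer object.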

You should either:

  • use the @transient annotation on the field, check whether it is null, and recreate it when needed, or
  • not keep the SparkContext in a field at all, but pass it as a parameter of the method instead. Keep in mind, however, that you must never use the SparkContext inside a closure passed to map, flatMap, etc. A sketch of both suggestions follows below.
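Here is a minimal sketch combining both suggestions (it assumes jblas DoubleMatrix and uses hypothetical class and parameter names, not the asker's exact code): the SparkContext field is marked @transient so it is never shipped to executors, and the map closure only captures local, serializable values plus a standalone helper object, so this is not captured at all.

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.jblas.DoubleMatrix

// standalone helper object: referencing it does not pull any enclosing
// instance into the closure
object Similarity {
  def cosine(vec1: DoubleMatrix, vec2: DoubleMatrix): Double =
    vec1.dot(vec2) / (vec1.norm2() * vec2.norm2())
}

class Recommendation(@transient val sc: SparkContext) extends Serializable {

  // productFeatures is passed in as a parameter instead of being read from
  // a field, so the closure below never needs to touch `this`
  def recommendationProducts(features: RDD[(Int, Array[Double])], item: Int): Unit = {
    // lookup runs on the driver; itemVector is a plain local value
    // (jblas DoubleMatrix is java-serializable, so it can be captured)
    val itemVector = new DoubleMatrix(features.lookup(item).head)

    // the closure captures only itemVector and the Similarity object;
    // sc stays on the driver and is never serialized
    val sims = features.map { case (id, factor) =>
      (id, Similarity.cosine(new DoubleMatrix(factor), itemVector))
    }

    val sortedSims = sims.top(10)(Ordering.by[(Int, Double), Double] {
      case (_, similarity) => similarity
    })

    println("\nTop 10 products:")
    sortedSims.foreach(println)
  }
}

With @transient the field is simply skipped during serialization and comes back as null if the object is ever deserialized on an executor, which is fine as long as the SparkContext is only ever used on the driver, as the second bullet requires.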


