
Spark tasks serialization and closures

When a lambda passed to a Spark RDD operation refers to objects outside its own scope, it includes the necessary context so that a serialized task can be created for distributed execution. In the simple example below, why does Spark decide to serialize the whole OuterClass instance rather than just the multiplier? I suspect that multiplier is actually a Scala getter method under the hood, so the closure has to carry a reference to the enclosing class. Declaring OuterClass as Serializable does work, but it introduces unnecessary constraints. I would really appreciate a way to make this work without declaring OuterClass serializable.

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object ClosureTest {
  def main(args: Array[String]): Unit = {
    val sc = SparkContext.getOrCreate(new SparkConf().setMaster("local[2]").setAppName("test"))
    println(new OuterClass(10).sparkSumProd(sc.parallelize(Seq(1, 2, 3))))
  }
  class OuterClass(multiplier: Int) {
    def sparkSumProd(data: RDD[Int]): Double = {
      data.map {
        v => v * multiplier   // multiplier is read through the enclosing OuterClass instance
      }.sum()
    }
  }
}
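To illustrate the suspicion above, here is a simplified, self-contained sketch (with hypothetical names) of what the compiler effectively generates for the lambda v => v * multiplier: the anonymous function keeps a reference to the enclosing instance, which shows up as the $outer field in the SerializationDebugger output below, and reads multiplier through it, so serializing the function drags in the whole outer object.

// Simplified sketch, not the actual generated class
// (the real anonymous class is ClosureTest$OuterClass$$anonfun$sparkSumProd$1).
class Outer(val multiplier: Int) {                  // stand-in for ClosureTest.OuterClass
  class MultiplyFunction extends (Int => Int) with Serializable {
    private val outer: Outer = Outer.this           // corresponds to the $outer field in the stack trace
    def apply(v: Int): Int = v * outer.multiplier   // getter call on the captured enclosing instance
  }
}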

Here is the output from Spark's SerializationDebugger when running the original example:

Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2056)
    at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:366)
    at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:365)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
    at org.apache.spark.rdd.RDD.map(RDD.scala:365)
    at ClosureTest$OuterClass.sparkSumProd(ClosureTest.scala:14)
    at ClosureTest$.main(ClosureTest.scala:10)
    at ClosureTest.main(ClosureTest.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.io.NotSerializableException: ClosureTest$OuterClass
Serialization stack:
    - object not serializable (class: ClosureTest$OuterClass, value: ClosureTest$OuterClass@36a7abe1)
    - field (class: ClosureTest$OuterClass$$anonfun$sparkSumProd$1, name: $outer, type: class ClosureTest$OuterClass)
    - object (class ClosureTest$OuterClass$$anonfun$sparkSumProd$1, <function1>)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
    ... 17 more

Simply copying the class-level value into a local variable makes it work:

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object ClosureTest {
  def main(args: Array[String]): Unit = {
    val sc = SparkContext.getOrCreate(new SparkConf().setMaster("local[2]").setAppName("test"))
    println(new OuterClass(10).sparkSumProd(sc.parallelize(Seq(1, 2, 3))))
  }
  class OuterClass(multiplier: Int) {
    def sparkSumProd(data: RDD[Int]): Double = {
      val m = multiplier   // local copy: the closure now captures only m, not the OuterClass instance
      data.map {
        v => v * m
      }.sum()
    }
  }
}
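For completeness, the alternative mentioned in the question, declaring OuterClass as Serializable, would look roughly like the sketch below (same imports assumed). It avoids the exception, but at the cost of shipping the entire instance with every task.

import org.apache.spark.rdd.RDD

class OuterClass(multiplier: Int) extends Serializable {
  def sparkSumProd(data: RDD[Int]): Double = {
    // Works because the captured enclosing instance is now serializable,
    // but the whole OuterClass object travels with every task.
    data.map(v => v * multiplier).sum()
  }
}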

