
Task not serializable with aggregateByKey

Environment: Spark 1.6.0, using Scala. The program compiles with sbt, but when I submit it I get an error. The full error is as follows:

17/01/21 18:32:24 INFO net.NetworkTopology: Adding a new node: /YH11070029/10.39.0.213:50010
17/01/21 18:32:24 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.39.0.44:41961 with 2.7 GB RAM, BlockManagerId(349, 10.39.0.44, 41961)
17/01/21 18:32:24 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.39.2.178:48591 with 2.7 GB RAM, BlockManagerId(518, 10.39.2.178, 48591)
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$combineByKeyWithClassTag$1.apply(PairRDDFunctions.scala:93)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$combineByKeyWithClassTag$1.apply(PairRDDFunctions.scala:82)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.PairRDDFunctions.combineByKeyWithClassTag(PairRDDFunctions.scala:82)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$aggregateByKey$1.apply(PairRDDFunctions.scala:177)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$aggregateByKey$1.apply(PairRDDFunctions.scala:166)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.PairRDDFunctions.aggregateByKey(PairRDDFunctions.scala:166)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$aggregateByKey$3.apply(PairRDDFunctions.scala:206)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$aggregateByKey$3.apply(PairRDDFunctions.scala:206)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.PairRDDFunctions.aggregateByKey(PairRDDFunctions.scala:205)
    at com.sina.adalgo.feature.ETL$$anonfun$13.apply(ETL.scala:190)
    at com.sina.adalgo.feature.ETL$$anonfun$13.apply(ETL.scala:102)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)

The purpose of the code is to count the occurrence frequency of each categorical feature. The main code is as follows:

import scala.collection.mutable.HashMap
import scala.collection.parallel.mutable.ParArray

object ETL extends Serializable {
    ... ...

    // (feature index, feature value) pairs for every categorical feature
    val cateList = featureData.map {
        case (psid: String, label: String, cate_features: ParArray[String], media_features: String) =>
            val pair_feature = cate_features.zipWithIndex.map(x => (x._2, x._1))
            pair_feature
    }.flatMap(_.toList)

    def seqop(m: HashMap[String, Int], s: String): HashMap[String, Int] = {
        var x = m.getOrElse(s, 0)
        x += 1
        m += s -> x
        m
    }

    def combop(m: HashMap[String, Int], n: HashMap[String, Int]): HashMap[String, Int] = {
        for (k <- n) {
            var x = m.getOrElse(k._1, 0)
            x += k._2
            m += k._1 -> x
        }
        m
    }

    val hash = HashMap[String, Int]()
    // result is (i, HashMap[String, Int]), where i corresponds to a categorical feature index
    val feaFreq = cateList.aggregateByKey(hash)(seqop, combop)

The object already extends Serializable, so why does this happen? Can you help me?

In my experience, this problem typically occurs in Spark when we use a closure as an aggregation function that unintentionally closes over some unneeded objects, and/or sometimes simply when the function is defined inside the main class of our Spark driver code.

I suspect this is the case here, since your stack trace names org.apache.spark.util.ClosureCleaner as the top-level culprit.

This is problematic because, in that case, when Spark tries to ship the functions to the workers so they can do the actual aggregation, it ends up serializing more than you actually intended: the function itself plus its enclosing class.
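As a rough illustration of that capture (a minimal sketch, not your actual code; the Driver class and its methods are invented for the example):

class Driver(sc: org.apache.spark.SparkContext) { // SparkContext is not serializable
    def addOne(x: Int): Int = x + 1

    def run(): Long = {
        val rdd = sc.parallelize(1 to 10)
        // Passing the method addOne eta-expands it into a closure that keeps a
        // reference to `this`, so Spark has to serialize the whole Driver
        // instance (including sc) and fails with "Task not serializable".
        rdd.map(addOne).count()
    }
}

Defining addOne on a standalone serializable object instead avoids dragging the Driver instance into the closure, which is exactly the workaround suggested below.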

See also this post by Erik Erlandson, which nicely explains some corner cases of closure serialization, as well as the Spark 1.6 notes on closures.

A quick fix might be to move the definitions of the functions you use in aggregateByKey into a standalone object, completely independent from the rest of your code.
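A minimal sketch of that workaround, reusing the HashMap-based seqop/combop from the question (the object name FreqAggregators is invented here):

import scala.collection.mutable.HashMap

object FreqAggregators extends Serializable {
    // add one occurrence of the feature value s
    def seqop(m: HashMap[String, Int], s: String): HashMap[String, Int] = {
        m += s -> (m.getOrElse(s, 0) + 1)
        m
    }

    // merge the per-partition count maps
    def combop(m: HashMap[String, Int], n: HashMap[String, Int]): HashMap[String, Int] = {
        for ((k, v) <- n) m += k -> (m.getOrElse(k, 0) + v)
        m
    }
}

// in the driver code:
// val feaFreq = cateList.aggregateByKey(HashMap[String, Int]())(FreqAggregators.seqop, FreqAggregators.combop)

Because FreqAggregators holds no reference to the driver class or to any non-serializable member, only this small object ends up in the serialized closure.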
