
spark scala broadcast variable not working: task not serializable

I have a DataFrame like this:

val temp = sc.parallelize(Seq(Array(43, 53, 266), Array(69, 160, 166), Array(266)))
  .toDF("value")

I want to select the rows whose arrays intersect with the following array:

val goodValue = Array(231, 266)
val broadcastGood = sc.broadcast(goodValue)
val containGood = udf((array: scala.collection.mutable.WrappedArray[Int]) =>
  broadcastGood.value.intersect(array).size > 0)

And when I tried this udf,

display(temp.filter(containGood(col("value"))))

I got the infamous error: Task not Serializable

The weird thing is that this used to work fine for me. I don't know what has changed. I would really appreciate some help.

Edit: Actually, the code above should usually work on its own, and the broadcast variable is not even necessary here. Some of you mentioned that one of the values is defined inside a Scala class that is not Serializable; I agree that this is likely the problem, but I am not sure how to solve it.
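For reference, a minimal sketch of the same filter with a plain local array instead of a broadcast variable (using the temp DataFrame and column name from the snippet above, and assuming a Spark 2.x notebook where the toDF implicits are in scope); it works as long as nothing non-serializable is captured together with the array:

import org.apache.spark.sql.functions.{col, udf}

// Plain local array; the udf closure only needs to capture this value.
val goodValue = Array(231, 266)
val containGood = udf((xs: Seq[Int]) => goodValue.intersect(xs).nonEmpty)

// This only fails with "Task not serializable" if the enclosing scope that is
// serialized along with the closure also holds a non-serializable object.
temp.filter(containGood(col("value"))).show()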

Here is the background information: I am using Latent Dirichlet Allocation (LDA) to perform topic analysis on a corpus:

val ldaModel = lda.fit(dfVectorizer)

dfVectorizer is the vectorized version of my original dataset. With this LDA model, I generate the following DataFrame:

val topic = ldaModel.describeTopics(50)  // three columns: [topic: int, termIndices: array<int>, termWeights: array<double>]
val interestTerms = Seq(1, 2, 3, 4, 5, 6, 7)
val interestUDF = udf((terms: Seq[Int]) => terms.filter(r => interestTerms.contains(r)))
val topicTmp = topic.withColumn("InterestTerm", interestUDF(col("termIndices")))
val sumVec = udf((terms: Seq[Int]) => terms.sum)
val topicDF = topicTmp.select('topic, sumVec('InterestTerm).as('order)).sort('order.desc)

So the final DataFrame "topicDF" looks like this:

Topic | Order
------+------
111   | 7
69    | 7
248   | 5
...

However, when I try to perform a simple filter like this:

display(topicDF.filter("order>3"))

it gives me the "Task not serializable" error. The error message states very clearly that it is caused by:

java.io.NotSerializableException: org.apache.spark.mllib.clustering.DistributedLDAModel.
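One workaround that seems to side-step the closure problem entirely is to rebuild topicDF with built-in DataFrame operations instead of Scala udfs (a sketch, not verified on this exact cluster; explode, isin and groupBy exist on any Spark 2.x). Since no Scala closure is shipped to the executors, the driver scope that also holds the LDA model never needs to be serialized:

import org.apache.spark.sql.functions._

// Keep only the interesting term indices and sum them per topic with built-in
// functions, so no closure (and nothing it captures) has to be serialized.
// Note: topics with no matching terms are dropped here, whereas the udf version
// keeps them with order 0.
val topicDF = topic
  .select(col("topic"), explode(col("termIndices")).as("termIndex"))
  .where(col("termIndex").isin(1, 2, 3, 4, 5, 6, 7))
  .groupBy("topic")
  .agg(sum("termIndex").as("order"))
  .sort(col("order").desc)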

The full stack trace looks like this:

org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2135)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1.apply(RDD.scala:841)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1.apply(RDD.scala:840)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.mapPartitionsWithIndex(RDD.scala:840)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:371)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.TakeOrderedAndProjectExec.executeCollect(limit.scala:133)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2807)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2132)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2132)
at org.apache.spark.sql.Dataset$$anonfun$60.apply(Dataset.scala:2791)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:87)
at org.apache.spark.sql.execution.SQLExecution$.withFileAccessAudit(SQLExecution.scala:53)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:70)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2790)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2132)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2345)
at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation0(OutputAggregator.scala:81)
at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation(OutputAggregator.scala:42)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$getResultBuffer$1$$anonfun$apply$1.apply(ScalaDriverLocal.scala:263)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$getResultBuffer$1$$anonfun$apply$1.apply(ScalaDriverLocal.scala:254)
at scala.Option.map(Option.scala:145)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$getResultBuffer$1.apply(ScalaDriverLocal.scala:254)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$getResultBuffer$1.apply(ScalaDriverLocal.scala:228)
at scala.Option.map(Option.scala:145)
at com.databricks.backend.daemon.driver.ScalaDriverLocal.getResultBuffer(ScalaDriverLocal.scala:228)
at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:209)
at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$2.apply(DriverLocal.scala:230)
at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$2.apply(DriverLocal.scala:211)
at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:173)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:168)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:39)
at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:206)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:39)
at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:211)
at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:589)
at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:589)
at scala.util.Try$.apply(Try.scala:161)
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:584)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:488)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:391)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:348)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:215)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.NotSerializableException: org.apache.spark.mllib.clustering.DistributedLDAModel
Serialization stack:
- object not serializable (class: 
org.apache.spark.mllib.clustering.DistributedLDAModel, value: 
org.apache.spark.mllib.clustering.DistributedLDAModel@ea5b214)
- writeObject data (class: scala.collection.mutable.HashMap)
- object (class scala.collection.mutable.HashMap, Map(lda_1da3e45afeaa__subsamplingRate -> 0.05, lda_1da3e45afeaa__k -> 320, lda_1da3e45afeaa__keepLastCheckpoint -> true, lda_1da3e45afeaa__maxIter -> 100, lda_1da3e45afeaa__optimizer -> em, lda_1da3e45afeaa__optimizeDocConcentration -> true, lda_1da3e45afeaa__learningDecay -> 0.51, lda_1da3e45afeaa__topicConcentration -> 1.1, lda_1da3e45afeaa__learningOffset -> 1024.0, lda_1da3e45afeaa__checkpointInterval -> 10, lda_1da3e45afeaa__featuresCol -> features, lda_1da3e45afeaa__seed -> 12345, lda_1da3e45afeaa__docConcentration -> [D@31af2961, lda_1da3e45afeaa__topicDistributionCol -> topicDistribution))
- field (class: org.apache.spark.ml.param.ParamMap, name: org$apache$spark$ml$param$ParamMap$$map, type: interface scala.collection.mutable.Map)

Thank you so much!

Most probably you are using this inside a class, and when that class is involved in a map (or any other closure), the class itself has to be serialized along with the task. I am assuming you have not made your mapper class serializable.

class XYZ extends Serializable

Try this with the class in which you have defined your udf.
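A minimal sketch of what this answer suggests (the class name and fields here are hypothetical):

import org.apache.spark.sql.functions.udf

// Hypothetical example: if the udf is defined inside a class, the whole instance
// is serialized with the closure, so the class must be Serializable, and every
// field it holds must be serializable too.
class TopicFilters extends Serializable {
  private val interestTerms = Set(1, 2, 3, 4, 5, 6, 7)
  val interestUDF = udf((terms: Seq[Int]) => terms.filter(interestTerms.contains))
}

Note that in this particular case the enclosing scope also holds the DistributedLDAModel, so marking the class Serializable is not enough on its own; keeping the model out of the scope the udf closes over is the safer fix.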
