Task not serializable while using custom dataframe class in Spark Scala
I am facing a strange issue with Scala/Spark (1.5) and Zeppelin:

If I run the following Scala/Spark code, it runs properly:
// TEST NO PROBLEM SERIALIZATION
val rdd = sc.parallelize(Seq(1, 2, 3))
val testList = List[String]("a", "b")

rdd.map { a =>
  val aa = testList(0)
  None
}
However, after declaring a custom dataframe extension, as proposed here:
// DATAFRAME EXTENSION
import org.apache.spark.sql.DataFrame

object ExtraDataFrameOperations {
  implicit class DFWithExtraOperations(df: DataFrame) {

    // drop several columns
    def drop(colToDrop: Seq[String]): DataFrame = {
      var df_temp = df
      colToDrop.foreach { f =>
        df_temp = df_temp.drop(f) // can be improved with Spark 2.0 (see the sketch after this block)
      }
      df_temp
    }
  }
}
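As the inline comment notes, this loop can be simplified on Spark 2.x, where drop accepts several column names as varargs. A minimal sketch, assuming Spark 2.x (not the Spark 1.5 used in this question), would reduce the method body to a one-liner:

def drop(colToDrop: Seq[String]): DataFrame = df.drop(colToDrop: _*)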
The extension is then used like this:
// READ ALL THE FILES INTO different DF and save into map
import ExtraDataFrameOperations._

val filename = "myInput.csv"
val delimiter = ","
val colToIgnore = Seq("c_9", "c_10")
val inputICFfolder = "hdfs:///group/project/TestSpark/"

val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")       // use the first line of each file as header
  .option("inferSchema", "false") // do not infer data types: all DFs are merged later, with potential null values, so keep strings only
  .option("delimiter", delimiter)
  .option("charset", "UTF-8")
  .load(inputICFfolder + filename)
  .drop(colToIgnore) // call the custom dataframe method
This runs successfully.

Now, if I run the following code again (the same as above):
// TEST NO PROBLEM SERIALIZATION
val rdd = sc.parallelize(Seq(1, 2, 3))
val testList = List[String]("a", "b")

rdd.map { a =>
  val aa = testList(0)
  None
}
I get this error message:
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[8] at parallelize at <console>:32
testList: List[String] = List(a, b)
org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2032)
    at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:314)
    ...
Caused by: java.io.NotSerializableException: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$ExtraDataFrameOperations$
Serialization stack:
    - object not serializable (class: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$ExtraDataFrameOperations$, value: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$ExtraDataFrameOperations$@6c7e70e)
    - field (class: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC, name: ExtraDataFrameOperations$module, type: class $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$ExtraDataFrameOperations$)
    - object (class $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC, $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC@4c6d0802)
    - field (class: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC, name: $iw, type: class $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC)
    ...
I don't understand why this happens; the same code ran without problems before the extension was declared.
Update: trying

@inline val testList = List[String]("a", "b")

did not help.
It looks like Spark tries to serialize the whole scope around testList. Try inlining the data (@inline val testList = List[String]("a", "b")) or use a different object to store the function/data that you pass to the driver, as in the sketch below.
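A minimal sketch of that second suggestion (the Holder name is illustrative, not from the original answer): keeping the data in a standalone serializable object means the closure captures only that object, not the surrounding REPL scope.

object Holder extends Serializable {
  val testList = List[String]("a", "b")
}

rdd.map { a =>
  val aa = Holder.testList(0) // only Holder is captured, not the enclosing scope
  None
}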
Just adding "extends Serializable" worked for me:
import java.util.concurrent.atomic.AtomicReference

import org.apache.avro.generic.GenericRecord
import org.apache.kafka.clients.producer.{Callback, ProducerRecord, RecordMetadata}
import org.apache.spark.sql.Dataset

/**
 * A wrapper around a ProducerRecord Dataset that allows saving it to Kafka.
 *
 * The KafkaProducer is shared by all threads in one executor.
 * Error handling strategy: remember the "last" seen exception and rethrow it to let the task fail.
 */
implicit class DatasetKafkaSink(ds: Dataset[ProducerRecord[String, GenericRecord]]) extends Serializable {

  class ExceptionRegisteringCallback extends Callback {
    private[this] val lastRegisteredException = new AtomicReference[Option[Exception]](None)

    override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit = {
      Option(exception) match {
        case a @ Some(_) => lastRegisteredException.set(a) // (re)-register the exception if the send failed
        case _ =>                                          // do nothing on a successful send
      }
    }

    def rethrowException(): Unit = lastRegisteredException.getAndSet(None).foreach(e => throw e)
  }

  /**
   * Save to Kafka, reusing the KafkaProducer from a singleton holder.
   * Returns control only once all records have actually been sent to Kafka; in case of error,
   * rethrows the "last" seen exception in the same thread so the Spark task can fail.
   */
  def saveToKafka(kafkaProducerConfigs: Map[String, AnyRef]): Unit = {
    ds.foreachPartition { records =>
      val callback = new ExceptionRegisteringCallback
      val producer = KafkaProducerHolder.getInstance(kafkaProducerConfigs)

      records.foreach(record => producer.send(record, callback))
      producer.flush()
      callback.rethrowException()
    }
  }
}
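A hedged usage example, assuming the implicit class is in scope and ds is a Dataset[ProducerRecord[String, GenericRecord]] (the broker address is illustrative; KafkaProducerHolder is the answerer's own singleton, not shown in the answer):

ds.saveToKafka(Map("bootstrap.servers" -> "broker1:9092"))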