
Scala (Zeppelin): Task not serializable

I am trying to get data from Twitter via streaming. I receive the data in the `twt` variable.

val ssc = new StreamingContext(sc, Seconds(60))
val tweets = TwitterUtils.createStream(ssc, None, Array("#hadoop", "#bigdata", "#spark", "#hortonworks", "#HDP"))
//tweets.saveAsObjectFiles("/models/Twitter_files_", ".txt")
case class Tweet(createdAt: Long, text: String, screenName: String)

val twt = tweets.window(Seconds(60))
//twt.foreach(status => println(status.text()))

import sqlContext.implicits._

val temp = twt.map(status =>
  Tweet(status.getCreatedAt().getTime() / 1000, status.getText(), status.getUser().getScreenName())
).foreachRDD(rdd =>
  rdd.toDF().registerTempTable("tweets")
)
twt.print

ssc.start()

Here is the error:

  org.apache.spark.SparkException: Task not serializable
        at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2032)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:528)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:528)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
        at org.apache.spark.SparkContext.withScope(SparkContext.scala:709)
        at org.apache.spark.streaming.StreamingContext.withScope(StreamingContext.scala:266)

Caused by: java.io.NotSerializableException: org.apache.spark.streaming.StreamingContext

Your `Tweet` class is not `Serializable`, so make it extend `Serializable`.
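A sketch of a fix for the code above, assuming Spark 1.5+ in Zeppelin (the paragraph split and the use of `SQLContext.getOrCreate` are my suggestion, not part of the original answer). Defining the case class in its own paragraph keeps the generated class from capturing interpreter state, and looking up the `SQLContext` inside `foreachRDD` avoids closing over the outer `sqlContext` (and with it the `StreamingContext`):

```scala
// Paragraph 1: top-level definition, so the generated class does not
// capture the interpreter objects that hold the StreamingContext.
case class Tweet(createdAt: Long, text: String, screenName: String)

// Paragraph 2: the streaming pipeline.
val twt = tweets.window(Seconds(60))

twt.map(status =>
  Tweet(status.getCreatedAt.getTime / 1000, status.getText, status.getUser.getScreenName)
).foreachRDD { rdd =>
  // Look the SQLContext up per batch on the driver instead of
  // capturing an outer reference in the serialized closure.
  val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
  import sqlContext.implicits._
  rdd.toDF().registerTempTable("tweets")
}

ssc.start()
```

Nothing inside the `map` closure references `ssc` or `sqlContext` anymore, so the closure cleaner has nothing non-serializable to complain about.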

This is a common Spark problem, and I believe that since Spark 1.3 the stack trace tells you exactly what it tried to serialize.
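The mechanism can be reproduced without Spark. Case classes are `Serializable` out of the box, but a closure that captures a non-serializable object fails with the same `NotSerializableException` that Spark's `ClosureCleaner` surfaces (a minimal sketch; `FakeStreamingContext` is a made-up stand-in for the real `StreamingContext`):

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// Case classes extend java.io.Serializable automatically,
// so Tweet itself is not the problem.
case class Tweet(createdAt: Long, text: String, screenName: String)

// Stand-in for a non-serializable object such as StreamingContext.
class FakeStreamingContext

// Returns true if Java serialization of `obj` succeeds.
def serialize(obj: AnyRef): Boolean =
  try {
    val out = new ObjectOutputStream(new ByteArrayOutputStream())
    out.writeObject(obj)
    true
  } catch { case _: NotSerializableException => false }

println(serialize(Tweet(0L, "hi", "user")))  // true: the case class serializes fine

// A closure capturing the non-serializable context cannot be serialized,
// which is what Spark reports when it ships the closure to executors.
val ssc = new FakeStreamingContext
val badClosure: String => String = s => { ssc.toString; s }
println(serialize(badClosure))               // false
```

This matches the `Caused by` line above: it is the captured `StreamingContext`, not `Tweet`, that fails to serialize.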

