How to fix a "Task not serializable" exception in Spark Streaming

I want to use Spark Streaming to summarize internet logs. I have already converted each log line into a Map. The error happens during the computation step.

I set the Spark serialization config to Avro, but that doesn't work.
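
For context, the serializer is configured on the SparkConf. Below is a minimal sketch, using Kryo as a stand-in since the exact Avro-related setting is not reproduced here. Whichever value is chosen, the functions passed to map are serialized by Spark's built-in Java closure serializer, so changing the data serializer alone cannot make a captured SparkContext serializable.

import org.apache.spark.SparkConf

// Sketch only: KryoSerializer stands in for whatever serializer was actually used.
val conf = new SparkConf()
  .setAppName("flux-demo") // illustrative app name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
// Note: task closures are serialized with Java serialization regardless of this setting.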

Here is the code:

...
val sc = new SparkContext(conf)
...
val lines = kafkaStream.map(_._2)
  .map { _.split("\\|") }
  .map { arr =>
    Map(
      ...
    )
  }

lines.print()  // this works

lines.map { clearMap =>  // the exception points to this line
    ...
    val filter = new RowFilter(CompareOp.EQUAL, new RegexStringComparator("^\\d+_" + uvid + "_.*$"))

    val r = HBaseUtils.queryFromHBase(sc, "flux", zerotime.getBytes, nowtime.getBytes, filter)
    val uv = if (r.count() == 0) 1 else 0

    val sscount = clearMap("sscount")
    val vv = if (sscount == "0") 1 else 0

    val cip = clearMap("cip")
    val filter2 = new RowFilter(CompareOp.EQUAL, new RegexStringComparator("^\\d+_\\d+_\\d+_" + cip + "_.*$"))

    val r2 = HBaseUtils.queryFromHBase(sc, "flux", zerotime.getBytes, nowtime.getBytes, filter2)
    val newip = if (r2.count() == 0) 1 else 0

    val filter3 = new RowFilter(CompareOp.EQUAL, new RegexStringComparator("^\\d+_" + uvid + "_.*$"))
    val r3 = HBaseUtils.queryFromHBase(sc, "flux", null, nowtime.getBytes, filter3)
    val newcust = if (r3.count() == 0) 1 else 0

    (nowtime, pv, uv, vv, newip, newcust)
  }
...

Here is the exception message:

Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2056)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:546)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:546)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.SparkContext.withScope(SparkContext.scala:679)
    at org.apache.spark.streaming.StreamingContext.withScope(StreamingContext.scala:264)
    at org.apache.spark.streaming.dstream.DStream.map(DStream.scala:545)
    at cn.tedu.flux.fluxdriver$.main(fluxdriver.scala:73)
    at cn.tedu.flux.fluxdriver.main(fluxdriver.scala)
Caused by: java.io.NotSerializableException: org.apache.spark.SparkContext
Serialization stack:
    - object not serializable (class: org.apache.spark.SparkContext, value: org.apache.spark.SparkContext@3fc08eec)
    - field (class: cn.tedu.flux.fluxdriver$$anonfun$main$2, name: sc$1, type: class org.apache.spark.SparkContext)
    - object (class cn.tedu.flux.fluxdriver$$anonfun$main$2, <function1>)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
    ... 12 more

I have solved this problem. When the SparkContext is defined as a local value inside the function, it gets captured by the closure and cannot be serialized. So I tried defining it like this instead:

object driver {

  var sc: SparkContext = null

  def main(args: Array[String]): Unit = {
    sc = new SparkContext()
    ...
  }
}

And it works!

Before, it looked like this:

object driver {

  def main(args: Array[String]): Unit = {

    val sc = new SparkContext()

    ...
  }
}
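
The serialization stack above shows what changed: in the original version, the anonymous function created by lines.map captured the local sc as the field sc$1, and SparkContext is not serializable. With sc held as a member of the top-level object, there is no local SparkContext left for the closure to capture, so the ClosureCleaner check passes. Below is a minimal, self-contained sketch of the two situations; the names (CaptureDemo, localSc) are illustrative and not part of the original fluxdriver code.

import org.apache.spark.{SparkConf, SparkContext}

object CaptureDemo {
  // Held on the top-level object, as in the fix above.
  var sc: SparkContext = null

  def main(args: Array[String]): Unit = {
    sc = new SparkContext(
      new SparkConf().setAppName("capture-demo").setMaster("local[2]"))

    // Reproduces the original error: the lambda captures the *local*
    // reference, so Spark has to serialize a SparkContext (the sc$1 field
    // in the stack trace) and throws "Task not serializable".
    val localSc = sc
    try {
      sc.parallelize(1 to 4).map(i => i + localSc.defaultParallelism).collect()
    } catch {
      case e: org.apache.spark.SparkException => println(e.getMessage)
    }

    // With no non-serializable local captured, the closure serializes fine.
    println(sc.parallelize(1 to 4).map(_ * 2).collect().mkString(","))

    sc.stop()
  }
}

The demo runs end to end in local mode; the point is simply that nothing non-serializable ends up inside the serialized task closure.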
