
Spark Task not serializable with lag Window function

I noticed that after using a Window function over a DataFrame, if I call map() with a function, Spark returns a "Task not serializable" exception. This is my code:

val hc:org.apache.spark.sql.hive.HiveContext =
    new org.apache.spark.sql.hive.HiveContext(sc)

import hc.implicits._
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

def f() : String = "test"
case class P(name: String, surname: String)
val lag_result: org.apache.spark.sql.Column = 
    lag($"name",1).over(Window.partitionBy($"surname"))
val lista: List[P] = List(P("N1","S1"), P("N2","S2"), P("N2","S2"))
val df: org.apache.spark.sql.DataFrame = 
    hc.createDataFrame(sc.parallelize(lista))

df.withColumn("lag_result", lag_result).map(x => f)

// This works
// df.withColumn("lag_result", lag_result).map{ case x =>
//     def f():String = "test";f}.collect

Here is the stack trace:

org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
    at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:324)
    at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:323)
    ...
Caused by: java.io.NotSerializableException: org.apache.spark.sql.Column
Serialization stack:
    - object not serializable (class: org.apache.spark.sql.Column, value: 'lag(name,1,null) windowspecdefinition(surname,UnspecifiedFrame))
    - field (class: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC, name: lag_result, type: class org.apache.spark.sql.Column)
    ... and so on

lag returns a non-serializable o.a.s.sql.Column. The same applies to WindowSpec. In interactive mode these objects can be included as part of the closure for map:

scala> import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.expressions.Window

scala> val df = Seq(("foo", 1), ("bar", 2)).toDF("x", "y")
df: org.apache.spark.sql.DataFrame = [x: string, y: int]

scala> val w = Window.partitionBy("x").orderBy("y")
w: org.apache.spark.sql.expressions.WindowSpec = org.apache.spark.sql.expressions.WindowSpec@307a0097

scala> val lag_y = lag(col("y"), 1).over(w)
lag_y: org.apache.spark.sql.Column = 'lag(y,1,null) windowspecdefinition(x,y ASC,UnspecifiedFrame)

scala> def f(x: Any) = x.toString
f: (x: Any)String

scala> df.select(lag_y).map(f _).first
org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
...
Caused by: java.io.NotSerializableException: org.apache.spark.sql.expressions.WindowSpec
Serialization stack:
    - object not serializable (class: org.apache.spark.sql.expressions.WindowSpec, value: org.apache.spark.sql.expressions.WindowSpec@307a0097)

A simple solution is to mark both as transient:

scala> @transient val w = Window.partitionBy("x").orderBy("y")
w: org.apache.spark.sql.expressions.WindowSpec = org.apache.spark.sql.expressions.WindowSpec@7dda1470

scala> @transient val lag_y = lag(col("y"), 1).over(w)
lag_y: org.apache.spark.sql.Column = 'lag(y,1,null) windowspecdefinition(x,y ASC,UnspecifiedFrame)

scala> df.select(lag_y).map(f _).first
res1: String = [null]     
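
Applied back to the snippet from the question, the same workaround would look roughly like this (a sketch, assuming the same spark-shell session where hc, its implicits, f and df are already defined):

@transient val lag_result: org.apache.spark.sql.Column =
    lag($"name", 1).over(Window.partitionBy($"surname"))

// The transient Column is no longer pulled into the serialized closure of map().
df.withColumn("lag_result", lag_result).map(x => f()).collect()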
