
Spark: java.lang.UnsupportedOperationException: No Encoder found for java.time.LocalDate

I'm writing a Spark application with version 2.1.1. The following code got the error below when calling a method with a LocalDate parameter:

Exception in thread "main" java.lang.UnsupportedOperationException: No Encoder found for java.time.LocalDate
- field (class: "java.time.LocalDate", name: "_2")
- root class: "scala.Tuple2"
        at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:602)
        at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$9.apply(ScalaReflection.scala:596)
        at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$9.apply(ScalaReflection.scala:587)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        at scala.collection.immutable.List.flatMap(List.scala:344)
        at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:587)
....

import java.time.LocalDate
import org.apache.spark.{SparkConf, SparkContext}

val date: LocalDate = ....
val conf = new SparkConf()
val sc = new SparkContext(conf.setAppName("Test").setMaster("local[*]"))
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

val itemListJob = new ItemList(sqlContext, jdbcSqlConn)
import sqlContext.implicits._
val processed = itemListJob.run(rc, priority).select("id").map(d => {
  runJob.run(d, date) // returns a tuple whose second field is a LocalDate, for which no implicit Encoder exists
})

class ItemList(sqlContext: org.apache.spark.sql.SQLContext, jdbcSqlConn: String) {
  def run(date: LocalDate) = {
    import sqlContext.implicits._
    sqlContext.read.format("jdbc").options(Map(
      "driver" -> "com.microsoft.sqlserver.jdbc.SQLServerDriver",
      "url" -> jdbcSqlConn,
      "dbtable" -> s"dbo.GetList('$date')"
    )).load()
    .select("id")
    .as[Int] // Int is covered by the encoders in sqlContext.implicits._
  }
}

Update: I changed the return type of runJob.run() to the tuple (Int, java.sql.Date), and changed the code in the lambda of .map(...) to:

val processed = itemListJob.run(rc, priority).select("id").map(d => {
  val (a, b) = runJob.run(d, date)
  $"$a, $b" // the $ interpolator builds a Column, not a value, so there is still no Encoder
})

Now the error becomes:

[error] C:\....\scala\main.scala:40: Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases. 
[error]     val processed = itemListJob.run(rc, priority).map(d => { 
[error]                                                      ^ 
[error] one error found 
[error] (compile:compileIncremental) Compilation failed

For a custom Dataset type, you can use the Kryo serde framework, as long as your data is actually serializable (i.e. implements Serializable). Here is an example of using Kryo: Spark No Encoder found for java.io.Serializable in Map[String, java.io.Serializable]

Kryo is always recommended since it is much faster and also compatible with the Java serde framework. You can certainly choose the Java native serde (ObjectWriter/ObjectReader), but it is much slower.
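
As a minimal sketch of the Kryo route, keeping the question's names (itemListJob, runJob, rc, priority, date) and assuming runJob.run(d, date) returns an (Int, LocalDate) tuple: build an explicit encoder for the tuple and pass it to map directly, which sidesteps any implicit clash with the encoders imported from sqlContext.implicits._:

import java.time.LocalDate
import org.apache.spark.sql.{Encoder, Encoders}

// Kryo-based encoder for the LocalDate field, composed into a tuple encoder.
// Encoders.kryo stores the value as a single binary column.
val tupleEncoder: Encoder[(Int, LocalDate)] =
  Encoders.tuple(Encoders.scalaInt, Encoders.kryo[LocalDate])

val processed = itemListJob.run(rc, priority)
  .map(d => runJob.run(d, date))(tupleEncoder) // encoder passed explicitly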

As commented above, SparkSQL comes with lots of useful encoders under sqlContext.implicits._, but they don't cover everything, so you may have to plug in your own encoder.
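
For this particular case you can also stay entirely within the built-in encoders (a sketch of my own, not from the linked example). The second error above comes from the lambda returning $"$a, $b": the $ interpolator produces a Column, not a value. Returning the (Int, java.sql.Date) tuple itself is enough, since java.sql.Date is covered by sqlContext.implicits._:

import java.sql.Date

import sqlContext.implicits._ // covers Int, java.sql.Date, and tuples of them

val processed = itemListJob.run(rc, priority).map { d =>
  val (a, b) = runJob.run(d, date) // (Int, java.sql.Date) after the update
  (a, b)                           // return the tuple, not a Column
}
// If runJob.run still returned a java.time.LocalDate, Date.valueOf(localDate)
// would convert it to the supported java.sql.Date type.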

As I said, your custom data must be serializable, and according to https://docs.oracle.com/javase/8/docs/api/java/time/LocalDate.html it implements the Serializable interface, so you are definitely fine here.
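
A quick REPL check of that property (trivial, but it is exactly the precondition named above):

scala> classOf[java.io.Serializable].isAssignableFrom(classOf[java.time.LocalDate])
res0: Boolean = true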
