
How to create a Dataset of Maps?

I am using Spark 2.2 and am running into trouble trying to call spark.createDataset on a Seq of Map.

Code and output from my Spark shell session follow:

// createDataSet on Seq[T] where T = Int works
scala> spark.createDataset(Seq(1, 2, 3)).collect
res0: Array[Int] = Array(1, 2, 3)

scala> spark.createDataset(Seq(Map(1 -> 2))).collect
<console>:24: error: Unable to find encoder for type stored in a Dataset.  
Primitive types (Int, String, etc) and Product types (case classes) are 
supported by importing spark.implicits._
Support for serializing other types will be added in future releases.
       spark.createDataset(Seq(Map(1 -> 2))).collect
                          ^

// createDataSet on a custom case class containing Map works
scala> case class MapHolder(m: Map[Int, Int])
defined class MapHolder

scala> spark.createDataset(Seq(MapHolder(Map(1 -> 2)))).collect
res2: Array[MapHolder] = Array(MapHolder(Map(1 -> 2)))
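A side note on the case-class workaround above: once the maps are wrapped in a Product type, the map column can also be handled at the untyped DataFrame level, where rows need no Encoder at all. A minimal sketch, assuming a standalone script rather than spark-shell (the local SparkSession builder and app name are only here to keep it self-contained):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("map-holder-sketch")
  .getOrCreate()
import spark.implicits._

case class MapHolder(m: Map[Int, Int])

// The case class gives Spark a Product type it can derive an Encoder for.
val ds = spark.createDataset(Seq(MapHolder(Map(1 -> 2))))

// Projecting the column yields an untyped DataFrame, which sidesteps
// the missing Encoder[Map[Int, Int]] on Spark 2.2.
val df = ds.select("m")
```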

I have tried import spark.implicits._, yet I am fairly certain the Spark shell session imports it implicitly anyway.

Is this a case that is not covered by the current encoders?

It is not covered in 2.2, but it can easily be addressed. You can add the required Encoder using ExpressionEncoder, either explicitly:

import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder  
import org.apache.spark.sql.Encoder

spark
  .createDataset(Seq(Map(1 -> 2)))(ExpressionEncoder(): Encoder[Map[Int, Int]])

or implicitly:

implicit def mapIntIntEncoder: Encoder[Map[Int, Int]] = ExpressionEncoder()
spark.createDataset(Seq(Map(1 -> 2)))
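Another option in 2.2, if you would rather not construct an ExpressionEncoder by hand, is a Kryo-based encoder. A sketch, assuming a standalone script (the SparkSession builder is only for self-containment); note that Encoders.kryo serializes the map into a single opaque binary column, so it round-trips through collect but is not queryable as a native MapType column:

```scala
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("kryo-map-encoder-sketch")
  .getOrCreate()

// Encoders.kryo satisfies the implicit Encoder requirement for any class,
// at the price of storing the value as serialized bytes.
implicit val kryoMapEncoder: Encoder[Map[Int, Int]] =
  Encoders.kryo[Map[Int, Int]]

val out = spark.createDataset(Seq(Map(1 -> 2))).collect()
```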

FYI, the expression above only works out of the box in Spark 2.3 (since this commit, if I remember correctly).

scala> spark.version
res0: String = 2.3.0

scala> spark.createDataset(Seq(Map(1 -> 2))).collect
res1: Array[scala.collection.immutable.Map[Int,Int]] = Array(Map(1 -> 2))

I think that is because newMapEncoder is now part of spark.implicits.

scala> :implicits
...
  implicit def newMapEncoder[T <: scala.collection.Map[_, _]](implicit evidence$3: reflect.runtime.universe.TypeTag[T]): org.apache.spark.sql.Encoder[T]

You can "disable" the implicit with the following trick and then try the expression above (which will lead to an error):

trait ThatWasABadIdea
implicit def newMapEncoder(ack: ThatWasABadIdea) = ack

scala> spark.createDataset(Seq(Map(1 -> 2))).collect
<console>:26: error: Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases.
       spark.createDataset(Seq(Map(1 -> 2))).collect
                          ^


Disclaimer: the technical posts on this site follow the CC BY-SA 4.0 license; if you need to repost, please cite this site's URL or the original address. For any questions contact: yoyou2525@163.com.

 