
scala.MatchError: [abc,cde,null,3] (of class org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema) in Spark JSON with missing fields

I have a JSON input file:

{"a": "abc", "b": "bcd", "d": 3},
{"a": "ezx", "b": "hdg", "c": "ssa"},
...

Some fields are simply missing from each object, rather than being set to null.

In Apache Spark with Scala:

import org.apache.spark.sql.types.{DoubleType, StringType, StructField, StructType}
import org.apache.spark.sql.{DataFrame, Row}
import SparkCommons.sparkSession.implicits._

private val inputJsonPath: String = "resources/input/input.json"

private val schema = StructType(Array(
  StructField("a", StringType, nullable = false),
  StructField("b", StringType, nullable = false),
  StructField("c", StringType, nullable = true),
  StructField("d", DoubleType, nullable = true)
))

private val inputDF: DataFrame = SparkCommons.sparkSession
  .read
  .schema(schema)
  .json(inputJsonPath)
  .cache()

inputDF.printSchema()

val dataRdd = inputDF.rdd
  .map {
    case Row(a: String, b: String, c: String, d: Double) =>
      MyCaseClass(a, b, c, d)
  }

val dataMap = dataRdd.collectAsMap()

The MyCaseClass code:

case class MyCaseClass(
              a: String,
              b: String,
              c: String = null,
              d: Double = Predef.Double2double(null)
)

I get the following schema as output:

root
 |-- a: string (nullable = true)
 |-- b: string (nullable = true)
 |-- c: string (nullable = true)
 |-- d: double (nullable = true)

The program compiles, but at runtime, as soon as Spark executes the job, I get the following exception:

[error] - org.apache.spark.executor.Executor - Exception in task 3.0 in stage 4.0 (TID 21)
scala.MatchError: [abc,bcd,null,3] (of class org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema)
at com.matteoguarnerio.spark.SparkOperations$$anonfun$1.apply(SparkOperations.scala:62) ~[classes/:na]
at com.matteoguarnerio.spark.SparkOperations$$anonfun$1.apply(SparkOperations.scala:62) ~[classes/:na]
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410) ~[scala-library-2.11.11.jar:na]
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410) ~[scala-library-2.11.11.jar:na]
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410) ~[scala-library-2.11.11.jar:na]
at org.apache.spark.util.random.SamplingUtils$.reservoirSampleAndCount(SamplingUtils.scala:42) ~[spark-core_2.11-2.0.2.jar:2.0.2]
at org.apache.spark.RangePartitioner$$anonfun$9.apply(Partitioner.scala:261) ~[spark-core_2.11-2.0.2.jar:2.0.2]
at org.apache.spark.RangePartitioner$$anonfun$9.apply(Partitioner.scala:259) ~[spark-core_2.11-2.0.2.jar:2.0.2]
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:820) ~[spark-core_2.11-2.0.2.jar:2.0.2]
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:820) ~[spark-core_2.11-2.0.2.jar:2.0.2]
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.11-2.0.2.jar:2.0.2]
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) ~[spark-core_2.11-2.0.2.jar:2.0.2]
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) ~[spark-core_2.11-2.0.2.jar:2.0.2]
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70) ~[spark-core_2.11-2.0.2.jar:2.0.2]
at org.apache.spark.scheduler.Task.run(Task.scala:86) ~[spark-core_2.11-2.0.2.jar:2.0.2]
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) ~[spark-core_2.11-2.0.2.jar:2.0.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]

Spark version: 2.0.2

Scala version: 2.11.11

  • How can I solve this exception and still iterate, matching the rows and building the objects, when some fields are null or missing?
  • Why is everything nullable in the schema, even though I explicitly declared some fields non-nullable and others nullable?
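The first question comes down to how Scala type patterns treat null. A minimal, Spark-free sketch (the `describe` helper is hypothetical, only there to illustrate the mechanism):

```scala
// A Scala type pattern never matches null: `case s: String`
// compiles to an isInstanceOf check, and null.isInstanceOf[String] is false.
def describe(v: Any): String = v match {
  case s: String => s"matched String: $s"
  case _         => "no type pattern matched"
}

// This is exactly why `case Row(a: String, b: String, c: String, d: Double)`
// falls through when c is null -- and since that map has no other case,
// Spark throws scala.MatchError on that row.
println(describe("abc")) // matched String: abc
println(describe(null))  // no type pattern matched
```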

UPDATE

I used a workaround on dataRdd to avoid the problem:

private val dataRdd = inputDF.rdd
  .map {
    case r: GenericRowWithSchema =>
      val a = r.getAs("a").asInstanceOf[String]
      val b = r.getAs("b").asInstanceOf[String]

      var c: Option[String] = None
      var d: Option[Double] = None

      try {
        c = if (r.isNullAt(r.fieldIndex("c"))) None else Some(r.getAs("c").asInstanceOf[String])
        d = if (r.isNullAt(r.fieldIndex("d"))) None else Some(r.getAs("d").asInstanceOf[Double])
      } catch {
        case _: Throwable => // leave the fields as None
      }

      MyCaseClass(a, b, c, d)
  }
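The null checks in that workaround can be collapsed: `Option(...)` already maps null to None. A Spark-free sketch of the same idea, where the Map is only a stand-in for `Row.getAs` (an assumption, not Spark's Row API):

```scala
// Stand-in for getAs on a Row: a map whose values may be null.
val row: Map[String, Any] = Map("a" -> "abc", "b" -> "bcd", "c" -> null, "d" -> 3.0)

// Option(x) yields Some(x) for non-null x and None for null,
// which replaces the explicit isNullAt / try-catch bookkeeping.
val c: Option[String] = Option(row("c")).map(_.asInstanceOf[String])
val d: Option[Double] = Option(row("d")).map(_.asInstanceOf[Double])

println(c) // None
println(d) // Some(3.0)
```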

and changed MyCaseClass this way:

case class MyCaseClass(
              a: String,
              b: String,
              c: Option[String],
              d: Option[Double]
)
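With Option fields in the case class, a hedged alternative is to skip the RDD round-trip entirely and let Spark's encoder build the objects; in Spark 2.x the encoder maps a null column to None for an Option field. A sketch, assuming the same inputDF, schema, and implicits import as above:

```scala
// Sketch: derive a Dataset[MyCaseClass] directly from the DataFrame,
// instead of pattern matching on Row in an RDD map.
import SparkCommons.sparkSession.implicits._

val dataDs = inputDF.as[MyCaseClass] // encoder turns null columns into None
val data: Array[MyCaseClass] = dataDs.collect()
```

This avoids the manual getAs / isNullAt bookkeeping altogether, at the cost of requiring the case class field names and types to line up with the schema.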

The problem is in input.json. Spark's JSON reader expects one JSON object per line (JSON Lines), with no separating commas, so it should look like this:

{"a": "abc", "b": "bcd", "d": 3}
{"a": "ezx", "b": "hdg", "c": "ssa"}
...

With this input.json your code works fine.
