
spark dataset from json with inner array

I'm trying to read JSON into a Dataset (Spark 2.1.1). Unfortunately it doesn't work and fails with:

Caused by: java.lang.NullPointerException: Null value appeared in non-nullable field:
- field (class: "scala.Long", name: "age")

Any ideas what I'm doing wrong?

import org.apache.spark.sql.SparkSession

case class Owner(id: String, pets: Seq[Pet])
case class Pet(name: String, age: Long)

val sampleJson =
  """{"id":"kotek", "pets":[{"name":"miauczek", "age":18}, {"name":"miauczek2", "age":9}]}"""

val session = SparkSession.builder().master("local").getOrCreate()
import session.implicits._

val rdd = session.sparkContext.parallelize(Seq(sampleJson))
val ds = session.read.json(rdd).as[Owner].collect()

Usually, if a field can be missing, use either an Option:

case class Owner(id: String, pets: Seq[Pet])
case class Pet(name: String, age: Option[Long])
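
As a minimal sketch of what the Option variant buys you (assuming the same SparkSession setup as in the question, and a record where one pet has no "age" key; verified behaviour is for name-based field resolution, i.e. Spark 2.2+):

import org.apache.spark.sql.SparkSession

case class Owner(id: String, pets: Seq[Pet])
case class Pet(name: String, age: Option[Long])

val session = SparkSession.builder().master("local").getOrCreate()
import session.implicits._

// The second pet has no "age" field; with Option[Long] it should decode to None
// instead of triggering the non-nullable-field NullPointerException.
val jsonWithMissingAge =
  """{"id":"kotek", "pets":[{"name":"miauczek", "age":18}, {"name":"miauczek2"}]}"""

val rdd = session.sparkContext.parallelize(Seq(jsonWithMissingAge))
val owners = session.read.json(rdd).as[Owner].collect()
// Expected: owners.head.pets == Seq(Pet("miauczek", Some(18)), Pet("miauczek2", None))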

or a nullable type:

case class Owner(id: String, pets: Seq[Pet])
case class Pet(name: String, age: java.lang.Long)

But this one indeed looks like a bug. I tested it in Spark 2.2, where it has been fixed. A quick workaround is to keep the fields sorted by name:

case class Owner(id: String, pets: Seq[Pet])
case class Pet(age: java.lang.Long, name: String)
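
Putting it together, here is a sketch of the workaround applied to the original snippet; declaring Pet's fields in alphabetical order matches the alphabetically sorted schema that read.json infers, which is what the workaround relies on:

import org.apache.spark.sql.SparkSession

case class Owner(id: String, pets: Seq[Pet])
// Fields declared in alphabetical order (age, name), matching the inferred JSON schema.
case class Pet(age: java.lang.Long, name: String)

val session = SparkSession.builder().master("local").getOrCreate()
import session.implicits._

val sampleJson =
  """{"id":"kotek", "pets":[{"name":"miauczek", "age":18}, {"name":"miauczek2", "age":9}]}"""

val rdd = session.sparkContext.parallelize(Seq(sampleJson))
// With the reordered Pet fields this should collect without the NullPointerException on Spark 2.1.1.
val owners = session.read.json(rdd).as[Owner].collect()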
