I am trying to parse a simple JSON file containing a list like [[1,"a"],[2,"b"]],
using Spark/Scala or Scala with Jackson.
When I tried Spark, it gave me the error below:
//simple line of code
spark.read.json(filePath).show
//error
Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the
referenced columns only include the internal corrupt record column
(named _corrupt_record by default). For example:
spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()
and spark.read.schema(schema).json(file).select("_corrupt_record").show().
Instead, you can cache or save the parsed results and then send the same query.
For example, val df = spark.read.schema(schema).json(file).cache() and then
df.filter($"_corrupt_record".isNotNull).count().;
at org.apache.spark.sql.execution.datasources.json.JsonFileFormat.buildReader(JsonFileFormat.scala:118)
at org.apache.spark.sql.execution.datasources.FileFormat$class.buildReaderWithPartitionValues(FileFormat.scala:129)
at org.apache.spark.sql.execution.datasources.TextBasedFileFormat.buildReaderWithPartitionValues(FileFormat.scala:160)
at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD$lzycompute(DataSourceScanExec.scala:295)
at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD(DataSourceScanExec.scala:293)
at org.apache.spark.sql.execution.FileSourceScanExec.inputRDDs(DataSourceScanExec.scala:313)
at org.apache.spark.sql.execution.BaseLimitExec$class.inputRDDs(limit.scala:62)
at org.apache.spark.sql.execution.LocalLimitExec.inputRDDs(limit.scala:97)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:605)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:337)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3272)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2484)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2484)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2484)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2698)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
at org.apache.spark.sql.Dataset.show(Dataset.scala:723)
at org.apache.spark.sql.Dataset.show(Dataset.scala:682)
at org.apache.spark.sql.Dataset.show(Dataset.scala:691)
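The error is a bit misleading here: Spark's JSON source expects each record to be a JSON object (or an array of objects), so a top-level array of arrays of scalars cannot be mapped to any column. Every line lands in the internal _corrupt_record column, and since that ends up being the only inferred column, even show() trips the check quoted above. A quick way to confirm what Spark actually sees is to read the file as plain text (a minimal sketch, reusing filePath from above):
// Sketch: inspect the raw lines Spark is trying to parse as JSON records.
val raw = spark.read.textFile(filePath)    // Dataset[String], one element per line
raw.show(false)                            // should print [[1,"a"],[2,"b"]] verbatim
spark.read.json(filePath).printSchema()    // likely shows only: _corrupt_record: string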
I also tried to use Jackson (via json4s) and parse it into a case class,
but it gave me an empty list:
import org.json4s._
import org.json4s.jackson.JsonMethods._
import scala.util.Try

case class AppData(apps: List[(Int, String)])

def extractJsonFromStr[T](jsonString: String)(implicit m: Manifest[T]): Try[T] = {
  implicit val formats: DefaultFormats.type = DefaultFormats
  Try {
    parse(jsonString).extract[T]
  }
}

extractJsonFromStr[AppData]("""[[1,"a"],[2,"b"]]""")
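The empty list comes from a shape mismatch rather than a parsing failure: the JSON is a top-level array, not an object with an apps field, so json4s finds nothing to bind AppData.apps to and the list comes back empty. A minimal sketch of a json4s extraction that matches the actual shape (extractPairs is a hypothetical helper using the same json4s API as above):
import org.json4s._
import org.json4s.jackson.JsonMethods._
import scala.util.Try

// Hypothetical helper: extract the top-level array as Seq[Seq[String]]
// (json4s coerces the numeric elements to strings), then map each pair.
def extractPairs(jsonString: String): Try[List[(Int, String)]] = {
  implicit val formats: DefaultFormats.type = DefaultFormats
  Try {
    parse(jsonString)
      .extract[Seq[Seq[String]]]
      .map(row => (row.head.toInt, row(1)))
      .toList
  }
}

extractPairs("""[[1,"a"],[2,"b"]]""") // expected: Success(List((1,a), (2,b)))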
I updated the solution to get a Dataset[T], where T is my case class. We can parse this using Jackson (via json4s), GSON, and possibly other parsers (I've only tested these two) by instructing the parser to read a primitive type:
scala> import org.apache.spark.sql.Dataset
import org.apache.spark.sql.Dataset
scala> import org.json4s.jackson.JsonMethods._
import org.json4s.jackson.JsonMethods._
scala> import org.json4s._
import org.json4s._
scala>
scala> val in: Dataset[String] = Seq("""[[1, "a"], [2, "b"]]""").toDS
in: org.apache.spark.sql.Dataset[String] = [value: string]
scala> case class InputData(id: Int, name: String)
defined class InputData
scala> val parsed: Dataset[InputData] = in.map { x =>
     |   implicit val formats = org.json4s.DefaultFormats
     |   parse(x).extract[Seq[Seq[String]]] // Not a case class!
     | }.flatMap(x => x).map(x => (x.head.toInt, x.tail.head)).toDF("id", "name").as[InputData]
parsed: org.apache.spark.sql.Dataset[InputData] = [id: int, name: string]
scala> parsed.show(false)
+---+----+
|id |name|
+---+----+
|1  |a   |
|2  |b   |
+---+----+
scala> parsed.map(_.id).show
+-----+
|value|
+-----+
|    1|
|    2|
+-----+
Note that the extraction target for the array contents is String: the JSON array mixes Int and String elements, and Spark cannot represent a heterogeneous array as a single column type, so further manipulation (the .toInt above) is required to recover the Int values from your data.
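To apply the same approach to the file from the original question instead of an in-memory Seq, a sketch along these lines should work (assuming the JSON array sits on a single line at filePath, and that spark and the InputData case class from above are in scope):
import org.apache.spark.sql.Dataset
import org.json4s.jackson.JsonMethods._
import spark.implicits._

// Read the raw text (one JSON array per line assumed), then parse each line
// with json4s exactly as above and flatten into typed rows.
val raw: Dataset[String] = spark.read.textFile(filePath)
val fromFile: Dataset[InputData] = raw
  .map { line =>
    implicit val formats = org.json4s.DefaultFormats
    parse(line).extract[Seq[Seq[String]]]
  }
  .flatMap(identity)
  .map(row => (row.head.toInt, row(1)))
  .toDF("id", "name")
  .as[InputData]

fromFile.show(false)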