
Spark/Scala: parse a JSON file that contains only primitive values

I am trying to parse a simple JSON file that contains the list [[1,"a"],[2,"b"]], either with Spark/Scala or with Scala and Jackson.

When I try Spark, it gives me the following error:

//simple line of code
spark.read.json(filePath).show
//error
 Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the
referenced columns only include the internal corrupt record column
(named _corrupt_record by default). For example:
spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()
and spark.read.schema(schema).json(file).select("_corrupt_record").show().
Instead, you can cache or save the parsed results and then send the same query.
For example, val df = spark.read.schema(schema).json(file).cache() and then
df.filter($"_corrupt_record".isNotNull).count().;
    at org.apache.spark.sql.execution.datasources.json.JsonFileFormat.buildReader(JsonFileFormat.scala:118)
    at org.apache.spark.sql.execution.datasources.FileFormat$class.buildReaderWithPartitionValues(FileFormat.scala:129)
    at org.apache.spark.sql.execution.datasources.TextBasedFileFormat.buildReaderWithPartitionValues(FileFormat.scala:160)
    at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD$lzycompute(DataSourceScanExec.scala:295)
    at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD(DataSourceScanExec.scala:293)
    at org.apache.spark.sql.execution.FileSourceScanExec.inputRDDs(DataSourceScanExec.scala:313)
    at org.apache.spark.sql.execution.BaseLimitExec$class.inputRDDs(limit.scala:62)
    at org.apache.spark.sql.execution.LocalLimitExec.inputRDDs(limit.scala:97)
    at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:605)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:337)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3272)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2484)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2484)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2484)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2698)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:723)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:682)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:691)
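Why this happens: the file's only top-level JSON value is a bare array of mixed-type primitives rather than a JSON object, so Spark's JSON source can infer nothing except the internal _corrupt_record column, which is exactly the situation the Spark 2.3 check rejects. A minimal sketch of a workaround, assuming an active SparkSession named spark and the same filePath as above, is to read the file as plain text and do the parsing yourself (the answer below does that parsing step with json4s on an in-memory string):

import org.apache.spark.sql.Dataset

// Sketch only: assumes `spark` is an active SparkSession and `filePath`
// is the same path used in the snippet above.
// Read the file as plain text (one JSON document per line) instead of
// letting the JSON source try, and fail, to infer a schema from it.
val raw: Dataset[String] = spark.read.textFile(filePath)

// `raw` can then be parsed manually, e.g. with json4s as in the answer below.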

I also tried using Jackson (via json4s) and parsing it into a case class, but it gives me an empty list:

import org.json4s._
import org.json4s.jackson.JsonMethods._
import scala.util.Try

case class AppData(apps: List[(Int, String)])

def extractJsonFromStr[T](jsonString: String)(implicit m: Manifest[T]): Try[T] = {
  implicit val formats: DefaultFormats.type = DefaultFormats
  Try {
    parse(jsonString).extract[T]
  }
}

extractJsonFromStr[AppData]("""[[1,"a"],[2,"b"]]""")
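One likely reason for the empty list: the input is a bare JSON array, while AppData expects an object with an apps field, so extract[AppData] has nothing to bind to. A sketch of one way around it, using the same json4s parse/extract as above but first extracting a shape that mirrors the raw JSON (extractAppData is a hypothetical helper name, not from the question):

// A sketch, assuming the same json4s-jackson dependency used above.
import org.json4s._
import org.json4s.jackson.JsonMethods._
import scala.util.Try

case class AppData(apps: List[(Int, String)])

def extractAppData(jsonString: String): Try[AppData] = {
  implicit val formats: DefaultFormats.type = DefaultFormats
  Try {
    // The input is a bare JSON array, so extract a shape that mirrors it
    // (Seq[Seq[String]]) and build the case class from that afterwards.
    val rows = parse(jsonString).extract[Seq[Seq[String]]]
    AppData(rows.map(r => (r.head.toInt, r.tail.head)).toList)
  }
}

// extractAppData("""[[1,"a"],[2,"b"]]""")  // Success(AppData(List((1,a), (2,b))))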

I updated the solution to produce a Dataset[T], where T is my case class. We can parse it with Jackson (via json4s), GSON, and probably other parsers (I have only tested those two) by telling the parser to read the primitive types:

scala> import org.json4s.jackson.JsonMethods._
import org.json4s.jackson.JsonMethods._

scala> import org.json4s._
import org.json4s._

scala> 

scala> val in: Dataset[String] = Seq("""[[1, "a"], [2, "b"]]""").toDS
in: org.apache.spark.sql.Dataset[String] = [value: string]

scala> case class InputData(id:Int,name:String)

scala> val parsed : Dataset[InputData] = in.map{x => 
     | implicit val formats = org.json4s.DefaultFormats
     | parse(x).extract[Seq[Seq[String]]] // Not a case class!
     | }.flatMap(x => x).map(x => (x.head.toInt,x.tail.head)).toDF("id", "name").as[InputData]

scala> parsed.show(false)
+-----------+-----------------+
|         id|             name|
+-----------+-----------------+
|          1|                a|
|          2|                b|
+-----------+-----------------+
scala> parsed.map(_.id).show(false)
+-----------+
|      value|
+-----------+
|          1|
|          2|
+-----------+

Note that the data type of the array contents is String, because Spark cannot conceptualize JSON arrays of heterogeneous types, so further manipulation may be needed to extract the Int values from the data.
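To tie the answer back to the original file, the hard-coded Seq can be swapped for spark.read.textFile, leaving the rest of the pipeline unchanged; the .toInt call is what recovers the Int ids mentioned above. A sketch, assuming filePath is a placeholder for the question's JSON file and that this runs outside the spark-shell (hence the explicit SparkSession and implicits import):

// Sketch only: `filePath` is a placeholder, and the explicit SparkSession
// and implicits import are only needed outside spark-shell.
import org.apache.spark.sql.{Dataset, SparkSession}
import org.json4s._
import org.json4s.jackson.JsonMethods._

val spark: SparkSession = SparkSession.builder().getOrCreate()
import spark.implicits._

case class InputData(id: Int, name: String)

val filePath = "path/to/input.json"                            // placeholder path

val parsed: Dataset[InputData] = spark.read.textFile(filePath) // Dataset[String], one JSON document per line
  .map { line =>
    implicit val formats: DefaultFormats.type = DefaultFormats
    parse(line).extract[Seq[Seq[String]]]                      // mirror the raw JSON shape
  }
  .flatMap(x => x)                                             // one row per inner array
  .map(r => (r.head.toInt, r.tail.head))                       // recover the Int id
  .toDF("id", "name")
  .as[InputData]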
