Parse & flatten JSON object in a text file using Spark & Scala into Dataframe
I have a JSON source data file as shown below, and I need the "expected results" in a completely different format, also shown below. Is there a way to achieve this using Spark and Scala? Thanks for your help.
JSON source data file:
{
  "APP": [
    { "E": 1566799999225, "V": 44.0 },
    { "E": 1566800002758, "V": 61.0 }
  ],
  "ASP": [
    { "E": 1566800009446, "V": 23.399999618530273 }
  ],
  "TT": 0,
  "TVD": [
    { "E": 1566799964040, "V": 50876515 }
  ],
  "VIN": "FU74HZ501740XXXXX"
}
Expected results:
JSON schema:
|-- APP: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- E: long (nullable = true)
| | |-- V: double (nullable = true)
|-- ASP: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- E: long (nullable = true)
| | |-- V: double (nullable = true)
|-- ATO: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- E: long (nullable = true)
| | |-- V: double (nullable = true)
|-- MSG_TYPE: string (nullable = true)
|-- RPM: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- E: long (nullable = true)
| | |-- V: double (nullable = true)
|-- TT: long (nullable = true)
|-- TVD: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- E: long (nullable = true)
| | |-- V: long (nullable = true)
|-- VIN: string (nullable = true)
You can first read the JSON file:
val inputDataFrame: DataFrame = sparkSession
  .read
  .option("multiline", true)
  .json(yourJsonPath)
Then you can create a simple rule to retrieve APP, ASP, ATO, since they are the only fields in the input with an array-of-struct data type:
// Collect the names of all fields whose data type is an array
val snColumn: Array[String] = inputDataFrame.schema.fields
  .collect { case StructField(name, _: ArrayType, _, _) if name.nonEmpty => name }
Then create an empty dataframe and populate it, as follows:
val outputSchema = StructType(
  List(
    StructField("VIN", StringType, true),
    StructField(
      "EVENTS",
      ArrayType(
        StructType(Array(
          StructField("SN", StringType, true),
          StructField("E", LongType, true), // epoch-millis values overflow IntegerType
          StructField("V", DoubleType, true)
        )))),
    StructField("TT", StringType, true)
  )
)
val outputDataFrame = sparkSession.createDataFrame(sparkSession.sparkContext.emptyRDD[Row], outputSchema)
Then you will need to create some UDFs to parse your input and do the proper mapping.
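As a rough illustration of that mapping step, here is a minimal sketch (an assumption on my part, not the original answer's code) that uses Spark 3.x built-in higher-order functions instead of a UDF; `snColumn` is the array of array-typed field names collected above, and the names `eventCols` and `mappedDataFrame` are hypothetical:

```scala
import org.apache.spark.sql.functions._

// Tag each {E, V} struct with its source column name as "SN", casting V to
// double so all arrays share one element type, then concatenate them into a
// single EVENTS array.
val eventCols = snColumn.filter(_ != null).map { sn =>
  transform(col(sn), e =>
    struct(
      lit(sn).as("SN"),
      e.getField("E").as("E"),
      e.getField("V").cast("double").as("V")
    ))
}

val mappedDataFrame = inputDataFrame
  .withColumn("EVENTS", concat(eventCols: _*))
  .select(col("VIN"), col("EVENTS"), col("TT").cast("string").as("TT"))
```

On Spark versions before 3.0, the same idea can be expressed with the SQL `transform(...)` expression via `expr`, or with an actual UDF as the answer suggests.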
Hope this helps.
Here is a solution that parses the JSON into a Spark dataframe that fits your data:
val input = "{\"APP\":[{\"E\":1566799999225,\"V\":44.0},{\"E\":1566800002758,\"V\":61.0}],\"ASP\":[{\"E\":1566800009446,\"V\":23.399999618530273}],\"TT\":0,\"TVD\":[{\"E\":1566799964040,\"V\":50876515}],\"VIN\":\"FU74HZ501740XXXXX\"}"

import sparkSession.implicits._
val outputDataFrame = sparkSession.read.option("multiline", true).option("mode", "PERMISSIVE")
  .json(Seq(input).toDS)
  .withColumn("APP", explode(col("APP")))
  .withColumn("ASP", explode(col("ASP")))
  .withColumn("TVD", explode(col("TVD")))
  .select(
    col("VIN"), col("TT"),
    col("APP").getItem("E").as("APP_E"),
    col("APP").getItem("V").as("APP_V"),
    col("ASP").getItem("E").as("ASP_E"),
    col("ASP").getItem("V").as("ASP_V"),
    col("TVD").getItem("E").as("TVD_E"),
    col("TVD").getItem("V").as("TVD_V")
  )
outputDataFrame.show(truncate = false)
/*
+-----------------+---+-------------+-----+-------------+------------------+-------------+--------+
|VIN              |TT |APP_E        |APP_V|ASP_E        |ASP_V             |TVD_E        |TVD_V   |
+-----------------+---+-------------+-----+-------------+------------------+-------------+--------+
|FU74HZ501740XXXXX|0  |1566799999225|44.0 |1566800009446|23.399999618530273|1566799964040|50876515|
|FU74HZ501740XXXXX|0  |1566800002758|61.0 |1566800009446|23.399999618530273|1566799964040|50876515|
+-----------------+---+-------------+-----+-------------+------------------+-------------+--------+
*/
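One caveat worth noting: `explode` drops any row whose array column is null or empty, and chaining several explodes produces a cross product of the arrays (here 2 × 1 × 1 = 2 rows). If some records might be missing one of the arrays (an assumption about the general case, not shown in the question's sample), a variant using `explode_outer` keeps those rows with nulls:

```scala
import org.apache.spark.sql.functions.{col, explode_outer}
import sparkSession.implicits._

// explode_outer keeps rows whose array column is null or empty,
// emitting null for the exploded value instead of dropping the row.
val flattenedDataFrame = sparkSession.read
  .option("multiline", true)
  .json(Seq(input).toDS)
  .withColumn("APP", explode_outer(col("APP")))
  .withColumn("ASP", explode_outer(col("ASP")))
  .withColumn("TVD", explode_outer(col("TVD")))
```

`explode_outer` is available since Spark 2.2 and can be substituted for `explode` in the select above unchanged.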