Flattening a json file using Spark and Scala

I have a json file like this:

{
  "Item Version" : 1.0,
  "Item Creation Time" : "2019-04-14 14:15:09",
  "Trade Dictionary" : {
    "Country" : "India",
    "TradeNumber" : "1",
    "action" : {
      "Action1" : false,
      "Action2" : true,
      "Action3" : false
    },
    "Value" : "XXXXXXXXXXXXXXX",
    "TradeRegion" : "Global"
  },
  "Prod" : {
    "Type" : "Driver",
    "Product Dic" : { },
    "FX Legs" : [ {
      "Spot Date" : "2019-04-16",
      "Value" : true
    } ]
  },
  "Payments" : {
    "Payment Details" : [ {
      "Payment Date" : "2019-04-11",
      "Payment Type" : "Rej"
    } ]
  }
}

I need a table in the following format:

Version | Item Creation Time  | Country | TradeNumber | Action1 | Action2 | Action3 | Value           | TradeRegion | Type   | Product Dic | Spot Date  | Value | Payment Date | Payment Type
1       | 2019-04-14 14:15:09 | India   | 1           | false   | true    | false   | XXXXXXXXXXXXXXX | Global      | Driver | {}          | 2019-04-16 | true  | 2019-04-11   | Rej

So it should just iterate over each key-value pair, using the key as the column name and the value as the table value.

My current code:

val data2 = data.withColumn("vars", explode(array($"Prod")))
  .withColumn("subs", explode($"vars.FX Legs"))
  .select($"vars.*", $"subs.*")

The problem here is that I have to provide the column names myself. Is there any way to make this more generic?

Use the explode function to flatten dataframes with arrays. Here is an example:

import spark.implicits._  // for .toDS and the $ column syntax

val df = spark.read.json(Seq(json).toDS)  // json: the document above as a String
df.show(10, false)
df.printSchema

df: org.apache.spark.sql.DataFrame = [Item Creation Time: string, Item Version: double ... 3 more fields]
+-------------------+------------+--------------------------------+----------------------------------------+---------------------------------------------------+
|Item Creation Time |Item Version|Payments                        |Prod                                    |Trade Dictionary                                   |
+-------------------+------------+--------------------------------+----------------------------------------+---------------------------------------------------+
|2019-04-14 14:15:09|1.0         |[WrappedArray([2019-04-11,Rej])]|[WrappedArray([2019-04-16,true]),Driver]|[India,1,Global,XXXXXXXXXXXXXXX,[false,true,false]]|
+-------------------+------------+--------------------------------+----------------------------------------+---------------------------------------------------+
root
 |-- Item Creation Time: string (nullable = true)
 |-- Item Version: double (nullable = true)
 |-- Payments: struct (nullable = true)
 |    |-- Payment Details: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- Payment Date: string (nullable = true)
 |    |    |    |-- Payment Type: string (nullable = true)
 |-- Prod: struct (nullable = true)
 |    |-- FX Legs: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- Spot Date: string (nullable = true)
 |    |    |    |-- Value: boolean (nullable = true)
 |    |-- Type: string (nullable = true)
 |-- Trade Dictionary: struct (nullable = true)
 |    |-- Country: string (nullable = true)
 |    |-- TradeNumber: string (nullable = true)
 |    |-- TradeRegion: string (nullable = true)
 |    |-- Value: string (nullable = true)
 |    |-- action: struct (nullable = true)
 |    |    |-- Action1: boolean (nullable = true)
 |    |    |-- Action2: boolean (nullable = true)
 |    |    |-- Action3: boolean (nullable = true)


val flat = df
    .select($"Item Creation Time", $"Item Version", explode($"Payments.Payment Details") as "row")  // one row per array element
    .select($"Item Creation Time", $"Item Version", $"row.*")  // expand the struct fields into columns
flat.show

flat: org.apache.spark.sql.DataFrame = [Item Creation Time: string, Item Version: double ... 2 more fields]
+-------------------+------------+------------+------------+
| Item Creation Time|Item Version|Payment Date|Payment Type|
+-------------------+------------+------------+------------+
|2019-04-14 14:15:09|         1.0|  2019-04-11|         Rej|
+-------------------+------------+------------+------------+
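The same pattern extends to the rest of this particular schema: expand the struct columns with .*, and explode each array. Spark allows only one generator (explode) per select, so the second array is exploded in a separate step. A sketch, reusing the df from above (all column names are taken from the schema printed earlier):

val fullyFlat = df
  .select($"Item Creation Time", $"Item Version", $"Trade Dictionary.*", $"Prod.Type",
          explode($"Prod.FX Legs") as "leg",           // one row per FX leg
          $"Payments.Payment Details" as "payments")
  .withColumn("payment", explode($"payments"))         // one row per payment
  .select($"Item Creation Time", $"Item Version", $"Country", $"TradeNumber", $"action.*",
          $"Value", $"TradeRegion", $"Type", $"leg.*", $"payment.*")
fullyFlat.show(false)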

This solution can be achieved very easily using a library named JFlat - https://github.com/opendevl/Json2Flat .

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

import com.github.opendevl.JFlat;

String str = new String(Files.readAllBytes(Paths.get("/path/to/source/file.json")));

JFlat flatMe = new JFlat(str);

//get the 2D representation of the JSON document
List<Object[]> json2csv = flatMe.json2Sheet().getJsonAsSheet();

//write the 2D representation in csv format
flatMe.write2csv("/path/to/destination/file.csv");
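Note that JFlat is a plain Java library rather than a Spark one, so this snippet runs on a single JVM and str has to hold the whole document in memory; for large inputs the Spark-based approaches are a better fit.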

Since you have both array and struct columns mixed together at multiple levels, it is not that simple to create a general solution. The main problem is that the explode function must be executed on every array column, and it handles only one array at a time.

The simplest solution I can come up with uses recursion to check for any struct or array columns. If there are any, those are flattened and then we check again (after flattening there will be additional columns which can be arrays or structs, hence the complexity). The flattenStruct part is from here.

Code:

import org.apache.spark.sql.{Column, DataFrame}
import org.apache.spark.sql.functions.{col, explode}
import org.apache.spark.sql.types.StructType

def flattenStruct(schema: StructType, prefix: String = null): Array[Column] = {
  schema.fields.flatMap(f => {
    val colName = if (prefix == null) f.name else (prefix + "." + f.name)
    f.dataType match {
      case st: StructType => flattenStruct(st, colName)  // recurse into nested structs
      case _              => Array(col(colName))         // leaf column: select it as-is
    }
  })
}

def flattenSchema(df: DataFrame): DataFrame = {
  val structExists = df.schema.fields.exists(_.dataType.typeName == "struct")
  val arrayCols = df.schema.fields.filter(_.dataType.typeName == "array").map(_.name)

  if (structExists) {
    // expand all top-level structs into their fields, then check again
    flattenSchema(df.select(flattenStruct(df.schema): _*))
  } else if (arrayCols.nonEmpty) {
    // explode each array column into one row per element, then check again
    val newDF = arrayCols.foldLeft(df) {
      (tempDf, colName) => tempDf.withColumn(colName, explode(col(colName)))
    }
    flattenSchema(newDF)
  } else {
    df
  }
}

Running the above method on the input dataframe:

flattenSchema(data)

will give a dataframe with the following schema:

root
 |-- Item Creation Time: string (nullable = true)
 |-- Item Version: double (nullable = true)
 |-- Payment Date: string (nullable = true)
 |-- Payment Type: string (nullable = true)
 |-- Spot Date: string (nullable = true)
 |-- Value: boolean (nullable = true)
 |-- Product Dic: string (nullable = true)
 |-- Type: string (nullable = true)
 |-- Country: string (nullable = true)
 |-- TradeNumber: string (nullable = true)
 |-- TradeRegion: string (nullable = true)
 |-- Value: string (nullable = true)
 |-- Action1: boolean (nullable = true)
 |-- Action2: boolean (nullable = true)
 |-- Action3: boolean (nullable = true)

To keep the prefix of the struct columns in the names of the new columns, you only need to adjust the last case in the flattenStruct function:

case _ => Array(col(colName).as(colName.replace(".", "_")))
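With that change, each leaf column keeps its full path as the alias, with the dots replaced by underscores, so for example Trade Dictionary.action.Action1 is selected as Trade Dictionary_action_Action1 and the two Value columns no longer collide. The adjusted function as a whole:

def flattenStruct(schema: StructType, prefix: String = null): Array[Column] = {
  schema.fields.flatMap(f => {
    val colName = if (prefix == null) f.name else (prefix + "." + f.name)
    f.dataType match {
      case st: StructType => flattenStruct(st, colName)
      // keep the full path as the output name, e.g. "Trade Dictionary_Value"
      case _              => Array(col(colName).as(colName.replace(".", "_")))
    }
  })
}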
