
How to read multiline json with root element in Spark Scala?

This is a sample JSON file. I want to handle this generally: if the JSON has a root element, how can I read the data into a DataFrame and print it to the console?

{
        "Crimes": [
    {
            "ID": 11034701,
            "Case Number": "JA366925",
            "Date": "01/01/2001 11:00:00 AM",
            "Block": "016XX E 86TH PL",
            "IUCR": "1153",
            "Primary Type": "DECEPTIVE PRACTICE",
            "Description": "FINANCIAL IDENTITY THEFT OVER $ 300",
            "Location Description": "RESIDENCE",
            "Arrest": false,
            "Domestic": false,
            "Beat": 412,
            "District": 4,
            "Ward": 8,
            "Community Area": 45,
            "FBI Code": "11",
            "Year": 2001,
            "Updated On": "08/05/2017 03:50:08 PM"
        },

        {
            "ID": 11162428,
            "Case Number": "JA529032",
            "Date": "11/28/2017 09:43:00 PM",
            "Block": "026XX S CALIFORNIA BLVD",
            "IUCR": "5131",
            "Primary Type": "OTHER OFFENSE",
            "Description": "VIOLENT OFFENDER: ANNUAL REGISTRATION",
            "Location Description": "JAIL / LOCK-UP FACILITY",
            "Arrest": true,
            "Domestic": false,
            "Beat": 1034,
            "District": 10,
            "Ward": 12,
            "Community Area": 30,
            "FBI Code": "26",
            "X Coordinate": 1158280,
            "Y Coordinate": 1886310,
            "Year": 2017,
            "Updated On": "02/11/2018 03:54:58 PM",
            "Latitude": 41.843778126,
            "Longitude": -87.694637678,
            "Location": "(41.843778126, -87.694637678)"
        }, {
            "ID": 4080525,
            "Case Number": "HL425503",
            "Date": "06/16/2005 09:40:00 PM",
            "Block": "062XX N KIRKWOOD AVE",
            "IUCR": "1365",
            "Primary Type": "CRIMINAL TRESPASS",
            "Description": "TO RESIDENCE",
            "Location Description": "RESIDENCE",
            "Arrest": false,
            "Domestic": false,
            "Beat": 1711,
            "District": 17,
            "Ward": 39,
            "Community Area": 12,
            "FBI Code": "26",
            "X Coordinate": 1145575,
            "Y Coordinate": 1941395,
            "Year": 2005,
            "Updated On": "02/28/2018 03:56:25 PM",
            "Latitude": 41.99518667,
            "Longitude": -87.739863972,
            "Location": "(41.99518667, -87.739863972)"
        }, {
            "ID": 4080539,
            "Case Number": "HL422433",
            "Date": "06/15/2005 12:55:00 PM",
            "Block": "042XX S ST LAWRENCE AVE",
            "IUCR": "0460",
            "Primary Type": "BATTERY",
            "Description": "SIMPLE",
            "Location Description": "SCHOOL, PUBLIC BUILDING",
            "Arrest": false,
            "Domestic": false,
            "Beat": 213,
            "District": 2,
            "Ward": 4,
            "Community Area": 38,
            "FBI Code": "08B",
            "X Coordinate": 1180964,
            "Y Coordinate": 1877123,
            "Year": 2005,
            "Updated On": "02/28/2018 03:56:25 PM",
            "Latitude": 41.818075262,
            "Longitude": -87.611675899,
            "Location": "(41.818075262, -87.611675899)"
        }
    ]
    }

I am using this code:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SparkSession}

val conf = new SparkConf().setAppName("demo").setMaster("local")
val sc = new SparkContext(conf)
val spark = SparkSession.builder().master("local").appName("ValidationFrameWork").getOrCreate()
val sqlContext = new SQLContext(sc)
sc.hadoopConfiguration.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
sc.hadoopConfiguration.set("parquet.enable.summary-metadata", "false")

// Read the whole file as a single string so the multiline JSON is not split per line
val jsonRDD = sc.wholeTextFiles("D:/FinalScripts/output/Crimes1.json").map(x => x._2)
val namesJson = sqlContext.read.json(jsonRDD)
namesJson.printSchema()
namesJson.registerTempTable("JSONdata")
val data = sqlContext.sql("select * from JSONdata")
data.show()

With this code I get a single column named Crimes, and the entire data comes out in a single row. How can I ignore the root element and get only the original records?

And how can I read nested JSON into a DataFrame and print it to the console?

Try this:

import org.apache.spark.sql.functions._
import spark.implicits._ // needed for the $"..." column syntax

ds.select(explode($"Crimes") as "exploded").select("exploded.*")

where ds is the Dataset<Row> you created from the JSON record.

Please note that if your data is huge, Spark will need to hold the entire data in memory before flattening it.
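
Putting it together, here is a minimal sketch of the full flow, assuming Spark 2.2+ (which adds the multiLine reader option) and the file path from the question; on older versions the wholeTextFiles approach above produces the same single-record DataFrame, and the same explode/select pair works on it.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CrimesJsonReader {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("CrimesJsonReader")
      .getOrCreate()
    import spark.implicits._

    // Read the whole file as one JSON record instead of one record per line
    val ds = spark.read
      .option("multiLine", true)
      .json("D:/FinalScripts/output/Crimes1.json")

    // One row per element of the root "Crimes" array, then flatten the struct fields
    val crimes = ds.select(explode($"Crimes") as "exploded").select("exploded.*")

    crimes.printSchema()
    crimes.show(truncate = false)
  }
}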
