
Load a Spark DataFrame from a pretty-printed text file

I have thousands of JSON files, and the content of each file is similar to the following:

{
"1" : { "key":"key1", "val":"val1" },
"2" : { "key":"key2", "val":"val2" },
"3" : { "key":"key3", "val":"val3" }
.
.
.
}

What is the proper way to load those files into a Spark DataFrame so that the result looks like this:

+------+----------------------------------+
|id    | val                              |
+------+----------------------------------+
|1     | { "key":"key1", "val":"val1" }   |
|2     | { "key":"key2", "val":"val2" }   |
|3     | { "key":"key3", "val":"val3" }   |
+------+----------------------------------+
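One way to get exactly that shape is to parse each file's whole text as a map of id -> struct and explode it. This is only a sketch (it assumes Spark 2.2+ for `from_json` with a `MapType`), and the sample string below is a hypothetical stand-in for one file's contents:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

val spark = SparkSession.builder().master("local").appName("MapParse").getOrCreate()
import spark.implicits._

// Stand-in for one file's raw text (one row per file)
val sample = Seq(
  """{"1":{"key":"key1","val":"val1"},"2":{"key":"key2","val":"val2"},"3":{"key":"key3","val":"val3"}}"""
).toDF("value")

// Top-level keys are arbitrary ids, so model the document as a map
val mapSchema = MapType(StringType,
  StructType(Seq(StructField("key", StringType), StructField("val", StringType))))

// explode on a map column yields one row per (key, value) pair
val result = sample
  .select(from_json($"value", mapSchema).as("m"))
  .select(explode($"m").as(Seq("id", "val")))

result.show(false)
```

Against the real files, the sample DataFrame could be replaced with `spark.read.option("wholetext", "true").text("path/to/*.json")`, which reads each file as a single row.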

I've tried loading the JSON as multiline:

val df = spark.read.option("multiline", "true").json(small_file)

but the result was one row with three columns:

+------------------------+------------------------+----------------+
|1                       |2                       |3               |
+------------------------+------------------------+----------------+
|{ "key":"key1", "val..} |{ "key":"key2", "val..} |{ "key":"key3"..|
+------------------------+------------------------+----------------+

I also tried loading the files into a Map:

 val keys = df.columns
 val values = df.collect().last.toSeq
 val myMap = keys.zip(values).toMap
 
 println(myMap)
 // output
 // Map(1-> [key1, val1], 2-> [key2, val2], 3-> [key3, val3])

But I did not figure out how to create a DataFrame from this Map.
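For completeness, a Map like the one printed above can be turned into a DataFrame via `toSeq` and `toDF`. A minimal sketch, with the caveat that the real map values are Rows collected from the struct columns, so they are stringified here purely for illustration:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local").appName("MapToDF").getOrCreate()
import spark.implicits._

// Hypothetical stand-in for the collected map from the question;
// the real values are Rows, shown here as their string form
val myMap = Map("1" -> "[key1, val1]", "2" -> "[key2, val2]", "3" -> "[key3, val3]")

// A Seq of (key, value) tuples converts directly to a two-column DataFrame
val df = myMap.toSeq.toDF("id", "val")
df.show(false)
```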

That is a multiline JSON file; you can read such a file by specifying the multiline option like this:

val spark = SparkSession
  .builder()
  .appName("JSONReader")
  .master("local")
  .getOrCreate()

val multiline_df = spark.read.option("multiline","true")
  .json("multiline-file.json")

multiline_df.show(false)

The result will be something like this:

[info] +------------+------------+------------+
[info] |1           |2           |3           |
[info] +------------+------------+------------+
[info] |[key1, val1]|[key2, val2]|[key3, val3]|
[info] +------------+------------+------------+
[info] 

I was able to achieve the result using the following steps:

As mentioned in the question, the df after loading will look like:

+------------------------+------------------------+----------------+
|1                       |2                       |3               |
+------------------------+------------------------+----------------+
|{ "key":"key1", "val..} |{ "key":"key2", "val..} |{ "key":"key3"..|
+------------------------+------------------------+----------------+

1- Cast the columns to string:

import org.apache.spark.sql.functions._

val cols = df.columns.map(x => col(x).cast("string").alias(x))

2- Create a comma-separated string of the column names:

val str_cols = df.columns.mkString(",")

3- Create a new df using the cast columns from step 1:

val df1 = df.withColumn("temp",
    explode(arrays_zip(array(cols: _*),
                       split(lit(str_cols), ","))))
  .select("temp.*")
  .toDF("vals", "index")
