
Split JSON string column to multiple columns without schema - PySpark

I have a delta table which has a column containing JSON data. I do not have its schema and need a way to convert the JSON data into columns.

|id | json_data
| 1 | {"name":"abc", "depts":["dep01", "dep02"]}
| 2 | {"name":"xyz", "depts":["dep03"],"sal":100}
| 3 | {"name":"pqr", "depts":["dep02"], "address":{"city":"SF"}}

Expected output

|id | name    | depts              | sal | address_city 
| 1 | "abc"   | ["dep01", "dep02"] | null| null         
| 2 | "xyz"   | ["dep03"]          | 100 | null         
| 3 | "pqr"   | ["dep02"]          | null| "SF"        

Input DataFrame -

df = spark.createDataFrame(data = [(1 , """{"name":"abc", "depts":["dep01", "dep02"]}"""), (2 , """{"name":"xyz", "depts":["dep03"],"sal":100}"""), (3 , """{"name":"pqr", "depts":["dep02"], "address":{"city":"SF"}}""")], schema = ["id", "json_data"])
df.show(truncate=False)

+---+----------------------------------------------------------+
|id |json_data                                                 |
+---+----------------------------------------------------------+
|1  |{"name":"abc", "depts":["dep01", "dep02"]}                |
|2  |{"name":"xyz", "depts":["dep03"],"sal":100}               |
|3  |{"name":"pqr", "depts":["dep02"], "address":{"city":"SF"}}|
+---+----------------------------------------------------------+

Convert the json_data column to MapType as shown below -

from pyspark.sql.functions import *
from pyspark.sql.types import *

df1 = df.withColumn("cols", from_json("json_data", MapType(StringType(), StringType()))).drop("json_data")
df1.show(truncate=False)

+---+-----------------------------------------------------------+
|id |cols                                                       |
+---+-----------------------------------------------------------+
|1  |{name -> abc, depts -> ["dep01","dep02"]}                  |
|2  |{name -> xyz, depts -> ["dep03"], sal -> 100}              |
|3  |{name -> pqr, depts -> ["dep02"], address -> {"city":"SF"}}|
+---+-----------------------------------------------------------+

Now, the column cols needs to be exploded as shown below -

df2 = df1.select("id",explode("cols").alias("col_columns", "col_rows"))
df2.show(truncate=False)

+---+-----------+-----------------+
|id |col_columns|col_rows         |
+---+-----------+-----------------+
|1  |name       |abc              |
|1  |depts      |["dep01","dep02"]|
|2  |name       |xyz              |
|2  |depts      |["dep03"]        |
|2  |sal        |100              |
|3  |name       |pqr              |
|3  |depts      |["dep02"]        |
|3  |address    |{"city":"SF"}    |
+---+-----------+-----------------+

Once you have col_columns and col_rows as separate columns, all that is left to do is pivot col_columns and aggregate it with the first of its corresponding col_rows, as shown below -

df3 = df2.groupBy("id").pivot("col_columns").agg(first("col_rows"))
df3.show(truncate=False)

+---+-------------+-----------------+----+----+
|id |address      |depts            |name|sal |
+---+-------------+-----------------+----+----+
|1  |null         |["dep01","dep02"]|abc |null|
|2  |null         |["dep03"]        |xyz |100 |
|3  |{"city":"SF"}|["dep02"]        |pqr |null|
+---+-------------+-----------------+----+----+

Finally, you need to repeat the above steps once more to convert address into a structured format, as shown below -

df4 = df3.withColumn("address", from_json("address", MapType(StringType(), StringType())))
df4.select("id", "depts", "name", "sal",explode_outer("address").alias("key", "address_city")).drop("key").show(truncate=False)

+---+-----------------+----+----+------------+
|id |depts            |name|sal |address_city|
+---+-----------------+----+----+------------+
|1  |["dep01","dep02"]|abc |null|null        |
|2  |["dep03"]        |xyz |100 |null        |
|3  |["dep02"]        |pqr |null|SF          |
+---+-----------------+----+----+------------+
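
Putting the above steps together, the whole transformation can also be chained into a single expression (this is just a sketch that combines the snippets shown above; the logic is unchanged):

from pyspark.sql.functions import explode, explode_outer, first, from_json
from pyspark.sql.types import MapType, StringType

# parse the JSON string into a map, explode it into key/value rows,
# pivot the keys into columns, then repeat once for the nested address
result = (
    df.withColumn("cols", from_json("json_data", MapType(StringType(), StringType())))
      .select("id", explode("cols").alias("col_columns", "col_rows"))
      .groupBy("id").pivot("col_columns").agg(first("col_rows"))
      .withColumn("address", from_json("address", MapType(StringType(), StringType())))
      .select("id", "depts", "name", "sal",
              explode_outer("address").alias("key", "address_city"))
      .drop("key")
)
result.show(truncate=False)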

To solve it, you can use the split function as in the code below.

The function takes two arguments: the first is the column itself and the second is the pattern on which to split the elements of the column into an array.

More information and examples can be found here:

https://sparkbyexamples.com/pyspark/pyspark-convert-string-to-array-column/#:~:text=PySpark%20SQL%20provides%20split(),and%20converting%20it%20into%20ArrayType

from pyspark.sql import functions as F

# split() turns a delimited string column into an array column
df.select(F.split(F.col('depts'), ','))
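
For illustration, here is a minimal self-contained sketch of split on a hypothetical DataFrame where depts is stored as a comma-delimited string (the sample data and column names below are made up for the example):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# hypothetical sample: depts held as a plain comma-delimited string
sample = spark.createDataFrame([(1, "dep01,dep02"), (2, "dep03")], ["id", "depts"])

# split(column, pattern): first argument is the column, second is the split pattern
sample.select("id", F.split(F.col("depts"), ",").alias("depts_array")).show(truncate=False)
# +---+--------------+
# |id |depts_array   |
# +---+--------------+
# |1  |[dep01, dep02]|
# |2  |[dep03]       |
# +---+--------------+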

To dynamically parse and promote the attributes of a JSON string column without a known schema, I'm afraid you cannot do it with pyspark; it can be done with Scala.

For example, when you have some avro files produced by Kafka and you want to be able to dynamically parse the Value of the serialized JSON string:

var df = spark.read.format("avro").load("abfss://abc@def.dfs.core.windows.net/xyz.avro").select("Value")
var df_parsed = spark.read.json(df.as[String])
display(df_parsed)

The key is spark.read.json(df.as[String]) in Scala, which basically:

  1. Converts that DF (here it has only one column of interest; you can of course handle multiple columns of interest in a similar way and merge whichever columns you want) into a Dataset of String
  2. Parses the JSON strings using the standard spark read options, which does not require a schema.

As far as I know, there is no equivalent method exposed to pyspark yet.
