Read a structure nested inside a JSON file into Spark Dataframe in Python using pyspark
I am having some difficulty structuring the data below, and I would appreciate help from someone experienced with this topic.
I need to structure a JSON into a DataFrame in PySpark. I do not have its full schema, but it has the nested structure below, which does not change:
import http.client
import json

conn = http.client.HTTPSConnection("xxx")
payload = ""
conn.request("GET", "xxx", payload)
res = conn.getresponse()
data = res.read().decode("utf-8")
json_obj = json.loads(data)
df = json.dumps(json_obj, indent=2)
Here is the JSON:
{ "car": {
"top1": {
"cl": [
{
"nm": "Setor A",
"prc": "40,00 %",
"tv": [
{
"logo": "https://www.test.com/ddd.jpg",
"nm": "BDFG",
"lk1": "https://www.test.com/ddd/BDFG/",
"lk2": "https://www.test-ddd.com",
"dta": [
{
"nm": "PA",
"cp": "nl",
"vl": "$ 2,50"
},
{
"nm": "FVP",
"cp": "UV",
"vl": "No"
}
],
"prc": "30,00 %"
},
{
"logo": "https://www.test.com/ccc.jpg",
"nm": "BDFH",
"lk1": "https://www.test.com/ddd/BDFH/",
"lk2": "https://www.test-ddd.com",
"dta": [
{
"nm": "PA",
"cp": "nl",
"vl": "$ 2,50"
},
{
"nm": "FVP",
"cp": "UV",
"vl": "No"
}
],
"prc": "70,00 %"
}
]
},
{
"nm": "B",
"prc": "60,00 %",
"tv": [
{
"logo": "https://www.test.com/bomm.jpg",
"nm": "BOOM",
"lk1": "https://www.test.com/ddd/BDFH/",
"lk2": "https://www.test-ddd.com",
"dta": [
{
"nm": "PA",
"cp": "nl",
"vl": "$ 2,50"
},
{
"nm": "FVP",
"cp": "UV",
"vl": "No"
}
],
"prc": "100,00 %"
}
]
}
]
},
"top2": {
      "cl": [{}]
    },
    "top3": {
      "cl": [{}]
    }
  }
}
Sample of the JSON file
I tried to structure my data like this, but without success:
from pyspark.sql.types import StructType, StructField, ArrayType, StringType

schema = StructType(
[
StructField("car", ArrayType(StructType([
StructField("top1", ArrayType(StructType([
StructField("cl", ArrayType(StructType([
StructField("nm", StringType(),True),
StructField("prc", StringType(),True),
StructField("tv", ArrayType(StructType([
StructField("logo", StringType(),True),
StructField("nm", StringType(),True),
StructField("lk1", StringType(),True),
StructField("lk2", StringType(),True),
StructField("dta", ArrayType(StructType([
StructField("nm", StringType(),True),
StructField("cp", StringType(),True),
StructField("vl", StringType(),True)]))),
StructField("prc", StringType(),True)])))])))]))),
StructField("top2", ArrayType(StructType([
StructField("cl", ArrayType(StructType([
StructField("nm", StringType(),True),
StructField("prc", StringType(),True),
StructField("tv", ArrayType(StructType([
StructField("logo", StringType(),True),
StructField("nm", StringType(),True),
StructField("lk1", StringType(),True),
StructField("lk2", StringType(),True),
StructField("dta", ArrayType(StructType([
StructField("nm", StringType(),True),
StructField("cp", StringType(),True),
StructField("vl", StringType(),True)]))),
StructField("prc", StringType(),True)])))])))]))),
StructField("top3", ArrayType(StructType([
StructField("cl", ArrayType(StructType([
StructField("nm", StringType(),True),
StructField("prc", StringType(),True),
StructField("tv", ArrayType(StructType([
StructField("logo", StringType(),True),
StructField("nm", StringType(),True),
StructField("lk1", StringType(),True),
StructField("lk2", StringType(),True),
StructField("dta", ArrayType(StructType([
StructField("nm", StringType(),True),
StructField("cp", StringType(),True),
StructField("vl", StringType(),True)]))),
StructField("prc", StringType(),True)])))])))])))])))])
df2 = sqlContext.read.json(df, schema)
df2.printSchema()
I want to transform it into a flat, structured form.
Is there any function that can facilitate this flattening and structure this data?
You can pass either a JSON file path or an RDD to the json() method.
You need to create an RDD from the JSON string with parallelize(), then pass that RDD to json().
import json
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
rdd = spark.sparkContext.parallelize([json.dumps(json_obj,indent=2)])
# Schema will be inferred automatically. You can pass schema if you want.
json_df = spark.read.json(rdd)