from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = SparkConf().setAppName("PySpark").setMaster("local")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
file = sqlContext.read.json(json_file_path)
file.show()
Outputs:
+--------------------+--------------------+
| data| schema|
+--------------------+--------------------+
|[[The battery is ...|[[[index, integer...|
+--------------------+--------------------+
How do I extract the data using my own schema? My schema code is:
from pyspark.sql.types import ArrayType, StructField, StructType, StringType, IntegerType

schema = StructType([
    StructField('index', IntegerType(), True),
    StructField('content', StringType(), True),
    StructField('label', IntegerType(), True),
    StructField('label_1', StringType(), True),
    StructField('label_2', StringType(), True),
    StructField('label_3', IntegerType(), True),
    StructField('label_4', IntegerType(), True)])
I have tried:

from pyspark.sql.functions import from_json

file.withColumn("data", from_json("data", schema))\
    .show()
But I receive the following error:
cannot resolve 'from_json(`data`)' due to data type mismatch: argument 1 requires string type, however, '`data`' is of array<struct<content:string,index:bigint,label:bigint,label_1:string,label_2:string,label_3:double,label_4:timestamp>> type.;;
The read method already inferred the schema behind the scenes. Try running file.printSchema() and it should show more or less the schema that you want.

The way to unpack the data is to run:

file = file.select(explode(col("data")).as("exploded_data"))
If you want, you can take it to the next level with:

file.select("exploded_data.*")
This will flatten out the schema.
Disclaimer: this is Scala code; the Python version might need tiny adjustments.