Create a dataframe containing schema data for each file
I am trying to create a dataframe, then run a for loop over a bunch of files. The loop should go through each file and append a row to the dataframe for that file, containing the file name and its schema details.
from pyspark.sql.types import StructType, StructField, StringType

# Schema
schema = StructType([
    StructField("filename", StringType(), True),
    StructField("converteddate", StringType(), True),
    StructField("eventdate", StringType(), True)
])

# Create empty dataframe
df = spark.createDataFrame(sc.emptyRDD(), schema)
for files in mvv_list:
    loadName = files
    videoData = spark.read\
        .format('parquet')\
        .options(header='true', inferSchema='true')\
        .load(loadName)
    dataTypeList = videoData.dtypes
    two = dataTypeList[:2]
    print(loadName)
    print(two)
#mnt/master-video/year=2018/month=03/day=24/part-00004-tid-28948428924977-e0fc2-c85b-4296-8a05-94c5af6-2427-c000.snappy.parquet
#[('converteddate', 'timestamp'), ('eventdate', 'timestamp')]
#mnt/master-video/year=2017/month=05/day=12/part-00004-tid-2894842977-e0f21c2-c85b-4296-8a05-94c5af6-2427-c000.snappy.parquet
#[('converteddate', 'timestamp'), ('eventdate', 'date')]
#mnt/master-video/year=2016/month=03/day=24/part-00004-tid-2884924977-e0f2512-c8b-4296-8a05-945a6-2427-c000.snappy.parquet
#[('converteddate', 'timestamp'), ('eventdate', 'string')]
I am struggling to create a row and append it to the dataframe.
Wanted output:
+-----------------------------+-----------------+---------------------+
|filename |converteddate |eventdate |
+-----------------------------+-----------------+---------------------+
|mnt/master-video/year=2018...|timestamp |timestamp |
|mnt/master-video/year=2017...|timestamp |date |
|mnt/master-video/year=2016...|timestamp |string |
+-----------------------------+-----------------+---------------------+
One way is to build your desired data as a list, and then create the DataFrame afterwards (instead of trying to append rows one at a time):
from pyspark.sql.types import StructType, StructField, StringType

data = []
for files in mvv_list:
    loadName = files
    videoData = spark.read\
        .format('parquet')\
        .options(header='true', inferSchema='true')\
        .load(loadName)
    dataTypeDict = dict(videoData.dtypes)
    data.append((loadName, dataTypeDict['converteddate'], dataTypeDict['eventdate']))

schema = StructType([
    StructField("filename", StringType(), True),
    StructField("converteddate", StringType(), True),
    StructField("eventdate", StringType(), True)
])
df = spark.createDataFrame(data, schema)
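One caveat with the `dataTypeDict['converteddate']` lookup: it raises a `KeyError` if a file happens to lack one of the columns. A hedged, pure-Python variant of the append step using `dict.get` with a fallback (the helper name `schema_row` and the `'missing'` placeholder are assumptions for illustration, not part of the original code):

```python
def schema_row(load_name, dtypes, default="missing"):
    """Build a (filename, converteddate_type, eventdate_type) tuple,
    falling back to `default` when a column is absent from the file."""
    type_of = dict(dtypes)  # column name -> type string
    return (load_name,
            type_of.get("converteddate", default),
            type_of.get("eventdate", default))

# Example: a hypothetical file that lacks the 'eventdate' column
row = schema_row("some-file.parquet", [("converteddate", "timestamp")])
print(row)  # ('some-file.parquet', 'timestamp', 'missing')
```

Inside the loop you would then write `data.append(schema_row(loadName, videoData.dtypes))`, and the final `spark.createDataFrame(data, schema)` call stays the same.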