I have a Spark DataFrame with a schema like this:
print(df.schema)
StructType(List(StructField(column_info,ArrayType(StructType(List(StructField(column_datatype,StringType,true),StructField(column_description,StringType,true),StructField(column_length,StringType,true),StructField(column_name,StringType,true),StructField(column_personally_identifiable_information,StringType,true),StructField(column_precision,StringType,true),StructField(column_primary_key,StringType,true),StructField(column_scale,StringType,true),StructField(column_security_classifications,ArrayType(StringType,true),true),StructField(column_sequence_number,StringType,true))),true),true),StructField(file_code_page,StringType,true),StructField(file_delimiter,StringType,true),StructField(file_description,StringType,true),StructField(file_end_of_line_char,StringType,true),StructField(file_extension,StringType,true),StructField(file_footer_rows,StringType,true),StructField(file_header_rows,StringType,true),StructField(file_name,StringType,true),StructField(logs_id,StringType,true),StructField(metadata_version,StringType,true),StructField(oar_id,StringType,true),StructField(schema_version,StringType,true)))
I want to use this schema in another DataFrame. To do so, I manually rewrite it into this format:
from pyspark.sql.types import StructType, StructField, ArrayType, StringType

mdata_schema = StructType([
    StructField('column_info', ArrayType(StructType([
        StructField('column_datatype', StringType(), True),
        StructField('column_description', StringType(), True),
        StructField('column_length', StringType(), True),
        StructField('column_name', StringType(), True),
        StructField('column_personally_identifiable_information', StringType(), True),
        StructField('column_precision', StringType(), True),
        StructField('column_primary_key', StringType(), True),
        StructField('column_scale', StringType(), True),
        StructField('column_security_classifications', ArrayType(StringType(), True), True),
        StructField('column_sequence_number', StringType(), True)]), True), True),
    StructField('file_code_page', StringType(), True),
    StructField('file_delimiter', StringType(), True),
    StructField('file_description', StringType(), True),
    StructField('file_end_of_line_char', StringType(), True),
    StructField('file_extension', StringType(), True),
    StructField('file_footer_rows', StringType(), True),
    StructField('file_header_rows', StringType(), True),
    StructField('file_name', StringType(), True),
    StructField('logs_id', StringType(), True),
    StructField('metadata_version', StringType(), True),
    StructField('oar_id', StringType(), True),
    StructField('schema_version', StringType(), True)
])
Is there a way to avoid this manual adjustment? Is there a built-in method to extract the schema so I can use it automatically in another DataFrame?
As others have said, if that DataFrame is accessible in your notebook, you can read df.schema.fields into a StructType and use that directly as the schema. Otherwise, you can use the function below to generate a schema string from the first DataFrame and paste the output in as the schema for the second one:
from pyspark.sql.types import StructField

# Collect the fields, quoting each name so the printed output is valid Python.
k = []
for f in df.schema.fields:
    x = StructField('"' + f.name + '"', f.dataType, f.nullable)
    k.append(x)

# Turn the list's repr into pasteable code. Note this relies on the repr
# format StructField(name,StringType,true) used by older PySpark versions.
print("StructType(" + str(k).replace(",true", "(),True").replace(",false", "(),False") + ")")