
How do I ensure consistent file sizes in datasets built in Foundry Python Transforms?

My Foundry transform produces a different amount of data on each run, but I want each output file to contain a similar number of rows. I could call DataFrame.count() and then coalesce/repartition, but that requires computing the full dataset and then either caching it or recomputing it. Is there a way to let Spark take care of this?
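
For reference, a minimal sketch of the count-then-repartition workaround described above; the rows_per_file target and the output_df/output names are illustrative, not Foundry settings:

# Count-then-repartition workaround: requires a full pass over the data
# before writing, plus caching or recomputation.
rows_per_file = 1_000_000                                    # illustrative target
output_df = output_df.cache()                                # avoid recomputing after count()
num_files = max(1, -(-output_df.count() // rows_per_file))   # ceiling division
output.write_dataframe(output_df.repartition(num_files))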

You can use the spark.sql.files.maxRecordsPerFile configuration option, setting it per output of the @transform:

output.write_dataframe(
    output_df,
    options={"maxRecordsPerFile": "1000000"},
)
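
For context, here is a minimal sketch of how this call fits into a complete transform; the dataset paths are placeholders:

from transforms.api import transform, Input, Output

@transform(
    output=Output("/path/to/output_dataset"),  # placeholder path
    source=Input("/path/to/input_dataset"),    # placeholder path
)
def compute(output, source):
    output_df = source.dataframe()
    # Spark caps each written file at roughly 1,000,000 records.
    output.write_dataframe(
        output_df,
        options={"maxRecordsPerFile": "1000000"},
    )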

proggeo's answer is useful if the only thing you care about is the number of records per file. However, it is sometimes useful to bucket your data so that Foundry can optimize downstream operations such as Contour Analysis or other transforms.

In those cases you can use something like:

bucket_column = 'equipment_number'
num_files = 8

# Repartition in memory by the bucket column so rows with the same
# bucket value land in the same output file.
output_df = output_df.repartition(num_files, bucket_column)

# Write with bucketing metadata so Foundry knows how the data is laid out.
output.write_dataframe(
    output_df,
    bucket_cols=[bucket_column],
    bucket_count=num_files,
)

If your bucket column is well distributed, this keeps a similar number of rows in each dataset file.
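
If you are unsure whether the column is a good candidate, a quick illustrative check of its distribution before writing (not part of the original answer):

# Inspect how evenly rows spread across bucket values; heavy skew here
# means some files will be much larger than others.
(
    output_df
    .groupBy(bucket_column)
    .count()
    .orderBy("count", ascending=False)
    .show(20)
)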
