
PySpark partitioned write to MinIO (S3) fails

I am writing files into MinIO S3 using PySpark 3.1.2. I am using partitioning, so data should be stored under batch_id directories, e.g.:

s3a://0001/transactions/batch_id=1, s3a://0001/transactions/batch_id=2, etc.

Everything works perfectly fine when writing to the local file system.

However, when I am using S3 with the partitioned committer (https://hadoop.apache.org/docs/r3.1.1/hadoop-aws/tools/hadoop-aws/committers.html)

with the option "partitionOverwriteMode" = "static", e.g. data_frame.write.mode("overwrite").partitionBy("batch_id").orc(output_path),

the whole path including "transactions" is being overwritten (instead of overwriting only the given partition).
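For reference, a minimal sketch of the write in question; the sample data and bucket path are placeholders, and it assumes the S3A settings listed below have already been applied:

    from pyspark.sql import SparkSession

    spark_session = SparkSession.builder.getOrCreate()

    # Placeholder data and path standing in for the real batches.
    data_frame = spark_session.createDataFrame(
        [(1, "x"), (2, "y")], ["batch_id", "value"]
    )
    output_path = "s3a://0001/transactions"

    (
        data_frame.write
        .mode("overwrite")
        .option("partitionOverwriteMode", "static")
        .partitionBy("batch_id")
        .orc(output_path)
    )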

Settings:

        spark_session.sparkContext._jsc.hadoopConfiguration().set(
            "fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem"
        )
        spark_session.sparkContext._jsc.hadoopConfiguration().set(
            "fs.s3a.path.style.access", "true"
        )
        spark_session.sparkContext._jsc.hadoopConfiguration().set(
            "fs.s3a.committer.magic.enabled", "true"
        )
        spark_session.sparkContext._jsc.hadoopConfiguration().set(
            "fs.s3a.committer.name", "partitioned"
        )
        spark_session.sparkContext._jsc.hadoopConfiguration().set(
            "fs.s3a.committer.staging.conflict-mode", "replace"
        )
        spark_session.sparkContext._jsc.hadoopConfiguration().set(
            "fs.s3a.committer.staging.abort.pending.uploads", "true"
        )
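For reference, the same properties can equivalently be passed when building the SparkSession via the "spark.hadoop." prefix, which Spark copies into the Hadoop configuration. This is only a sketch; the app name and MinIO endpoint below are placeholders, not values from my setup:

    from pyspark.sql import SparkSession

    # Sketch: same S3A committer settings set at session build time via the
    # "spark.hadoop." prefix. The endpoint value is a placeholder.
    spark_session = (
        SparkSession.builder
        .appName("minio-partitioned-write")
        .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
        .config("spark.hadoop.fs.s3a.endpoint", "http://minio:9000")
        .config("spark.hadoop.fs.s3a.path.style.access", "true")
        .config("spark.hadoop.fs.s3a.committer.magic.enabled", "true")
        .config("spark.hadoop.fs.s3a.committer.name", "partitioned")
        .config("spark.hadoop.fs.s3a.committer.staging.conflict-mode", "replace")
        .config("spark.hadoop.fs.s3a.committer.staging.abort.pending.uploads", "true")
        .getOrCreate()
    )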

So I have added more jars:

spark-hadoop-cloud_2.13-3.2.0.jar

And followed the Spark cloud integration guide: https://spark.apache.org/docs/latest/cloud-integration.html

Which boiled down to adding:

"spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2"
"spark.sql.sources.commitProtocolClass", "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol"
"spark.sql.parquet.output.committer.class", "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter"

And switched back to Parquet. Now I am able to overwrite the partition without overwriting the whole path.
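Putting it together, a minimal sketch of the working setup; the endpoint, sample data, and bucket path are placeholders:

    from pyspark.sql import SparkSession

    # Sketch: committer bindings from the cloud-integration guide combined with
    # the S3A partitioned committer settings. Endpoint and path are placeholders.
    spark_session = (
        SparkSession.builder
        .config("spark.hadoop.fs.s3a.endpoint", "http://minio:9000")
        .config("spark.hadoop.fs.s3a.path.style.access", "true")
        .config("spark.hadoop.fs.s3a.committer.name", "partitioned")
        .config("spark.hadoop.fs.s3a.committer.staging.conflict-mode", "replace")
        .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
        .config("spark.sql.sources.commitProtocolClass",
                "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
        .config("spark.sql.parquet.output.committer.class",
                "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
        .getOrCreate()
    )

    data_frame = spark_session.createDataFrame(
        [(1, "x"), (2, "y")], ["batch_id", "value"]
    )

    # Overwrites only the batch_id partitions being written,
    # not the whole transactions/ prefix.
    (
        data_frame.write
        .mode("overwrite")
        .partitionBy("batch_id")
        .parquet("s3a://0001/transactions")
    )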


 