
AWS Glue, output one file with partitions

I have a Glue ETL script that takes a partitioned Athena table and outputs it to CSV. The table is partitioned on two criteria, unit and site. When the Glue job runs, it creates a separate CSV file for every combination of the unit and site partitions. Instead, I want a single output file containing all the partitions, similar to the structure of the Athena table.

我對 "datasource0.toDF().repartition(1)" 有點玩弄,但我不確定它如何與 AWS 提供的腳本交互。 我已經用鑲木地板完成了這個,但這個腳本的結構不同

Please note that I have removed most of the mappings from the script below.

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "testdata-2018-2019", table_name = "testdata", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "formatted-test-2018-2019", table_name = "testdata", transformation_ctx = "datasource0")
datasource0.toDF().repartition(1)
## @type: ApplyMapping
## @args: [mapping = [("time", "string", "time", "string"), ("unit", "string", "unit", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("time", "string", "time", "string"), ("`data.pv`", "double", "data.pv", "double"), ("site", "string", "site", "string"), ("unit", "string", "unit", "string")], transformation_ctx = "applymapping1")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://testbucket/ParsedCSV-Data"}, format = "csv", transformation_ctx = "datasink2"]
## @return: datasink2
## @inputs: [frame = applymapping1]
datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://buckettest/ParsedCSV-Data"}, format = "csv", transformation_ctx = "datasink2").repartition(1)
job.commit()

I would like to modify the above script so that it outputs only one CSV file that also includes the partitioned columns. How can I achieve this?

You need to repartition the DynamicFrame before writing it.

repartitioned1 = applymapping1.repartition(1)
datasink2 = glueContext.write_dynamic_frame.from_options(frame = repartitioned1, connection_type = "s3", connection_options = {"path": "s3://20182019testdata/ParsedCSV-Data"}, format = "csv", transformation_ctx = "datasink2")
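For context: repartition(1) shuffles all records onto a single Spark partition, so the CSV sink writes exactly one part file under the target prefix; the trade-off is that the whole dataset is funnelled through a single task.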

As for including the partitioned columns in the output file, I don't think that is possible. As a workaround, you can copy the column into a new column with a different name.

from awsglue.dynamicframe import DynamicFrame

df = applymapping1.toDF()
repartitioned_with_new_column_df = df.withColumn("_column1", df["column1"]).repartition(1)
dyf = DynamicFrame.fromDF(repartitioned_with_new_column_df, glueContext, "enriched")
datasink2 = glueContext.write_dynamic_frame.from_options(frame = dyf, connection_type = "s3", connection_options = {"path": "s3://20182019testdata/ParsedCSV-Data", "partitionKeys": ["_column1"]}, format = "csv", transformation_ctx = "datasink2")
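Note that columns listed in partitionKeys end up in the S3 folder names rather than inside the CSV files (the same behaviour as Spark's partitionBy), which is why the value is copied into _column1 first: the copy is consumed as the partition key while the original column remains in every row of the output.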

Per AWS support, you can use .coalesce(1), something like this:

dynamic_Frame = applymapping1.coalesce(1)
datasink2 = glueContext.write_dynamic_frame.from_options(frame = dynamic_Frame, connection_type = "s3", connection_options = {"path": "s3://20182019testdata/ParsedCSV-Data"}, format = "csv", transformation_ctx = "datasink2")

It worked in my case.
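As a side note, coalesce(1) merges the existing partitions without the full shuffle that repartition(1) triggers, so it is usually the cheaper option when the goal is simply to reduce the number of output files.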

