
Moving data from RDS to S3 using Glue

I have a table in Amazon Aurora Postgres. I need to move that table to an S3 bucket in CSV format. I created the following PySpark code in AWS Glue, but instead of storing a single CSV file in the S3 bucket, it creates multiple files named like run-XXX-part1. Is there a way to export the RDS table into a single CSV file in S3?

Code:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "test1", table_name = "testdb_public_reports3", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "test1", table_name = "testdb_public_reports3", transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: [mapping = [("orderapprovedby", "string", "orderapprovedby", "string"), ("lname", "string", "lname", "string"), ("unitofmeasurement", "string", "unitofmeasurement", "string"), ("orderrequesteddtm", "timestamp", "orderrequesteddtm", "timestamp"), ("orderdeliverydtm", "timestamp", "orderdeliverydtm", "timestamp"), ("allowedqty", "decimal(10,2)", "allowedqty", "decimal(10,2)"), ("addressid", "int", "addressid", "int"), ("fname", "string", "fname", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("mname", "string", "mname", "string"), ("lname", "string", "lname", "string"), ("designation", "string", "designation", "string"), ("joiningtime", "timestamp", "joiningtime", "timestamp"), ("leavingtime", "timestamp", "leavingtime", "timestamp"), ("fname", "string", "fname", "string")], transformation_ctx = "applymapping1")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://deloitte-homefront-poc/PROCESSED"}, format = "csv", transformation_ctx = "datasink2"]
## @return: datasink2
## @inputs: [frame = applymapping1]
datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://path"}, format = "csv", transformation_ctx = "datasink2")
job.commit()

Using Glue and PySpark just to export data is not a good choice. You can instead follow the step-by-step guide provided by AWS: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html
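That guide exports query results with the aws_s3.query_export_to_s3 function, which Aurora PostgreSQL provides once the aws_s3 extension is installed and the cluster has an IAM role allowing S3 writes. Below is a minimal sketch of invoking it from Python with psycopg2; the connection details, bucket name, region, and the public.reports3 table name are assumptions for illustration:

import psycopg2

# Placeholder connection details; use your Aurora endpoint and credentials.
conn = psycopg2.connect(
    host="my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
    dbname="testdb", user="postgres", password="...")

# Export the query result straight from the database to S3 as a CSV file.
export_sql = """
    SELECT * FROM aws_s3.query_export_to_s3(
        'SELECT * FROM public.reports3',
        aws_commons.create_s3_uri('my-bucket', 'reports3.csv', 'us-east-1'),
        options := 'format csv, header'
    );
"""

with conn, conn.cursor() as cur:
    cur.execute(export_sql)
    print(cur.fetchone())  # (rows_uploaded, files_uploaded, bytes_uploaded)

conn.close()

Because the database writes the object directly, no Spark job is involved and the output is exactly one CSV file at the S3 key you name.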

If you still want to use Glue and want a single output file:

#replace
datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://path"}, format = "csv", transformation_ctx = "datasink2")

#with
df = applymapping1.toDF()
df.repartition(1).write.csv("s3://path")  # repartition(1) forces a single output file
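Note that repartition(1) shuffles all rows to a single worker before writing, so it can be slow or memory-bound for large tables, and Spark still names the result part-00000-....csv under the target prefix rather than giving you an exact file name. A variant of the same idea, using coalesce(1) (which merges partitions without a full shuffle) and adding a header row; the output path is the same placeholder used above:

df = applymapping1.toDF()

# coalesce(1) merges existing partitions without a full shuffle and
# still yields a single part file under the target prefix.
(df.coalesce(1)
   .write
   .mode("overwrite")            # replace any previous run's output
   .option("header", "true")     # include column names in the CSV
   .csv("s3://path"))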
