AWS Glue: How to partition S3 Bucket into multiple redshift tables

I have a basic AWS Glue job set up that reads from an S3 bucket with multiple folders:

s3://mybucket/table1
s3://mybucket/table2
s3://mybucket/table3

And so on. The files in these folders all have exactly the same format, and I want them inserted into different Redshift tables (table1, table2, table3) in the same database. There seems to be a way to automate this from one S3 bucket to another, but I can't find documentation on how to go from S3 to Redshift. Is this possible?

My current code is just the basic Glue template code generated for this job, where partition_0 holds the string representation of the folder name:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['TempDir','JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the whole bucket as a single catalog table; the crawler exposes the
# folder name (table1/table2/table3) as the partition_0 column.
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "test", table_name = "all_data_bucket", transformation_ctx = "datasource0")

# Keep the data field and the partition column, both as strings.
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("dataField1", "string", "dataField1", "string"), ("partition_0", "string", "partition_0", "string")], transformation_ctx = "applymapping1")

# Resolve ambiguously typed columns by splitting them into one column per type.
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_cols", transformation_ctx = "resolvechoice2")

# Drop fields that contain only nulls.
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")

# Write everything to a single Redshift table over the catalogued JDBC connection.
datasink4 = glueContext.write_dynamic_frame.from_jdbc_conf(frame = dropnullfields3, catalog_connection = "REDSHIFT", connection_options = {"dbtable": "all_data_table", "database": "dev"}, redshift_tmp_dir = args["TempDir"], transformation_ctx = "datasink4")
job.commit()

1) Crawl the data as three separate tables
2) Use boto3 to list the tables in that database
3) Loop through the list and apply your Glue code to load the data into Redshift (see the sketch below)
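A minimal sketch of steps 2 and 3, assuming the crawler has registered table1, table2, and table3 in the same "test" catalog database used above, and that each Redshift target table is named after its source table (both are assumptions, adjust the names to your setup):

import sys
import boto3
from awsglue.transforms import ApplyMapping, ResolveChoice, DropNullFields
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['TempDir', 'JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Step 2: list every table the crawler created in the catalog database.
glue_client = boto3.client('glue')
paginator = glue_client.get_paginator('get_tables')
table_names = []
for page in paginator.paginate(DatabaseName='test'):
    table_names.extend(t['Name'] for t in page['TableList'])

# Step 3: run the same transform chain once per table, writing each one
# to a Redshift table of the same name.
for name in table_names:
    source = glueContext.create_dynamic_frame.from_catalog(
        database='test', table_name=name,
        transformation_ctx='source_' + name)
    mapped = ApplyMapping.apply(
        frame=source,
        mappings=[('dataField1', 'string', 'dataField1', 'string')],
        transformation_ctx='mapped_' + name)
    resolved = ResolveChoice.apply(
        frame=mapped, choice='make_cols',
        transformation_ctx='resolved_' + name)
    cleaned = DropNullFields.apply(
        frame=resolved, transformation_ctx='cleaned_' + name)
    glueContext.write_dynamic_frame.from_jdbc_conf(
        frame=cleaned,
        catalog_connection='REDSHIFT',
        connection_options={'dbtable': name, 'database': 'dev'},
        redshift_tmp_dir=args['TempDir'],
        transformation_ctx='sink_' + name)

job.commit()

Because each folder is now its own catalog table, there is no partition_0 column anymore, so the mapping only carries the data field. If the target Redshift tables don't exist yet, the Glue Redshift writer will generally create them from the frame's schema, but creating them up front gives you control over the column types.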
