UNLOAD to a new file when running in Redshift
I am trying to UNLOAD a file to an S3 bucket. However, I don't want to overwrite it; I want to create a new file every time I run the command. How can I achieve this?
unload ('select * from table1')
to 's3://bucket/file1/file2/file3/table1.csv'
iam_role 'arn:aws:iam::0934857378:role/RedshiftAccessRole,arn:aws:iam::435874575846546:role/RedshiftAccessRole'
DELIMITER ','
PARALLEL OFF
HEADER
Just change the destination path specified in the "TO" section.
If you wish to do this programmatically, you could do it in whatever script/command sends the UNLOAD command.
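As a minimal sketch of that approach, the script that issues the UNLOAD can embed a timestamp in the destination path so each run writes a new file. The helper names below and the timestamp naming scheme are assumptions, not part of the original answer; executing the statement through a driver such as psycopg2 is left out.

```python
from datetime import datetime, timezone

def build_unload_path(prefix: str, table: str) -> str:
    """Build a unique S3 destination by embedding a UTC timestamp in the filename."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{prefix}/{table}_{stamp}.csv"

def unload_sql(path: str, iam_role: str) -> str:
    """Compose the UNLOAD statement around the generated path."""
    return (
        "unload ('select * from table1') "
        f"to '{path}' "
        f"iam_role '{iam_role}' "
        "DELIMITER ',' PARALLEL OFF HEADER"
    )

path = build_unload_path("s3://bucket/file1/file2/file3", "table1")
# e.g. s3://bucket/file1/file2/file3/table1_<UTC timestamp>.csv
```

You would then send `unload_sql(path, your_iam_role)` to Redshift with whatever client your script already uses.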
You might be able to do it via a Stored Procedure, by keeping a table with the last file number and writing code to retrieve and increment it.
Or, you could write an AWS Lambda function that is triggered upon creation of the file. The Lambda function could then copy the object to a different path/filename and delete the original object.
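A minimal sketch of such a Lambda handler, assuming an S3 "object created" trigger delivering the standard S3 event shape; the renaming scheme (inserting the event timestamp before the extension) is an assumption for illustration:

```python
def versioned_key(key: str, event_time: str) -> str:
    """Insert the S3 event timestamp before the file extension to make the name unique."""
    stamp = "".join(c for c in event_time if c.isdigit())
    base, dot, ext = key.rpartition(".")
    return f"{base}_{stamp}.{ext}" if dot else f"{key}_{stamp}"

def handler(event, context):
    import boto3  # deferred import; boto3 is preinstalled in the Lambda runtime
    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        new_key = versioned_key(key, record["eventTime"])
        # Copy the uploaded object to the new name, then remove the original
        s3.copy_object(
            Bucket=bucket,
            CopySource={"Bucket": bucket, "Key": key},
            Key=new_key,
        )
        s3.delete_object(Bucket=bucket, Key=key)
```

Note the Lambda's execution role needs `s3:GetObject`, `s3:PutObject`, and `s3:DeleteObject` on the bucket, and the trigger should be scoped to the UNLOAD prefix so the copy itself does not re-trigger the function in a loop.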