
Redshift Unload command with CSV extension

I'm using the following Unload command -

unload ('select * from '')
to 's3://summary.csv'
CREDENTIALS 'aws_access_key_id='';aws_secret_access_key=''' parallel off allowoverwrite CSV HEADER;

The file created in S3 is summary.csv000

If I remove the file extension from the command, like below:

unload ('select * from '')
to 's3://summary'
CREDENTIALS 'aws_access_key_id='';aws_secret_access_key=''' parallel off allowoverwrite CSV HEADER;

The file created in S3 is summary000

Is there a way to get summary.csv, so I don't have to change the file extension before importing it into Excel?

Thanks.

Actually, a lot of folks have asked similar questions. Right now it's not possible to give the files an extension (Parquet files can have one, though).

The reason behind this is that Redshift exports in parallel by default, which is a good thing: each slice exports its own data. Also, from the docs:

PARALLEL

By default, UNLOAD writes data in parallel to multiple files, according to the number of slices in the cluster. The default option is ON or TRUE. If PARALLEL is OFF or FALSE, UNLOAD writes to one or more data files serially, sorted absolutely according to the ORDER BY clause, if one is used. The maximum size for a data file is 6.2 GB.

So Redshift has to start a new file after about 6.2 GB, and that's why it appends a numeric suffix to every file it writes.

How do we solve this?

There is no native option in Redshift, but we can do some workaround with Lambda:

  1. Create a new S3 bucket, and a folder inside it specifically for this process (e.g. s3://unloadbucket/redshift-files/).
  2. Point your UNLOAD at this folder.
  3. Trigger a Lambda function on the S3 put-object event.
  4. The Lambda function should then:
    1. Download the file (if it is large, use EFS)
    2. Rename it with .csv
    3. Upload it to the same bucket (or a different one) under a different path (e.g. s3://unloadbucket/csvfiles/)
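The steps above can be sketched as a small Lambda handler. This is only an illustration: the bucket layout (redshift-files/ in, csvfiles/ out) follows the example paths above, and the helper name csv_key is mine. For a plain rename, a server-side copy also avoids the download step entirely:

```python
# Hypothetical Lambda sketch for renaming UNLOAD part files to .csv.
# The "redshift-files/" and "csvfiles/" prefixes are the example paths
# from the steps above, not anything Redshift requires.
import os
import urllib.parse


def csv_key(key: str) -> str:
    """Map an UNLOAD part key like 'redshift-files/summary000'
    to 'csvfiles/summary000.csv'."""
    name = os.path.basename(key)
    return f"csvfiles/{name}.csv"


def handler(event, context):
    import boto3  # imported lazily so the pure helper above stays testable

    s3 = boto3.client("s3")
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    # Server-side copy: the object never leaves S3, so no download/EFS
    # is needed unless you also want to transform the contents.
    s3.copy_object(
        Bucket=bucket,
        CopySource={"Bucket": bucket, "Key": key},
        Key=csv_key(key),
    )
```

Using copy_object rather than download-then-upload keeps the function fast and within Lambda's storage limits even for multi-GB part files.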

Or, even simpler, use a shell/PowerShell script to do the following:

  1. Download the file
  2. Rename it with .csv
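As a sketch with the AWS CLI (assuming it is installed and configured; the bucket path is the example one from above):

```shell
#!/bin/sh
# Download the UNLOAD part file; the path is the example from above.
aws s3 cp s3://unloadbucket/redshift-files/summary000 ./summary000

# Rename it so Excel recognizes it as CSV.
mv ./summary000 ./summary.csv
```

The same idea works in PowerShell with `Read-S3Object` and `Rename-Item`, or you can skip the download and rename in place with `aws s3 mv`.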

How do we download the file using shell/MobaXterm?

As per the AWS documentation for the UNLOAD command, it's possible to save data as CSV.

In your case, this is what your code would look like:

unload ('select * from '')
to 's3://summary/'
CREDENTIALS 'aws_access_key_id='';aws_secret_access_key='''
CSV
parallel off
allowoverwrite
HEADER;
