
How to download multiple files having same prefix as filename from various folders of S3 bucket?

Suppose I have an S3 bucket named bucketSample.

And I have different folders in it, like abc, def and xyz.

Now, in all of the above folders, I have multiple files whose names start with the prefix hij_.

I want to download all files with the prefix hij_ (for example hij_qwe.txt, hij_rty.pdf, etc.).

I have gone through various approaches, but with GetObject I have to provide a specific object name, and I only know the prefix.
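For example, something like this works only when the exact key is already known (the key name here is just made up for illustration):

// A minimal sketch of what I mean: getObject() needs the full key;
// there is no way to pass just the prefix "hij_" here.
S3Object object = s3Client.getObject("bucketSample", "abc/hij_qwe.txt");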

And with TransferManager I can download all files of the folder abc, but not only the files that have a particular prefix.
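What I tried with TransferManager looks roughly like this (the local path is just an example); it fetches everything under abc, with no way to filter by file-name prefix:

// Rough sketch of my attempt: downloads every object under the "abc" folder,
// not only the hij_ files.
TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build();
MultipleFileDownload download = tm.downloadDirectory("bucketSample", "abc", new File("/tmp/downloads"));
download.waitForCompletion(); // throws InterruptedException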

So, is there any way to download only the files that have the prefix hij_?

public void getFiles(final String bucketName, final Set<String> keys, final Set<String> prefixes) {
    try {
        // s3Client is an AmazonS3 client initialized elsewhere
        ObjectListing objectListing = s3Client.listObjects(bucketName); // lists the objects in the bucket, one page at a time
        while (true) {
            for (S3ObjectSummary summary : objectListing.getObjectSummaries()) {
                for (String key : keys) {
                    for (String prefix : prefixes) {
                        if (summary.getKey().startsWith(key + "/" + prefix)) {
                            // HERE YOU CAN GET THE FULL KEY NAME AND HENCE DOWNLOAD IT
                            // TO A NEW FILE USING THE TRANSFER MANAGER
                        }
                    }
                }
            }
            if (objectListing.isTruncated()) {
                objectListing = s3Client.listNextBatchOfObjects(objectListing);
            } else {
                break;
            }
        }
    } catch (AmazonServiceException e) {
        // handle/log the error
    }
}

Read about the AWS S3 directory structure here: How does AWS S3 store files? (directory structure)

So, for your use case, key + "/" + prefix acts as the prefix of the objects stored in the S3 bucket. By comparing that prefix against all the objects in the S3 bucket, you can get the full key name of every matching object.
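As a rough sketch (not part of the original snippet; transferManager and targetDir are assumed to be defined in your code), the matched key can be handed straight to TransferManager inside the if block above:

// Minimal sketch: download one matched object with TransferManager.
// "transferManager" and "targetDir" are assumed to exist elsewhere in your code.
String fullKey = summary.getKey();                        // e.g. "abc/hij_qwe.txt"
File localFile = new File(targetDir, fullKey.replace('/', File.separatorChar));
localFile.getParentFile().mkdirs();                       // make sure the local folder exists
Download download = transferManager.download(bucketName, fullKey, localFile);
download.waitForCompletion();                             // throws InterruptedException, handle or declare it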

With Python you can use the boto3 library, which I have found very useful for solving similar situations.

Sample code:

import boto3
import os

KEY = ''
SECRET = ''
download_folder = os.path.join(os.path.expanduser('~'), 'Downloads')
bucket = 'bucketSample'
folders = ['abc', 'def', 'xyz']
prefixes = ['hij_']

try:
    # Resource interface, used below for the actual download; pass the
    # aws_access_key_id and aws_secret_access_key for your bucket if available
    s3 = boto3.resource(
        's3',
        aws_access_key_id=KEY,
        aws_secret_access_key=SECRET)

    # Low-level client, needed for the paginator so that only objects with the
    # given prefixes are listed instead of iterating over every object
    client = boto3.client(
        's3',
        aws_access_key_id=KEY,
        aws_secret_access_key=SECRET)

    # Get paginated listings per folder/prefix combination
    paginator = client.get_paginator('list_objects')

    for folder in folders:
        for file_prefix in prefixes:
            prefix = folder + '/' + file_prefix  # e.g. 'abc/hij_'
            page_iterator = paginator.paginate(Bucket=bucket, Prefix=prefix)

            for page in page_iterator:
                if 'Contents' in page:
                    for content in page['Contents']:
                        file_path = os.path.join(download_folder, content['Key'])
                        # Create the local sub-folder (e.g. ~/Downloads/abc) before downloading
                        os.makedirs(os.path.dirname(file_path), exist_ok=True)
                        s3.meta.client.download_file(bucket, str(content['Key']), file_path)
except Exception as e:
    print('An error occurred:', e)
