
AWS S3: download all files inside a specific folder using the PHP SDK

Can anyone help me with this one?

I want to download all the files from a folder inside my bucket to a directory on my computer with the same name.

Let's say there is a bucket named "ABC" with a folder inside it called "DEF", which contains multiple files.

Now I want to download it into my project folder "/opt/lampp/htdocs/porject/files/download/", where the "DEF" folder also exists.

So, can anyone help me and give me the code for this?

Thanks in advance.

=============

ERROR:

Fatal error: Uncaught exception 'UnexpectedValueException' with message 'RecursiveDirectoryIterator::__construct() [recursivedirectoryiterator.--construct]: Unable to find the wrapper "s3" - did you forget to enable it when you configured PHP?' in /opt/lampp/htdocs/demo/amazon-s3/test.php:21 Stack trace: #0 /opt/lampp/htdocs/demo/amazon-s3/test.php(21): RecursiveDirectoryIterator->__construct('s3://bucketname/folder...') #1 {main} thrown in /opt/lampp/htdocs/demo/amazon-s3/test.php on line 21
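(For context: this exception is raised when PHP sees an "s3://" path before the SDK's stream wrapper has been registered. A minimal sketch of the missing call, assuming an S3Client instance named $client as in the answers below:

$client->registerStreamWrapper(); // must run before RecursiveDirectoryIterator('s3://...')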

Mark's answer is totally valid, but there is also an even easier way to do this with the AWS SDK for PHP, using the downloadBucket() method. Here's an example (assuming $client is an instance of the S3 client):

$bucket = 'YOUR_BUCKET_NAME';
$directory = 'YOUR_FOLDER_OR_KEY_PREFIX_IN_S3';
$basePath = 'YOUR_LOCAL_PATH/';

$client->downloadBucket($basePath . $directory, $bucket, $directory);

The cool thing about this method is that it queues up only the files that don't already exist (or have been modified) in the local directory, and attempts to download them in parallel, in order to speed up the overall download time. There is a 4th argument to the method (see the link) that includes other options, like setting how many parallel downloads you want to happen at a time.
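For instance, a hedged sketch of that options argument (the option names 'concurrency' and 'debug' are assumptions based on the SDK v2 documentation; verify them against your SDK version):

$client->downloadBucket(
    $basePath . $directory, // local destination directory
    $bucket,                // source bucket
    $directory,             // key prefix to download
    array(
        'concurrency' => 20,  // assumed option: max parallel downloads
        'debug'       => true // assumed option: print transfer info
    )
);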

Pretty straightforward using the Amazon S3 stream wrapper:

include dirname(__FILE__) . '/aws.phar';

$baseDirectory = dirname(__FILE__) . '/' . $myDirectoryName;

$client = \Aws\S3\S3Client::factory(array(
    'key'    => "<my key>",
    'secret' => "<my secret>"
));

// Register the s3:// stream wrapper so SPL iterators can traverse the bucket
$client->registerStreamWrapper();

$bucket = 's3://mys3bucket/' . $myDirectoryName;

// SELF_FIRST makes directories come up before their contents,
// so mkdir() runs before any file inside that directory is written
$iterator = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($bucket),
    RecursiveIteratorIterator::SELF_FIRST
);

foreach ($iterator as $name => $object) {
    if ($object->getFilename() !== '.' && $object->getFilename() !== '..') {
        // Path of the object relative to the S3 "directory" we started from
        $relative = substr($name, strlen($bucket) + 1);
        if (!file_exists($baseDirectory . '/' . $relative)) {
            if ($object->isDir()) {
                mkdir($baseDirectory . '/' . $relative, 0777, true);
            } else {
                file_put_contents(
                    $baseDirectory . '/' . $relative,
                    file_get_contents($name)
                );
            }
        }
    }
}
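To map this onto the question's concrete names (bucket "ABC", folder "DEF", local target "/opt/lampp/htdocs/porject/files/download/"), you would replace the corresponding assignments in the code above; the values below are illustrative only, not part of the original answer:

$myDirectoryName = 'DEF';
$baseDirectory   = '/opt/lampp/htdocs/porject/files/download/' . $myDirectoryName;
$bucket          = 's3://ABC/' . $myDirectoryName;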
