
AWS lambda function delete files from S3 folder

I import some data from Funnel into an S3 bucket. After that, a Lambda function copies the data to a table in Redshift. I then tried to delete all copied objects from the bucket folder, but I keep getting a timeout.

This is my code:

const Promise = require('bluebird');
const { Pool } = require('pg');
const AWS = require('aws-sdk');

async function emptyS3Directory(bucket, dir) {
    const listParams = {
        Bucket: bucket,
        Prefix: dir
    };
    var s3 = new AWS.S3();
    s3.listObjectsV2(listParams, function(err, data) { // Here I always get a timeout
    });
    .....
}
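One pitfall in the snippet above is that the callback-style `listObjectsV2` call returns immediately, so the `async` function can finish before the request ever completes. The SDK's `.promise()` method turns the call into an awaitable promise. A minimal sketch of that pattern, using a hypothetical stub in place of the real `s3.listObjectsV2` so it runs without AWS credentials:

```javascript
// Stub standing in for s3.listObjectsV2 -- same (params, callback) shape as the SDK.
function listObjectsV2Stub(params, callback) {
  setImmediate(() => callback(null, { Contents: [{ Key: params.Prefix + 'a.csv' }] }));
}

// Wrap a Node-style callback API in a Promise so it can be awaited.
function promisify(fn) {
  return (params) =>
    new Promise((resolve, reject) => {
      fn(params, (err, data) => (err ? reject(err) : resolve(data)));
    });
}

async function listDir(bucket, dir) {
  // With the real SDK you would write: await s3.listObjectsV2(params).promise()
  const list = promisify(listObjectsV2Stub);
  const data = await list({ Bucket: bucket, Prefix: dir });
  return data.Contents.map(o => o.Key);
}

listDir('my-bucket', 'funnel/').then(keys => console.log(keys));
```

With the real client, `await s3.listObjectsV2(listParams).promise()` gives the same behaviour without a hand-rolled wrapper.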

EDIT.... This is the code of the function.

async function DeleteAllDataFromDir(bucket, dir) {
    const listParams = {
        Bucket: bucket,
        Prefix: dir
    };
    var s3 = new AWS.S3();

    const listedObjects = await s3.listObjects(listParams).promise();
    console.log("response", listedObjects);
    if (listedObjects.Contents.length === 0) return;

    const deleteParams = {
        Bucket: bucket,
        Delete: { Objects: [] }
    };

    listedObjects.Contents.forEach(({ Key }) => {
        deleteParams.Delete.Objects.push({ Key });
    });

    await s3.deleteObjects(deleteParams).promise();

    if (listedObjects.IsTruncated) await DeleteAllDataFromDir(bucket, dir);
}
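A side note on the deletion step: `deleteObjects` accepts at most 1,000 keys per request. The recursion above stays under that limit because each `listObjects` page returns at most 1,000 keys, but if you restructure the code to collect all keys first, they need to be batched. A small helper for that (the name `chunkKeys` is hypothetical):

```javascript
// deleteObjects accepts at most 1,000 keys per call, so split a long
// key list into batches of that size, each shaped as [{ Key }, ...].
function chunkKeys(keys, size = 1000) {
  const batches = [];
  for (let i = 0; i < keys.length; i += size) {
    batches.push(keys.slice(i, i + size).map(Key => ({ Key })));
  }
  return batches;
}

// Sketch of usage with the SDK:
// for (const batch of chunkKeys(allKeys)) {
//   await s3.deleteObjects({ Bucket: bucket, Delete: { Objects: batch } }).promise();
// }
```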

The first time I set the timeout to 2 minutes, then I changed it to 10 minutes, and I get the same error:

{
    "errorType": "NetworkingError",
    "errorMessage": "connect ETIMEDOUT IP:port",
    "code": "NetworkingError",
    "message": "connect ETIMEDOUT IP:port",
    "errno": "ETIMEDOUT",
    "syscall": "connect",
    "address": "IP",
    "port": port,
    "region": "eu-west-2",
    "hostname": "hostName",
    "retryable": true,
    "time": "2020-12-10T08:36:29.984Z",
    "stack": [
        "Error: connect ETIMEDOUT 52.95.148.74:443",
        "    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1107:14)"
    ]
}

Based on the nature of the error, it appears that your bucket may reside in a different region than your lambda function.

Provide the region hash as an option when constructing your S3 client.

var s3 = new AWS.S3({region: 'bucket-region-hash'});

To figure out the region hash, go to the S3 Management Console. Then from the sidebar, click "Buckets". In the resulting view, you'll find the region hash. It's the one marked in gold as shown in the picture below.

(screenshot: S3 Management Console bucket list)
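The region can also be looked up programmatically with the SDK's `getBucketLocation` call. Its `LocationConstraint` field uses legacy values for a couple of regions, so a small normalising helper is handy (the helper name is hypothetical; the legacy mappings below are the documented ones: an empty value means `us-east-1`, and `"EU"` means `eu-west-1`):

```javascript
// Normalise the LocationConstraint returned by s3.getBucketLocation():
// an empty/null value means us-east-1, and the legacy "EU" means eu-west-1.
function regionFromLocationConstraint(constraint) {
  if (!constraint) return 'us-east-1';
  if (constraint === 'EU') return 'eu-west-1';
  return constraint;
}

// Sketch of usage with the real SDK:
// const { LocationConstraint } =
//   await s3.getBucketLocation({ Bucket: bucket }).promise();
// const s3Regional =
//   new AWS.S3({ region: regionFromLocationConstraint(LocationConstraint) });
```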
