AWS Lambda trigger on S3 bucket to Neptune: "Failed to start new load for the source"
I wrote a Lambda function that triggers Python code when a create event happens in S3; the Python script is supposed to read files from S3 and post them to the Neptune server.

When I test it, I get the following error:
```json
{
    "requestId": "xxxxxxxx-1234-5678-9012-xxxxxxxxxxxxx",
    "code": "ThrottlingException",
    "detailedMessage": "Failed to start new load for the source s3://my-s3-url/file.ttl. Max concurrent load limit breached. Limit is 1"
}
```
Code:
```python
import asyncio
import json

import aiohttp


async def post_async(neptune_url, data, headers):
    async with aiohttp.ClientSession() as session:
        async with session.post(neptune_url, data=data, headers=headers) as response:
            result = await response.text()
            return result


def lambda_handler(event, context):
    file_names = ["a.ttl", "b.ttl", "c.ttl"]
    source_url = "s3://my-s3.aws.com/"
    role = "my-role"
    neptune_url = "https://my-neptune-server.aws.com/loader"
    headers = {"Content-Type": "application/json"}
    for name in file_names:
        file = source_url + name
        data = {
            "source": file,
            "iamRoleArn": role,
            "region": "region-1",
            "failOnError": "FALSE",
            "format": "turtle",
        }
        loop = asyncio.get_event_loop()
        task = loop.create_task(post_async(neptune_url, json.dumps(data), headers))
        resp = loop.run_until_complete(task)
        print(resp)
```
I have tried both synchronous and asynchronous approaches, and found only limited documentation on the web. Can someone point me in the right direction?
According to the documentation, the maximum concurrent load limit is one.

So you may need to introduce some queueing into the upload process. Or it may be…
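One way to do that queueing yourself is to submit the loads one at a time and poll the loader's status endpoint (`GET /loader/<loadId>`) until each job finishes before starting the next. The sketch below assumes the same endpoint, files, and role as the question; the helper names `build_payload`, `start_load`, and `wait_for_load` are illustrative, not part of any API:

```python
import json
import time
import urllib.request

# Hypothetical loader endpoint -- replace with your Neptune cluster's URL.
NEPTUNE_LOADER = "https://my-neptune-server.aws.com:8182/loader"


def build_payload(source, role_arn, region):
    """Build a Neptune bulk-loader request body for one S3 source."""
    return {
        "source": source,
        "iamRoleArn": role_arn,
        "region": region,
        "failOnError": "FALSE",
        "format": "turtle",
    }


def start_load(payload):
    """POST one load job to the loader endpoint and return the parsed response."""
    req = urllib.request.Request(
        NEPTUNE_LOADER,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def wait_for_load(load_id, poll_seconds=10):
    """Poll GET /loader/<loadId> until the job leaves the in-progress states."""
    while True:
        with urllib.request.urlopen(f"{NEPTUNE_LOADER}/{load_id}") as resp:
            status = json.load(resp)["payload"]["overallStatus"]["status"]
        if status not in ("LOAD_NOT_STARTED", "LOAD_IN_PROGRESS"):
            return status
        time.sleep(poll_seconds)


if __name__ == "__main__":
    # Submit each file only after the previous load has finished,
    # so the "Limit is 1" cap is never exceeded.
    for name in ["a.ttl", "b.ttl", "c.ttl"]:
        payload = build_payload("s3://my-s3.aws.com/" + name, "my-role", "region-1")
        load_id = start_load(payload)["payload"]["loadId"]
        print(name, wait_for_load(load_id))
```

Because each `wait_for_load` call blocks until the current job completes, only one load is ever running against the cluster at a time.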
According to the documentation at https://docs.aws.amazon.com/neptune/latest/userguide/load-api-reference-load.html:
> Max concurrent load limit breached (HTTP 400)
>
> If a load request is submitted without `"queueRequest" : "TRUE"`, and a load job is currently running, the request will fail with this error.
You can add the following field to your payload:
```python
data = {
    "source": file,
    "iamRoleArn": role,
    "region": "region-1",
    "failOnError": "FALSE",
    "format": "turtle",
    "queueRequest": "TRUE"
}
```
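With that flag set, each POST returns immediately with a `loadId` even while another load is running, and Neptune queues the jobs and runs them in order (the documentation caps the queue at 64 pending requests). A minimal helper that builds such a payload (the name `queued_payload` is illustrative, not from the original post):

```python
def queued_payload(source, role_arn, region):
    """Loader payload that asks Neptune to queue the job instead of
    rejecting it with "Max concurrent load limit breached"."""
    return {
        "source": source,
        "iamRoleArn": role_arn,
        "region": region,
        "failOnError": "FALSE",
        "format": "turtle",
        # Queue this request if another bulk load is already running.
        "queueRequest": "TRUE",
    }
```

Your existing loop can then POST each payload without waiting for the previous load to finish, since Neptune itself serializes the queued jobs.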