
While connecting AWS Lambda with Redis, getting "Task timed out after 23.02 seconds" error

In my project, I want to connect a Lambda function to Redis storage, but while making the connection I get a task timed out error, even though I have connected the private subnet to a NAT gateway.

Python code:

import json
import boto3
import math
import redis

# Connect to ElastiCache (the client name avoids shadowing the redis module)
redis_client = redis.Redis(
    host='redisconnection.sxxqwc.ng.0001.use1.cache.amazonaws.com',
    port=6379,
)

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # bucket = event['Records'][0]['s3']['bucket']['name']    # if allocated dynamically
    # key = event['Records'][0]['s3']['object']['key']        # if searched dynamically
    bucket = "aws-trigger1"
    key = "unigram1.csv"

    response = s3.head_object(Bucket=bucket, Key=key)
    fileSize = response['ContentLength']          # size in bytes
    fileSizeMB = fileSize / 1048576
    print("FileSize = " + str(fileSizeMB) + " MB")

    # redis_client.rpush(fileSize)
    redis_client.ping()     # was `redis.ping` -- without parentheses no call is made
    redis_client.set('foo', 'bar')

    obj = s3.get_object(Bucket=bucket, Key=key)
    file_content = obj["Body"].read().decode("utf-8")

    # Calculate the chunk size (in bytes, to match the thresholds below)
    MAPPERNUMBER = 2
    MINBLOCKSIZE = 1024
    chunkSize = int(fileSize / MAPPERNUMBER)
    numberMappers = MAPPERNUMBER
    if chunkSize < MINBLOCKSIZE:
        print("chunk size too small (" + str(chunkSize) + " bytes), changing to " + str(MINBLOCKSIZE) + " bytes")
        chunkSize = MINBLOCKSIZE
        numberMappers = int(fileSize / chunkSize) + 1
    residualData = fileSize - (MAPPERNUMBER - 1) * chunkSize
    # print("numberMappers--", residualData)

    # Ensure that the chunk size is smaller than the Lambda function's memory
    MEMORY = 1536
    memoryLimit = 0.30
    secureMemorySize = int(MEMORY * memoryLimit)
    if chunkSize > secureMemorySize:
        print("chunk size too large (" + str(chunkSize) + " bytes), changing to " + str(secureMemorySize) + " bytes")
        chunkSize = secureMemorySize
        numberMappers = int(fileSize / chunkSize) + 1

    # print("Using chunk size of " + str(chunkSize) + " bytes, and " + str(numberMappers) + " nodes")

    # Remove the header row from the data
    file_content = file_content.split('\n', 1)[-1]
    # X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)

    # Split the content in half, one part per mapper
    train_pct_index = int(0.5 * len(file_content))
    X_Map1, X_Map2 = file_content[:train_pct_index], file_content[train_pct_index:]

    linelen = file_content.find('\n')
    if linelen < 0:
        print("\\n not found in mapper chunk")
        return
    extraRange = 2 * (linelen + 20)
    initRange = fileSize + 1
    limitRange = fileSize + extraRange
    # chunkRange = 'bytes=' + str(initRange) + '-' + str(limitRange)

    # Invoke mappers
    invokeLam = boto3.client("lambda", region_name="us-east-1")
    payload = X_Map1
    payload2 = X_Map2
    # resp = invokeLam.invoke(FunctionName="map1", InvocationType="RequestResponse", Payload=json.dumps(payload))
    # resp2 = invokeLam.invoke(FunctionName="map2", InvocationType="RequestResponse", Payload=json.dumps(payload2))

    return file_content
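As an aside, the chunk-sizing rules in the handler can be pulled out into a standalone helper, which makes them easy to test outside Lambda. This is a sketch of the same clamping logic; the function name `plan_chunks` and its keyword defaults are mine, not from the original code:

```python
def plan_chunks(file_size, mappers=2, min_block=1024, memory_mb=1536, memory_limit=0.30):
    """Clamp the per-mapper chunk size between a minimum block size and a
    'secure' fraction of the Lambda memory, recomputing the mapper count
    whenever the chunk size is adjusted."""
    chunk = file_size // mappers
    n = mappers
    if chunk < min_block:                 # too small: grow to the minimum block
        chunk = min_block
        n = file_size // chunk + 1
    secure = int(memory_mb * memory_limit)
    if chunk > secure:                    # too large: shrink to the memory cap
        chunk = secure
        n = file_size // chunk + 1
    return chunk, n
```

For example, a 10 000-byte file with the defaults is capped by the memory limit (1536 × 0.30 = 460 bytes per chunk), so the mapper count grows well beyond 2.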

VPC configuration of the Lambda function

You may be receiving the timeout when trying to retrieve the object from S3. Once a Lambda function is attached to a VPC it loses direct internet access, so S3 calls hang unless they are routed through an endpoint. Check whether an Amazon S3 endpoint is configured in your VPC: https://docs.aws.amazon.com/glue/latest/dg/vpc-endpoints-s3.html
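Assuming the AWS CLI is available, a gateway endpoint for S3 could be created with something like the following; the VPC and route-table IDs are placeholders that must be replaced with the ones from the Lambda function's VPC configuration:

```shell
# Create a gateway VPC endpoint so the in-VPC Lambda can reach S3
# without traversing a NAT gateway. IDs below are placeholders.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0
```

The route table chosen must be the one associated with the private subnets the Lambda function runs in, so that S3 traffic is routed to the endpoint.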
