
AWS Lambda: Redis ElastiCache connection timeout error

I have a Lambda function using Node.js 12.

I need to add a new connection to a Redis database hosted in AWS ElastiCache.

Both are in the same private VPC, and the security groups/subnets are configured properly.

My current setup:

globals.js:

const redis = require('redis');
const redisClient = redis.createClient(
  `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}/${process.env.REDIS_DB}`,
);
redisClient.on('error', (err) => {
  console.log('REDIS CLIENT ERROR:' + err);
});
module.exports.globals = {
  REDIS: require('../helpers/redis')(redisClient),
};

index.js (outside handler):

const { globals } = require('./config/globals');
global.app = globals;

const lambda_handler = (event, context, callback) => { ... }
exports.handler = lambda_handler;
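Because the client in globals.js is created outside the handler, its open socket keeps the Node.js event loop non-empty, and a callback-style handler by default waits for the event loop to drain before completing. A commonly suggested mitigation when reusing connections across invocations is to tell the runtime to return as soon as the callback fires. A minimal sketch (the handler body and response shape are illustrative, not the asker's actual code):

```javascript
// Sketch, assuming the wiring from index.js above.
const lambda_handler = (event, context, callback) => {
  // Don't wait for the event loop (and the idle Redis socket) to drain
  // before completing the invocation.
  context.callbackWaitsForEmptyEventLoop = false;

  // ... app.REDIS.get(...) work goes here ...
  callback(null, { statusCode: 200 });
};
exports.handler = lambda_handler;
```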

helpers/redis/index.js:

const get = require('./get');
module.exports = (redisClient) => {
  return {
    get:  get(redisClient)
  };
};

helpers/redis/get.js:

module.exports = (redisClient) => {
  return (key, cb) => {
    redisClient.get(key, (err, reply) => {
      if (err) {
        cb(err);
      } else {
        cb(null, reply);
      }
    });
  };
};

Function call:

app.REDIS.get(redisKey, (err, reply) => {
  console.log(`REDIS GET: ${err} ${reply}`);
});

Problem: When I increase the Lambda timeout to a value greater than the Redis timeout, I get this error:

REDIS CLIENT ERROR:Error: Redis connection to... failed - connect ETIMEDOUT...
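One way to keep an unreachable ElastiCache node from stalling the function until the Lambda timeout is to bound the client's connect/retry behaviour. A sketch using node_redis v3's `retry_strategy` and `connect_timeout` options (the thresholds and the env-var wiring are assumptions, not values from the question):

```javascript
// A minimal retry_strategy sketch for node_redis v3. Returning a number
// retries after that many milliseconds; returning an Error stops retrying
// and surfaces the failure to the client's 'error' handler.
function retryStrategy(options) {
  if (options.error && options.error.code === 'ETIMEDOUT') {
    // Surface connect timeouts instead of retrying indefinitely.
    return new Error('Redis connect timed out');
  }
  if (options.attempt > 5) {
    return new Error('Retry attempts exhausted');
  }
  // Exponential backoff, capped at 3 seconds.
  return Math.min(options.attempt * 100, 3000);
}

// Wiring it up (sketch, mirroring globals.js above):
// const redisClient = redis.createClient({
//   url: `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}/${process.env.REDIS_DB}`,
//   connect_timeout: 5000, // fail fast instead of hanging until the Lambda times out
//   retry_strategy: retryStrategy,
// });
```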

Addition:

I tried quitting/closing the connection after each transaction:

module.exports = (redisClient) => {
  return (cb) => {
    redisClient.quit((err, reply) => {
      if (err) {
        cb(err);
      } else {
        cb(null, reply);
      }
    });
  };
};
app.REDIS.get(redisKey, (err, reply) => {
  console.log(`REDIS GET: ${err} ${reply}`);
  if (err) {
    cb(err);
  } else {
    if (reply) {
      app.REDIS.quit(() => {
        cb();
      });
    }
  }
});

Error:

REDIS GET: AbortError: GET can't be processed. The connection is already closed.

Extra Notes:

  • I have to use callbacks, which is why I pass them in the examples above
  • I'm using "redis": "^3.0.2"
  • It's not a configuration issue, as the cache was accessed hundreds of times in a short period of time before it started giving the timeout errors.
  • Everything works normally locally

Answer (quoting the note above):

"It's not a configuration issue, as the cache was accessed hundreds of times in a short period of time before it started giving the timeout errors."

I think this is the origin of the issue: the Redis database has probably hit its memory limit and cannot accept new data.

Can you delete old data from it?

It is also possible that ElastiCache has a limit on new TCP client connections; if it is exhausted, new connections are refused with an error message similar to the one you mentioned.

If the Redis client in an AWS Lambda function cannot establish a connection, the Lambda function fails and a new one is started. The new Lambda function makes one more connection attempt to Redis, Redis cannot process it, yet another Lambda function is started, and so on.

So at some point we hit the limit on active Redis connections, and the system is effectively deadlocked.

I think you can temporarily stop all Lambda functions and scale up the ElastiCache Redis instance.
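If a Lambda connection storm really is exhausting the ElastiCache connection limit, one way to break the loop without stopping the functions entirely is to cap the function's reserved concurrency, which bounds how many containers (and therefore Redis connections) can exist at once. A sketch using the AWS CLI (the function name and limit are illustrative assumptions):

```shell
# Cap the function at 10 concurrent containers, bounding the number of
# simultaneous Redis connections it can open.
aws lambda put-function-concurrency \
  --function-name my-redis-function \
  --reserved-concurrent-executions 10
```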
