
Redis default number of connections in pool in Python

Python - 3.7

Redis - 2.10.6


I am creating a connection pool for Redis using

redis_pool = redis.ConnectionPool(host=REDIS_URL, port=REDIS_PORT, decode_responses=True) 

I didn't specify max_connections. While looking at the source code for redis.ConnectionPool(),

def __init__(self, connection_class=Connection, max_connections=None,
             **connection_kwargs):
    """
    Create a connection pool. If max_connections is set, then this
    object raises redis.ConnectionError when the pool's limit is reached.

    By default, TCP connections are created unless connection_class is
    specified. Use redis.UnixDomainSocketConnection for unix sockets.

    Any additional keyword arguments are passed to the constructor of
    connection_class.
    """
    max_connections = max_connections or 2 ** 31
    if not isinstance(max_connections, (int, long)) or max_connections < 0:
        raise ValueError('"max_connections" must be a positive integer')

    self.connection_class = connection_class
    self.connection_kwargs = connection_kwargs
    self.max_connections = max_connections

    self.reset()

I see that max_connections is set to 2 ** 31, i.e. 2,147,483,648, when not specified, which seems weird to me.

What is the default number of connections Redis maintains in the pool? The maximum is around 2 billion, so it seems we must pass our own practical value for it.
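For reference, the fallback logic in the constructor can be isolated into a tiny self-contained sketch (the function name effective_max_connections is mine, not redis-py's):

```python
def effective_max_connections(max_connections=None):
    # Mirrors the pool's default: fall back to 2 ** 31 when unset
    max_connections = max_connections or 2 ** 31
    if not isinstance(max_connections, int) or max_connections < 0:
        raise ValueError('"max_connections" must be a positive integer')
    return max_connections

print(effective_max_connections())    # 2147483648
print(effective_max_connections(50))  # 50
```

Passing max_connections=50 (or any positive integer) to redis.ConnectionPool caps the pool at that size instead of the huge default.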

The pool doesn't exist on the Redis side; this class is really just a fancy collection of self.connection_class instances on the Python side.

Agree with you, though: 99+% of the time that 2**31 number is unnecessarily huge. I don't think it's too concerning, because initializing the pool doesn't create any connections (or reserve space for them). max_connections only bounds how many connections the pool will ever create; a new connection is made lazily, only when one is demanded and no idle connection is available for immediate use.

Here's some more of the ConnectionPool class with a few notes.

https://github.com/andymccurdy/redis-py/blob/master/redis/connection.py#L967

def reset(self):
    self.pid = os.getpid()
    self._created_connections = 0
    self._available_connections = []  # <- starts empty
    self._in_use_connections = set()
    self._check_lock = threading.Lock()

https://github.com/andymccurdy/redis-py/blob/master/redis/connection.py#L983

def get_connection(self, command_name, *keys, **options):
    "Get a connection from the pool"
    self._checkpid()
    try:
        connection = self._available_connections.pop()
    except IndexError:
        connection = self.make_connection()  # <- make a new conn only if _available_connections is tapped
    self._in_use_connections.add(connection)
    try:
        # ensure this connection is connected to Redis
        connection.connect()
        # connections that the pool provides should be ready to send
        # a command. if not, the connection was either returned to the
        # pool before all data has been read or the socket has been
        # closed. either way, reconnect and verify everything is good.
        if not connection.is_ready_for_command():
            connection.disconnect()
            connection.connect()
            if not connection.is_ready_for_command():
                raise ConnectionError('Connection not ready')
    except:  # noqa: E722
        # release the connection back to the pool so that we don't leak it
        self.release(connection)
        raise

    return connection
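The pop-or-make logic above, together with release() putting a connection back into the idle list, is what makes connections get reused. A minimal, library-free sketch of that behavior (TinyPool and FakeConnection are my stand-ins, not redis-py classes):

```python
class FakeConnection:
    """Stand-in for redis.Connection; just tracks connect state."""
    def __init__(self):
        self.connected = False

    def connect(self):
        self.connected = True


class TinyPool:
    """Toy version of ConnectionPool's pop-or-make and release logic."""
    def __init__(self, connection_class=FakeConnection):
        self.connection_class = connection_class
        self._created_connections = 0
        self._available_connections = []
        self._in_use_connections = set()

    def get_connection(self):
        try:
            connection = self._available_connections.pop()  # reuse an idle one
        except IndexError:
            self._created_connections += 1                  # or lazily make a new one
            connection = self.connection_class()
        self._in_use_connections.add(connection)
        connection.connect()
        return connection

    def release(self, connection):
        self._in_use_connections.discard(connection)
        self._available_connections.append(connection)


pool = TinyPool()
c1 = pool.get_connection()
pool.release(c1)
c2 = pool.get_connection()  # reuses c1; no second connection is created
print(c1 is c2, pool._created_connections)  # True 1
```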

https://github.com/andymccurdy/redis-py/blob/master/redis/connection.py#L1019

def make_connection(self):
    "Create a new connection"
    if self._created_connections >= self.max_connections:  # <- where the bounding happens
        raise ConnectionError("Too many connections")
    self._created_connections += 1
    return self.connection_class(**self.connection_kwargs)
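To see the bound in action without a Redis server, here's a stripped-down sketch of just the counting logic (BoundedPool is my name; it raises Python's built-in ConnectionError, whereas redis-py raises its own redis.ConnectionError):

```python
class BoundedPool:
    """Toy version of make_connection's max_connections bound."""
    def __init__(self, max_connections=None):
        self.max_connections = max_connections or 2 ** 31
        self._created_connections = 0

    def make_connection(self):
        if self._created_connections >= self.max_connections:
            raise ConnectionError("Too many connections")
        self._created_connections += 1
        return object()  # stand-in for a real Connection instance


pool = BoundedPool(max_connections=2)
pool.make_connection()
pool.make_connection()
try:
    pool.make_connection()  # third connection exceeds the cap
except ConnectionError:
    print("pool limit reached")  # pool limit reached
```

With no argument, BoundedPool().max_connections is 2 ** 31, mirroring the default shown above.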

Anyway, I'd bet that particular value was chosen to make it unlikely a developer ever exhausts the pool. Note that the connection objects are extremely lightweight, so an array of thousands or even millions of them is unlikely to grind your app to a halt. And in practice it shouldn't make a difference: most Redis calls return so quickly that you'd be hard-pressed to accidentally kick off millions of them in parallel anyway. (And if you're doing it on purpose, you probably know enough to tune everything to your exact needs. ;-)
