
node-postgres: ECONNREFUSED when there are too many connections

I've developed a simple resource for my Node.js Express app. I'm using brianc's node-postgres client to handle connections to my Postgres database. Following the async/await style from the documentation, my users.js controller looks like this:

code:

const { Pool } = require('pg');
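// With no config object, Pool reads the standard libpq environment
// variables (PGHOST, PGPORT, PGUSER, PGPASSWORD, PGDATABASE).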
const pool = new Pool();

module.exports = {
    index: async (req, res, next) => {
        try {
            const dbResult = await pool.query(`select * from users`, []);
            //do stuff with the data here and return response
        } catch (err) {
            next(err);
        }
    }
}

I believe the resource is pretty simple, and it works fine. However, I wanted to load test the server on this endpoint using loadtest, with a concurrency of 100 at 200 requests/second:

loadtest:

loadtest -H authorization:(sometoken) -c 100 --rps 200 http://localhost:3000/users

I would run my node app and watch the console while the load test ran. The first couple of hundred requests would come back with status 200, but soon most requests would start failing with ECONNREFUSED. The error rate usually settles at about 62% of total requests (a lower rate would of course be fine).


initial findings:

I've done a lot of googling and trial and error. I've found that ECONNREFUSED is the result of the connection limit being maxed out. But I thought connection pooling was supposed to handle that, since it recycles/reuses clients/connections.
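To confirm whether the pool itself is the choke point (rather than Postgres), node-postgres exposes live counters on the Pool instance. A minimal sketch; the one-second interval is just my choice:

const { Pool } = require('pg');
const pool = new Pool();

// totalCount   - clients created so far (checked out + idle)
// idleCount    - clients sitting idle in the pool
// waitingCount - queries queued, waiting for a free client
setInterval(() => {
    console.log(`pool: total=${pool.totalCount} idle=${pool.idleCount} waiting=${pool.waitingCount}`);
}, 1000);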

I've also experimented with postgresql.conf, setting max_connections = n and shared_buffers = nGB; I tried different configurations and none of them helped, the error rate stayed the same.
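As a sanity check during the test, you can also ask Postgres itself how close it is to its connection ceiling. The SQL below is standard; wrapping it in a helper like this is just for illustration:

// Compare the live backend count against the server's configured limit.
async function connectionHeadroom(pool) {
    const { rows } = await pool.query(
        `select (select count(*) from pg_stat_activity) as in_use,
                current_setting('max_connections') as max_allowed`
    );
    return rows[0]; // e.g. { in_use: '87', max_allowed: '100' }
}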

I tried different styles (promise-based, callback) and still got the same error rate in the load test. Tinkering with the connectionTimeoutMillis parameter made no difference either, whatever value I used.

I believe breaking down at 200 rps with 100 concurrency is bad for a resource as simple as a single select query. If it's a machine limitation, I'm on a fairly capable machine: a Core i7-6700 with 16GB of DDR4. Did I miss something, or am I doing something wrong? Thanks!

Check your pool configuration as described in node-pool. You should set max = 100 to cope with your concurrency of 100, assuming your database can accept 101 connections (the 100 concurrent clients plus a query browser for the dev :) ).
For fine tuning so requests don't sit around waiting on resources, acquireTimeoutMillis and maxWaitingClients are good values to look at.
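A minimal sketch of what that tuning looks like with node-postgres's own Pool options (the numbers are sized for this particular 100-concurrency test, not general advice):

const { Pool } = require('pg');

const pool = new Pool({
    // node-postgres defaults to max: 10, which leaves 90 of the 100
    // concurrent requests queueing for a free client. Size it to the load.
    max: 100,
    // Fail fast instead of queueing indefinitely when no client frees up.
    connectionTimeoutMillis: 2000,
    // Return idle clients to Postgres once the burst is over.
    idleTimeoutMillis: 30000,
});

Keep max_connections in postgresql.conf above the pool's max, or Postgres will start refusing the extra connections itself.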
