
Load balancing of network requests between replicas of services in docker swarm mode

I cannot find any documentation on the load-balancing algorithm used between replicas of a service in Docker swarm mode.

I created an image mynodeapp from the following Dockerfile:

FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
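
The Dockerfile copies a package.json that is not shown in the question. A minimal sketch of what it might contain (the express version and the start script are assumptions), written as a shell heredoc so it can be pasted directly:

cat > package.json <<'EOF'
{
  "name": "mynodeapp",
  "version": "1.0.0",
  "scripts": { "start": "node server.js" },
  "dependencies": { "express": "^4.17.1" }
}
EOF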

npm start runs the following server.js:

const LATENCY = 5000; // artificial delay in milliseconds
var app = require('express')();

app.get('/', (req, res) => {
        console.log('Sending response');
        // reply only after the artificial latency has elapsed
        setTimeout(function () {
                res.send('All ok');
        }, LATENCY);
});
app.listen(8080);

The code just sends All ok after a 5-second delay and prints Sending response to the console.
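
Before moving to swarm mode, the image can be built and sanity-checked on its own; the tag mynodeapp matches the image name used later, and the container name mynodeapp-test is just for illustration:

# build the image from the Dockerfile above
docker build -t mynodeapp .

# run a single container and hit it directly
docker run -d --name mynodeapp-test -p 8080:8080 mynodeapp
curl localhost:8080      # prints "All ok" after about 5 seconds
docker rm -f mynodeapp-test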

Now, I initialize Docker swarm mode:

docker swarm init --advertise-addr eth0
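
To confirm the node really is a swarm manager, something like the following can be used (the Go-template field name is taken from the docker info output and should print "active"):

docker info --format '{{.Swarm.LocalNodeState}}'
docker node ls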

Then I create a service with two replicas:

docker service create --replicas 2 --publish 8080:8080 mynodeapp
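
Since no --name is given, Docker generates a random service name (amazing_hypatia below). The same command with an explicit name would look like this:

docker service create \
  --name mynodeapp \
  --replicas 2 \
  --publish 8080:8080 \
  mynodeapp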

The result is:

root@man1:~# docker service ls
ID            NAME             REPLICAS  IMAGE      COMMAND
233z44bz6sx0  amazing_hypatia  2/2       mynodeapp

root@man1:~# docker ps
CONTAINER ID        PORTS               NAMES
1f36e0c9eb37        8080/tcp            amazing_hypatia.1.453u2upnyf2nvtwxouopv4olk
f0fb099a5154        8080/tcp            amazing_hypatia.2.8lbs461uhiv2qvh28433ayi0g
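
docker ps only lists the containers on the local node; to see every task of the service across the whole swarm, docker service ps can be used with the generated service name:

docker service ps amazing_hypatia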

Now, I open two terminals and follow the logs of both containers:

docker logs amazing_hypatia.1.453u2upnyf2nvtwxouopv4olk -f

and

docker logs amazing_hypatia.2.8lbs461uhiv2qvh28433ayi0g -f

When I run

curl localhost:8080

I see Sending response appear in one terminal on one request and in the other terminal on the next, so it looks like round-robin load balancing is being used.
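
A quick way to watch the alternation is to fire several requests in a row (each one takes about 5 seconds because of the artificial latency):

for i in $(seq 1 6); do
    curl -s localhost:8080    # "Sending response" should alternate between the two log terminals
    echo
done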

But what is the actual load-balancing algorithm?

The algorithm is currently a simple round-robin. There have been suggestions to add a fastest-expected-response-time algorithm, which would solve the problem of requests being routed to another Docker host when the service is also running locally, but this has not been implemented yet.


From Docker's swarm networking docs:

The swarm load balancer automatically routes the HTTP request to the service's VIP to an active task. It distributes subsequent requests to other tasks using round-robin selection.

The comments about using fastest expected response are from some DockerCon 2016 videos that I can't pull up right now.
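
To see the VIP that the docs refer to, the service endpoint can be inspected; the field name below comes from the docker service inspect JSON, and the addresses will of course differ on your system:

docker service inspect \
    --format '{{json .Endpoint.VirtualIPs}}' \
    amazing_hypatia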
