
Is this Docker / NGINX / Node setup actually load balancing as expected?

I am setting up a web server using Docker, Node, and Nginx. I've been experimenting with the setup in docker-compose and have arrived at two working solutions. With regard to load balancing, though, one of them may be too good to be true, since it seemingly lets me save space by not creating additional images and containers. I'm looking for verification that what I'm seeing is legitimate, and that multiple images/containers are not a requirement for load balancing.

Solution 1 (no additional images):

docker-compose.yml

version: '3'

volumes:
  node_deps:

services:
  nginx:
    build: ./nginx
    image: nginx_i
    container_name: nginx_c
    ports:
        - '80:80'
        - '443:443'
    links:
        - node
    restart: always
  node:
    build: ./node
    image: node_i
    container_name: node_c
    command: "npm start"
    ports:
      - '5000:5000'
      - '5001:5001'
      - '5500:5000'
      - '5501:5001' 
    volumes:
      - ./node:/src
      - node_deps:/src/node_modules

nginx.conf

http {
  ...

  upstream shopster-node {
    server node:5000 weight=10 max_fails=3 fail_timeout=30s;
    server node:5500 weight=10 max_fails=3 fail_timeout=30s;
    keepalive 64;
  }

  server {
    ...
  }

} 

Solution 2 (has additional images):

version: '3'

volumes:
  node_deps:

services:
  nginx:
    build: ./nginx
    image: nginx_i
    container_name: nginx_c
    ports:
        - '80:80'
        - '443:443'
    links:
        - node_one
        - node_two
    restart: always
  node_one:
    build: ./node
    image: node_one_i
    container_name: node_one_c
    command: "npm start"
    ports:
      - '5000:5000'
      - '5001:5001'
    volumes:
      - ./node:/src
      - node_deps:/src/node_modules
  node_two:
    build: ./node
    image: node_two_i
    container_name: node_two_c
    command: "npm start"
    ports:
      - '5500:5000'
      - '5501:5001'
    volumes:
      - ./node:/src
      - node_deps:/src/node_modules

nginx.conf

http {
  ...

  upstream shopster-node {
    server node_one:5000 weight=10 max_fails=3 fail_timeout=30s;
    server node_two:5500 weight=10 max_fails=3 fail_timeout=30s;
    keepalive 64;
  }

  server {
    ...
  }

} 

Both scenarios serve the app perfectly, on localhost and on the specified ports. I am confident that scenario 2 is load balancing properly, as it mimics a traditional multi-server setup.

Is there any way I can verify that scenario 1 is actually load balancing as expected? This would be my preferred approach; I just need to know I can trust it.
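One general way to check either scenario (a sketch; the /whoami route is a hypothetical addition to the Node app, not something in the configs above) is to have each Node process return a per-instance identifier, e.g. os.hostname() or process.pid, and then tally the responses that come back through the proxy:

```shell
# Assumes a hypothetical /whoami route was added to the Node app that
# responds with os.hostname() (unique per container) or process.pid.
# If requests are genuinely being balanced, more than one identifier
# should appear in the tally below.
for i in $(seq 1 20); do
  curl -s http://localhost/whoami
  echo
done | sort | uniq -c
```

If every response carries the same identifier, all traffic is landing on a single process and no balancing is happening.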

Yes: use the jwilder/nginx-proxy image. You can add many workers without any extra configuration.

See the documentation: https://hub.docker.com/r/jwilder/nginx-proxy/

It's very simple to configure, and you can scale with docker-compose scale [service_name]=[num]

The configuration looks like this:

version: '3'

volumes:
  node_deps:

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

  node:
    build: ./node
    image: node_i
    command: "npm start"
    ports:
      - '5000'
    volumes:
      - ./node:/src
      - node_deps:/src/node_modules
    environment:
      - VIRTUAL_HOST=whoami.local

To run and test the containers:

$ docker-compose up
$ docker-compose scale node=2
$ curl -H "Host: whoami.local" localhost

Run docker-compose up -d on scenario 1. Then use docker-compose scale to add additional node containers.

docker-compose scale node=5

This will spin up four more node containers alongside the existing one. If you then run:

docker-compose scale node=2

it will remove 3 of the node containers, leaving you with 2.
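Note that scaling scenario 1 requires first removing the fixed container_name: node_c from the node service, since Compose cannot create several containers sharing one name. With that removed, a rough check of the balancing (a sketch, assuming the Node app logs each incoming request) is:

```shell
docker-compose up -d
docker-compose scale node=3          # fails unless container_name is removed
for i in $(seq 1 10); do curl -s -o /dev/null http://localhost/; done
# Each log line is prefixed with the replica that produced it
# (e.g. project_node_1, project_node_2, ...), so the spread of prefixes
# shows which replicas actually handled traffic.
docker-compose logs --tail=30 node
```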
