Load balance docker swarm
I have a docker swarm mode with one HAProxy container and 3 python web apps. The container with HAProxy exposes port 80 and should load balance across the 3 containers of my app (by leastconn).
Here is my docker-compose.yml file:
version: '3'
services:
  scraper-node:
    image: scraper
    ports:
      - 5000
    volumes:
      - /profiles:/profiles
    command: >
      bash -c "
      cd src;
      gunicorn src.interface:app \
      --bind=0.0.0.0:5000 \
      --workers=1 \
      --threads=1 \
      --timeout 500 \
      --log-level=debug \
      "
    environment:
      - SERVICE_PORTS=5000
    deploy:
      replicas: 3
      update_config:
        parallelism: 5
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s
    networks:
      - web
  proxy:
    image: dockercloud/haproxy
    depends_on:
      - scraper-node
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - web
networks:
  web:
    driver: overlay
When I deploy this swarm (docker stack deploy --compose-file=docker-compose.yml scraper) I get all of my containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
245f4bfd1299 scraper:latest "/docker-entrypoin..." 21 hours ago Up 19 minutes 80/tcp, 5000/tcp, 8000/tcp scraper_scraper-node.3.iyi33hv9tikmf6m2wna0cypgp
995aefdb9346 scraper:latest "/docker-entrypoin..." 21 hours ago Up 19 minutes 80/tcp, 5000/tcp, 8000/tcp scraper_scraper-node.2.wem9v2nug8wqos7d97zknuvqb
a51474322583 scraper:latest "/docker-entrypoin..." 21 hours ago Up 19 minutes 80/tcp, 5000/tcp, 8000/tcp scraper_scraper-node.1.0u8q4zn432n7p5gl93ohqio8e
3f97f34678d1 dockercloud/haproxy "/sbin/tini -- doc..." 21 hours ago Up 19 minutes 80/tcp, 443/tcp, 1936/tcp scraper_proxy.1.rng5ysn8v48cs4nxb1atkrz73
And when I display the haproxy container log, it looks like it recognizes the 3 python containers:
INFO:haproxy:dockercloud/haproxy 1.6.6 is running outside Docker Cloud
INFO:haproxy:Haproxy is running in SwarmMode, loading HAProxy definition through docker api
INFO:haproxy:dockercloud/haproxy PID: 6
INFO:haproxy:=> Add task: Initial start - Swarm Mode
INFO:haproxy:=> Executing task: Initial start - Swarm Mode
INFO:haproxy:==========BEGIN==========
INFO:haproxy:Linked service: scraper_scraper-node
INFO:haproxy:Linked container: scraper_scraper-node.1.0u8q4zn432n7p5gl93ohqio8e, scraper_scraper-node.2.wem9v2nug8wqos7d97zknuvqb, scraper_scraper-node.3.iyi33hv9tikmf6m2wna0cypgp
INFO:haproxy:HAProxy configuration:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    log-send-hostname
    maxconn 4096
    pidfile /var/run/haproxy.pid
    user haproxy
    group haproxy
    daemon
    stats socket /var/run/haproxy.stats level admin
    ssl-default-bind-options no-sslv3
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
defaults
    balance leastconn
    log global
    mode http
    option redispatch
    option httplog
    option dontlognull
    option forwardfor
    timeout connect 5000
    timeout client 50000
    timeout server 50000
listen stats
    bind :1936
    mode http
    stats enable
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth stats:stats
frontend default_port_80
    bind :80
    reqadd X-Forwarded-Proto:\ http
    maxconn 4096
    default_backend default_service
backend default_service
    server scraper_scraper-node.1.0u8q4zn432n7p5gl93ohqio8e 10.0.0.5:5000 check inter 2000 rise 2 fall 3
    server scraper_scraper-node.2.wem9v2nug8wqos7d97zknuvqb 10.0.0.6:5000 check inter 2000 rise 2 fall 3
    server scraper_scraper-node.3.iyi33hv9tikmf6m2wna0cypgp 10.0.0.7:5000 check inter 2000 rise 2 fall 3
INFO:haproxy:Launching HAProxy
INFO:haproxy:HAProxy has been launched(PID: 12)
INFO:haproxy:===========END===========
But when I try to GET http://localhost I get an error message:
<html>
<body>
<h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body>
</html>
There were two problems:
1. The command in the docker-compose.yml file should be one line.
2. The scraper image should expose port 5000 (in its Dockerfile).
Once I fixed those, I deployed this swarm the same way (with stack) and the proxy container recognized the python containers and was able to load balance between them.
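For concreteness, here is a sketch of the fixed service definition, assuming the same image and gunicorn flags as in the question; only the multi-line command changes:

```yaml
# Fixed fragment of docker-compose.yml (sketch): the command is
# collapsed into a single line instead of a folded multi-line block.
services:
  scraper-node:
    image: scraper
    command: bash -c "cd src; gunicorn src.interface:app --bind=0.0.0.0:5000 --workers=1 --threads=1 --timeout 500 --log-level=debug"
```

The second fix lives in the image itself: the scraper Dockerfile needs an `EXPOSE 5000` line so the proxy can discover which port to route to.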
A 503 error usually means a failed health check to the backend server.
Your stats page might be helpful here: if you mouse over the LastChk column of one of your DOWN backend servers, HAProxy will give you a vague summary of why that server is DOWN.
It does not look like you configured a health check (option httpchk) for your default_service backend: can you reach any of your backend servers directly (e.g. curl --head 10.0.0.5:5000)? From the HAProxy documentation:
[R]esponses 2xx and 3xx are considered valid, while all other ones indicate a server failure, including the lack of any response.
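As an illustration, an HTTP health check could be added to the generated backend like this (a hand-written sketch, not something dockercloud/haproxy emits by itself; the GET / path is an assumption about what the app answers with a 2xx):

```
backend default_service
    # Probe GET / instead of only opening a TCP connection (assumed path)
    option httpchk GET /
    server scraper_scraper-node.1.0u8q4zn432n7p5gl93ohqio8e 10.0.0.5:5000 check inter 2000 rise 2 fall 3
```

With this in place, a server is only marked UP when the probe returns a 2xx/3xx response, which makes the stats page diagnosis above much more informative.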