[英]Best way to manage docker containers with supervisord
I have to set up "dockerized" environments (integration, QA and production) on the same server (client's requirement). Each environment will be composed as follows:
On top of them, Jenkins will handle the deployment based on CI.
Using a set of containers per environment sounds like the best approach.
But now I need a process manager to run and supervise all of them:
Supervisord seems to be the best choice, but during my tests I'm not able to "properly" restart a container. Here is a snippet of the supervisord.conf:
[program:docker-rabbit]
command=/usr/bin/docker run -p 5672:5672 -p 15672:15672 tutum/rabbitmq
startsecs=20
autorestart=unexpected
exitcodes=0,1
stopsignal=KILL
So I wonder what is the best way to separate each environment and to be able to manage and supervise each service (a container).
[EDIT: my solution, inspired by Thomas's response]
Each container is run by a .sh script that looks like the following (rabbit-integration.sh):
#!/bin/bash
#set -x
SERVICE="rabbitmq"
SH_S="/path/to_shs"
export MY_ENV="integration"
. $SH_S/env_.sh
. $SH_S/utils.sh
SERVICE_ENV=$SERVICE-$MY_ENV
ID_FILE=/tmp/$SERVICE_ENV.name # file holding the container name
trap stop SIGHUP SIGINT SIGTERM # trap signal for calling the stop function
run_rabbitmq
$SH_S/env_.sh looks like:
# set env variable
...
case $MY_ENV in
$INTEGRATION)
AMQP_PORT="5672"
AMQP_IP="172.17.42.1"
...
;;
$PREPRODUCTION)
AMQP_PORT="5673"
AMQP_IP="172.17.42.1"
...
;;
$PRODUCTION)
AMQP_PORT="5674"
AMQP_IP="172.17.42.1"
...
esac
$SH_S/utils.sh looks like:
#!/bin/bash
function random_name(){
echo "$SERVICE_ENV-$(cat /proc/sys/kernel/random/uuid)"
}
function stop (){
echo "stopping docker container..."
/usr/bin/docker stop "$(cat "$ID_FILE")"
}
function run_rabbitmq (){
# do not daemonize and use stdout
NAME="$(random_name)"
echo $NAME > $ID_FILE
/usr/bin/docker run -i --name "$NAME" -p $AMQP_IP:$AMQP_PORT:5672 -p $AMQP_ADMIN_PORT:15672 -e RABBITMQ_PASS="$AMQP_PASSWORD" myimage-rabbitmq &
PID=$!
wait $PID
}
Finally, myconfig.integration.conf looks like:
[program:rabbit-integration]
command=/path/sh_s/rabbit-integration.sh
startsecs=20
priority=90
autorestart=unexpected
exitcodes=0,1
stopsignal=TERM
In case I want to reuse the same container, the startup function looks like:
function _start_my_container () {
NAME="my_container"
/usr/bin/docker start -i $NAME &
PID=$!
wait $PID
rc=$?
if [[ $rc != 0 ]]; then
_run_my_container
fi
}
where:
function _run_my_container (){
/usr/bin/docker run -p{} -v{} --name "$NAME" myimage &
PID=$!
wait $PID
}
Supervisor requires that the processes it manages do not daemonize, as per its documentation:

Programs meant to be run under supervisor should not daemonize themselves. Instead, they should run in the foreground. They should not detach from the terminal from which they are started.
This is largely incompatible with Docker, where the containers are subprocesses of the Docker daemon itself (and hence are not subprocesses of Supervisor).
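For illustration, here is what the difference looks like in a supervisor program entry (a sketch based on the tutum/rabbitmq image from the question, not a tested configuration):

```ini
; Detached: with -d, `docker run` exits immediately, so supervisord
; ends up supervising a short-lived client process instead of the
; container, and marks the program as exited.
[program:rabbit-detached]
command=/usr/bin/docker run -d -p 5672:5672 tutum/rabbitmq

; Foreground: the client stays attached to the container's output,
; so supervisord at least sees a long-running process.
[program:rabbit-foreground]
command=/usr/bin/docker run -p 5672:5672 tutum/rabbitmq
```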
To be able to use Docker with Supervisor, you could write an equivalent of the pidproxy program that works with Docker.
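A minimal sketch of such a pidproxy-style wrapper is below. All names are hypothetical, and `sleep` stands in for `docker run` so the signal-forwarding pattern can be run without Docker; with a real container, `stop` would call `docker stop "$NAME"` instead of `kill`:

```shell
#!/bin/bash
# Translate supervisord's TERM/INT into a clean shutdown of the child.
stop() {
    echo "stopping child"
    kill "$CHILD_PID" 2>/dev/null    # with Docker: docker stop "$NAME"
}
trap stop TERM INT

sleep 30 &                           # stand-in for: docker run -i myimage
CHILD_PID=$!

( sleep 1; kill -TERM $$ ) &         # simulate supervisord sending TERM

wait "$CHILD_PID"
echo "wrapper exiting"
```

Run directly, this prints "stopping child" followed by "wrapper exiting": the trapped TERM interrupts `wait`, the handler stops the child, and the wrapper exits cleanly, which is exactly the sequence supervisord expects.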
But really, the two tools aren't architected to work together, so you should consider changing one or the other:
You need to make sure you use stopsignal=INT in your supervisor config, then exec docker run normally.
[program:foo]
stopsignal=INT
command=docker run --rm whatever
At least this seems to work for me with Docker version 1.9.1.
If you run docker from inside a shell script, it is very important that you have exec in front of the docker run command, so that docker run replaces the shell process and thus receives the SIGINT directly from supervisord.
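The effect of exec can be checked without Docker: with exec, the command inherits the wrapper shell's PID, so a signal aimed at the wrapper lands directly on the command. A small demonstration (the temp-file path is arbitrary):

```shell
#!/bin/bash
# Compare the wrapper shell's PID with the PID of the exec'd command.
printf '%s\n' 'echo $$' > /tmp/show_pid.sh

pids=$(bash -c 'echo $$; exec bash /tmp/show_pid.sh')
outer=$(echo "$pids" | sed -n 1p)
inner=$(echo "$pids" | sed -n 2p)

# Because exec replaces the shell rather than forking, both PIDs match.
[ "$outer" = "$inner" ] && echo "exec preserved the PID"
```

Without the `exec`, the second PID would belong to a child process, and supervisord's signal would stop at the intermediate shell.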
You can have Docker just not detach, and then things work fine. We manage our Docker containers in this way through supervisor. Docker Compose is great, but if you're already using Supervisor to manage non-docker things as well, it's nice to keep using it to have all your management in one place. We wrap our docker run in a bash script like the following and have supervisor track that, and everything works fine:
#!/bin/bash
TO_STOP=$(docker ps | grep "$SERVICE_NAME" | awk '{ print $1 }')
if [ -n "$TO_STOP" ]; then
    docker stop "$SERVICE_NAME"
fi
TO_REMOVE=$(docker ps -a | grep "$SERVICE_NAME" | awk '{ print $1 }')
if [ -n "$TO_REMOVE" ]; then
    docker rm "$SERVICE_NAME"
fi

docker run -a stdout -a stderr --name="$SERVICE_NAME" \
--rm $DOCKER_IMAGE:$DOCKER_TAG
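For completeness, a supervisor entry for such a wrapper script might look like the following (program name and script path are assumptions, not taken from the answer):

```ini
[program:myservice]
command=/opt/scripts/run-myservice.sh
stopsignal=INT
stopwaitsecs=20
autorestart=true
```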
I found that executing docker run via supervisor actually works just fine, with a few precautions. The main thing one needs to avoid is allowing supervisord to send a SIGKILL to the docker run process, which will kill off that process but not the container itself.
For the most part, this can be handled by following the instructions in Why Your Dockerized Application Isn't Receiving Signals. In short, one needs to:
- Use the CMD ["/path/to/myapp"] form (same for ENTRYPOINT) instead of the shell form (CMD /path/to/myapp).
- Pass --init to docker run.
- If using an ENTRYPOINT script, ensure its last line calls exec, so as to avoid spawning a new process.
- Add a STOPSIGNAL to your Dockerfile.

Additionally, you'll want to make sure that your stopwaitsecs setting in supervisor is greater than the time your process might take to shut down gracefully when it receives a SIGTERM (e.g., graceful_timeout if using gunicorn).
Here's a sample config to run a gunicorn container:
[program:gunicorn]
command=/usr/bin/docker run --init --rm -i -p 8000:8000 gunicorn
redirect_stderr=true
stopwaitsecs=31