

Celery multi inside docker container

I have a python app with celery in docker containers. I want to have a few workers with different queues. For example:

celery worker -c 3 -Q queue1
celery worker -c 7 -Q queue2,queue3

But I don't know how to do this in docker compose. I found out about celery multi and tried to use it:

version: '3.2'
services:
  app:
    image: "app"
    build:
      context: .
    networks:
      - net
    ports:
      - 5004:5000
    stdin_open: true
    tty: true
    environment:
      FLASK_APP: app/app.py
      FLASK_DEBUG: 1
    volumes:
      - .:/home/app
  app__celery:
    image: "app"
    build:
      context: .
    command: sh -c 'celery multi start 2 -l INFO -c:1 3 -c:2 7 -Q:1 queue1 -Q:2 queue2,queue3'

But I get this:

app__celery_1  |    > celery1@1ab37081acb9: OK
app__celery_1  |    > celery2@1ab37081acb9: OK
app__celery_1 exited with code 0

And my container with celery closes. How do I keep it from closing and get its logs?

UPD: celery multi creates background processes. How do I start celery multi in the foreground?

I solved this task in the following way: I used supervisord instead of celery multi. supervisord starts in the foreground, so my container does not close.

command: supervisord -c supervisord.conf

And I added all queues to supervisord.conf:

[program:celery]
command = celery worker -A app.celery.celery -l INFO -c 3 -Q q1
directory = %(here)s
startsecs = 5
autostart = true
autorestart = true
stopwaitsecs = 300
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0

[program:beat]
command = celery -A app.celery.celery beat -l INFO --pidfile=/tmp/beat.pid
directory = %(here)s
startsecs = 5
autostart = true
autorestart = true
stopwaitsecs = 300
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0

[supervisord]
loglevel = info
nodaemon = true
pidfile = /tmp/supervisord.pid
logfile = /dev/null
logfile_maxbytes = 0

Depending on your application needs and design, you may actually want to separate the workers into different containers for different tasks.

However, if resource usage is low and it makes sense to combine multiple workers in a single container, you can do it via an entrypoint script.
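
As a rough docker-compose sketch of the separate-container approach, with one worker service per queue set (service names are made up; the image, concurrency and queues follow the question, and the -A app path is taken from the supervisord config above):

version: '3.2'
services:
  worker_queue1:
    image: "app"
    command: celery worker -A app.celery.celery -l INFO -c 3 -Q queue1
  worker_queue2_3:
    image: "app"
    command: celery worker -A app.celery.celery -l INFO -c 7 -Q queue2,queue3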

Edit 2019-12-05: after running this for a while, it's not a good idea for production use. Two caveats:

  1. There is a risk of the background worker silently exiting but not being captured in the foreground. The tail -f will continue to run, but docker will not know the background worker stopped. Depending on your celery debug level settings, the logs may show some indication, but it is unknown to docker when you do docker ps. To be reliable, the workers need to restart on failure, which brings us back to the suggestion of using supervisord.

  2. As a container is started and stopped (but not removed), the docker container state is kept. This means that if your celery workers depend on a pidfile for identification, and there is an ungraceful shutdown, there is a chance that the pidfile is kept and the worker will not restart cleanly even with docker stop; docker start. This is because celery startup detects the leftover pidfile from the previous unclean shutdown. To prevent multiple instances, the restarted worker stops itself with "PIDfile found, celery is already running?". The whole container must be removed with docker rm, or docker-compose down; docker-compose up. A few ways of dealing with this (a small pidfile-cleanup sketch follows this list):

    a. the container must be run with the --rm flag so that the container is removed once it is stopped.

    b. perhaps not including the --pidfile parameter in the celery multi or celery worker command would work better.
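
As a small related sketch (my own addition, not part of the original answer): leftover pidfiles can also be cleared at the top of the entrypoint script, before celery multi start runs. The pattern matches the pidfile naming used in the gist below:

# remove pidfiles left over from a previous unclean shutdown (adjust the path/pattern to your setup)
rm -f ./celery-*.pid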

Summary recommendation: it is probably better to use supervisord.

Now, on to the details:

Docker containers need a foreground task to be running, or the container will exit. This will be addressed further down.

In addition, celery workers may run long-running tasks and need to respond to docker's shutdown (SIGTERM) signal to shut down gracefully, i.e. finish long-running tasks before shutdown or restart.

To achieve docker signal propagation and handling, it is best to declare the entrypoint within a Dockerfile in docker's exec form; you may also do this in the docker-compose file.

In addition, since celery multi works in the background, docker can't see any logs. You'll need to show the logs in the foreground so that docker logs can see what is happening. We'll do this by setting a logfile for the celery multi workers and displaying it in the console foreground with tail -f <logfile_pattern>, which runs indefinitely.

We need to achieve three objectives:

  1. Run the docker container with a foreground task
  2. Receive, trap, and handle docker shutdown signals
  3. Shut down the workers gracefully

For #1, we will run tail -f & and then wait on it as the foreground task.

For #2, this is achieved by setting the trap function and trapping the signal. To receive and handle signals with the trap function, wait has to be the running foreground task, which is achieved in #1.

For #3, we will run celery multi stop <number_of_workers_in_start_command>, mirroring the argument parameters used during startup in celery multi start.

Here's the gist I wrote, copied here:

#!/bin/sh

# safety switch, exit script if there's error. Full command of shortcut `set -e`
set -o errexit
# safety switch, uninitialized variables will stop script. Full command of shortcut `set -u`
set -o nounset

# tear down function
teardown()
{
    echo " Signal caught..."
    echo "Stopping celery multi gracefully..."

    # send shutdown signal to celery workers via `celery multi`
    # command must mirror some of `celery multi start` arguments
    celery -A config.celery_app multi stop 3 --pidfile=./celery-%n.pid --logfile=./celery-%n%I.log

    echo "Stopped celery multi..."
    echo "Stopping last waited process"
    kill -s TERM "$child" 2> /dev/null
    echo "Stopped last waited process. Exiting..."
    exit 1
}

# start 3 celery worker via `celery multi` with declared logfile for `tail -f`
celery -A config.celery_app multi start 3 -l INFO -Q:1 queue1 -Q:2 queue1 -Q:3 queue3,celery -c:1-2 1 \
    --pidfile=./celery-%n.pid \
    --logfile=./celery-%n%I.log

# start trapping signals (docker sends `SIGTERM` for shutdown)
trap teardown SIGINT SIGTERM

# tail all the logs continuously to console for `docker logs` to see
tail -f ./celery*.log &

# capture process id of `tail` for tear down
child=$!

# waits for `tail -f` indefinitely and allows external signals,
# including docker stop signals, to be captured by `trap`
wait "$child"

Use the code above as the contents of the entrypoint script file, and modify it according to your needs.

Declare it in the Dockerfile or docker-compose file in exec form:

ENTRYPOINT ["entrypoint_file"]
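
Or, equivalently, in the docker-compose file; this sketch reuses the service from the question, and the list form is the exec form:

  app__celery:
    image: "app"
    build:
      context: .
    entrypoint: ["entrypoint_file"]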

The celery workers can then run in the docker container and can also be gracefully stopped.

First, I don't understand the advantage of using multi & docker. As I see it, you want each worker in a separate container. That way you have flexibility and a micro-services environment.

If you still want to have multiple workers in the same container, I can suggest a workaround to keep your container open: add while true; do sleep 2; done to the end of your command, i.e. celery multi start 2 -l INFO -c:1 3 -c:2 7 -Q:1 queue1 -Q:2 queue2,queue3 && while true; do sleep 2; done.

Alternatively, wrap it in a short script:

#!/bin/bash
celery multi start 2 -l INFO -c:1 3 -c:2 7 -Q:1 queue1 -Q:2 queue2,queue3
while true; do sleep 2; done
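
Assuming that script is copied into the image and made executable (the name start_workers.sh is made up here), the compose service from the question would then simply run it:

  app__celery:
    image: "app"
    build:
      context: .
    command: ./start_workers.sh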
