
Python Celery task to restart celery worker

In celery, is there a simple way to create a (series of) task(s) that I could use to automagically restart a worker? 在celery中,是否有一种简单的方法来创建一个(一系列)任务,我可以用来自动重启工作程序?

The goal is to have my deployment automagically restart all the child celery workers every time it gets a new source from github. I could then send out a restartWorkers() task to my management celery instance on that machine, which would kill (actually stopwait) all the celery worker processes on that machine and restart them with the new modules.
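A minimal sketch of what that restartWorkers() task might shell out to, assuming the workers were started with `celery multi` (spelled `celeryd_multi` in older Celery releases). The node names and app module below are placeholders:

```python
import subprocess


def build_restart_commands(nodes, app_module="proj"):
    """Build the `celery multi` command lines to stopwait and restart nodes.

    `stopwait` lets running tasks finish before the processes exit, which is
    the behavior the deployment wants.
    """
    stop = ["celery", "multi", "stopwait"] + list(nodes)
    start = ["celery", "multi", "start"] + list(nodes) + ["-A", app_module]
    return stop, start


def restart_workers(nodes, app_module="proj"):
    """Kill the given worker nodes gracefully, then relaunch them.

    Because the new processes are spawned fresh, they import the newly
    deployed source instead of the modules the old workers had loaded.
    """
    stop_cmd, start_cmd = build_restart_commands(nodes, app_module)
    subprocess.check_call(stop_cmd)   # blocks until current tasks drain
    subprocess.check_call(start_cmd)  # fresh processes, fresh imports
```

Shelling out from Python to relaunch Python feels clumsy, as noted below, but it is the only way to guarantee the interpreter re-imports everything.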

The plan is for each machine to have:

  • Management node [Queues: Management, machine-specific] - Responsible for managing the rest of the workers on the machine, bringing up new nodes and killing old ones as necessary
  • Worker nodes [Queues: git-revision-specific, worker-specific, machine-specific] - Actually responsible for doing the work

It looks like the code I need is somewhere in dist_packages/celery/bin/celeryd_multi.py, but the source is rather opaque about starting workers, and I can't tell how it's supposed to work or where it's actually starting the nodes. (It looks like shutdown_nodes is the correct code to call for killing the processes, and I'm slowly debugging my way through it to figure out what my arguments should be.)

Is there a function restart_nodes(self, nodes) somewhere that I could call, or am I going to be running shell scripts from within Python?

Also, is there a simpler way to reload the source into Python than killing and restarting the processes? If I knew that reloading the module actually worked (experiments say it doesn't: changes to functions do not percolate until I restart the process), I'd just do that instead of the indirection with management nodes.
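One reason reloading doesn't percolate, shown here with a fake in-memory module rather than a real worker: any name bound with `from module import x` is a snapshot of the old object, so even after the module's source changes, code holding that binding keeps running the old version.

```python
import sys
import types

# Simulate a task module the worker imported at startup.
fake = types.ModuleType("fake_tasks")
fake.VERSION = 1
sys.modules["fake_tasks"] = fake

# What a running worker typically holds: a direct binding, not the module.
from fake_tasks import VERSION as workers_copy

# "New source" arrives: the module attribute is updated...
fake.VERSION = 2

# ...but the worker's binding still points at the old object.
print(workers_copy)  # prints 1, not 2
```

This is why a fresh process (which re-imports everything) is the reliable way to pick up new code, and why `importlib.reload` alone is not enough for a live worker.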

EDIT: I can now shut down, thanks to broadcast (thank you mihael; if I had more rep, I'd upvote). Is there any way to broadcast a restart? There's pool_restart, but that doesn't kill the node, which means it won't update the source.
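For reference, the shutdown part can be sketched like this: `shutdown` is a built-in remote-control command, so broadcasting it makes the targeted workers finish their current tasks and exit, after which a supervisor (or the management node) respawns them against the new source. The hostnames and helper names here are placeholders:

```python
def shutdown_destinations(hostnames):
    """Build the destination list for a machine-wide shutdown broadcast."""
    return ["celery@%s" % h for h in hostnames]


def restart_machine(app, hostnames):
    """Broadcast 'shutdown' to every worker node on the given hosts.

    The workers drain their current tasks and exit; something external
    (systemd, supervisord, or the management node) must start fresh
    processes, which then import the newly deployed modules.
    """
    app.control.broadcast("shutdown",
                          destination=shutdown_destinations(hostnames))
```

There is no built-in "restart" broadcast that replaces the whole process, which is why the respawn step has to live outside the worker itself.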

I've been looking into some of the behind-the-scenes source in celery.bin.celeryd:WorkerCommand().run(), but there's some weird stuff going on before and after the run call, so I can't just call that function and be done, because it crashes. It makes zero sense to call a shell command from a Python script to run another Python script, and I can't believe I'm the first one to want to do this.

You can try to use the broadcast functionality of Celery.

Here you can see some good examples: https://github.com/mher/flower/blob/master/flower/api/control.py
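For completeness, the pool_restart command mentioned above can also be broadcast in the same style as flower's control API. Note the caveats from the question still apply: it restarts only the pool processes, not the main worker, and module re-importing is unreliable, so it does not substitute for a full process restart. The module name is a placeholder, and the worker must have pool restarts enabled (CELERYD_POOL_RESTARTS = True in older Celery):

```python
def pool_restart_arguments(modules):
    """Arguments dict asking pool_restart to re-import the given modules."""
    return {"reload": True, "modules": list(modules)}


def restart_pools(app, modules):
    """Broadcast pool_restart to all workers, requesting a module reload."""
    app.control.broadcast("pool_restart",
                          arguments=pool_restart_arguments(modules))
```

Use this when you only need to recycle pool children (e.g. to release memory), and the shutdown-and-respawn approach when you need new source loaded.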
