Compiling celery worker with PyInstaller
I am trying to compile a script that starts a Celery worker pool using PyInstaller. The issue seems to be related to spawning new processes.
My code:
from multiprocessing import freeze_support

import app

if __name__ == "__main__":
    freeze_support()
    app.celery.worker_main(
        argv=['--broker=redis://localhost:6379', '--loglevel=DEBUG']
    )
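For context, the app module is not shown in the question; a minimal app.py along the following lines is assumed here (the module layout and the example task are hypothetical, not taken from the post):

# app.py -- hypothetical minimal module; the original question does not show it.
# The attribute name "celery" must match the app.celery.worker_main(...) call above.
from celery import Celery

celery = Celery('app', broker='redis://localhost:6379')

@celery.task
def add(x, y):
    # trivial example task so the worker has something registered
    return x + y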
The script compiles and launches, but then I get a RuntimeError:
Traceback (most recent call last):
  File "c:\users\nbrown\documents\trach_and_trace\lib\site-packages\celery\worker\__init__.py", line 206, in start
    self.blueprint.start(self)
  File "c:\users\nbrown\documents\trach_and_trace\lib\site-packages\celery\bootsteps.py", line 123, in start
    step.start(parent)
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\celery\bootsteps.py", line 374, in start
    return self.obj.start()
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\celery\concurrency\base.py", line 131, in start
    self.on_start()
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\celery\concurrency\prefork.py", line 117, in on_start
    **self.options)
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\billiard\pool.py", line 968, in __init__
    self._create_worker_process(i)
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\billiard\pool.py", line 1064, in _create_worker_process
    w.start()
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\billiard\process.py", line 137, in start
    self._popen = Popen(self)
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\billiard\forking.py", line 242, in __init__
    cmd = get_command_line() + [rhandle]
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\billiard\forking.py", line 356, in get_command_line
    is not going to be frozen to produce a Windows executable.''')
RuntimeError: Attempt to start a new process before the current process has finished its bootstrapping phase. This probably means that you have forgotten to use the proper idiom in the main module:

    if __name__ == '__main__':
        freeze_support()

The "freeze_support()" line can be omitted if the program is not going to be frozen to produce a Windows executable.
These errors occur in some sort of infinite loop. My best guess about how to solve this issue is to put a freeze_support call somewhere else, but after looking through all the files in the traceback I see no obvious place to do this. Any suggestions about alternate ways to invoke the worker pool that might work would also be much appreciated.
I encountered the same problem. After several hours of research, I arrived at the following solution:
from billiard import freeze_support

# define your celery app here
# ...

if __name__ == "__main__":
    freeze_support()
    app.celery.worker_main(
        argv=['--broker=redis://localhost:6379', '--loglevel=DEBUG']
    )
Don't use freeze_support from multiprocessing! Celery's worker processes are pre-forked by the billiard module, which is a fork of multiprocessing. My working environment was: 1) Windows 10, 32-bit; 2) Python 3.6.5; 3) Celery 3.1.25.
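To see why the multiprocessing version of freeze_support has no effect here, a quick check like the following (my own sketch, not part of the original answer) shows that billiard ships its own copy of the process machinery, separate from the standard library:

# Sketch: billiard is a standalone package, so celery's prefork pool never
# consults multiprocessing.freeze_support when it spawns workers.
import billiard
import multiprocessing

print(billiard.freeze_support is multiprocessing.freeze_support)  # prints False
print(billiard.__file__)  # resolves to site-packages/billiard, not the stdlib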
With the settings above, I got a packaged exe that runs directly. Good luck!
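If the frozen worker later fails with missing-module import errors, one common follow-up (an assumption on my part, not part of the original answer) is to list Celery's dynamically imported submodules as hidden imports in the PyInstaller .spec file. A fragment along these lines, assuming the entry script is named main.py:

# main.spec (fragment) -- hypothetical; "main.py" is an assumed entry-script name.
# collect_submodules pulls in modules that celery and kombu import dynamically at runtime.
from PyInstaller.utils.hooks import collect_submodules

hiddenimports = collect_submodules('celery') + collect_submodules('kombu.transport')

a = Analysis(
    ['main.py'],
    hiddenimports=hiddenimports,
)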