
Compiling a Celery worker with PyInstaller

I am trying to compile a script that starts a Celery worker pool using PyInstaller. The issue seems to be related to spawning new processes.

My code:

from multiprocessing import freeze_support
import app

if __name__ == "__main__":
    freeze_support()
    app.celery.worker_main(
        argv=['--broker=redis://localhost:6379', '--loglevel=DEBUG']
    )

The script compiles and launches but then I get a RuntimeError:

Traceback (most recent call last):
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\celery\worker\__init__.py", line 206, in start
    self.blueprint.start(self)
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\celery\bootsteps.py", line 123, in start
    step.start(parent)
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\celery\bootsteps.py", line 374, in start
    return self.obj.start()
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\celery\concurrency\base.py", line 131, in start
    self.on_start()
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\celery\concurrency\prefork.py", line 117, in on_start
    **self.options)
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\billiard\pool.py", line 968, in __init__
    self._create_worker_process(i)
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\billiard\pool.py", line 1064, in _create_worker_process
    w.start()
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\billiard\process.py", line 137, in start
    self._popen = Popen(self)
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\billiard\forking.py", line 242, in __init__
    cmd = get_command_line() + [rhandle]
  File "c:\users\nbrown\documents\track_and_trace\lib\site-packages\billiard\forking.py", line 356, in get_command_line
    is not going to be frozen to produce a Windows executable.'''")
RuntimeError: Attempt to start a new process before the current process has finished its bootstrapping phase. This probably means that you have forgotten to use the proper idiom in the main module:
    if __name__ == '__main__':
        freeze_support()
    The "freeze_support()" line can be omitted if the program is not going to be frozen to produce a Windows executable.

These errors repeat in what looks like an infinite loop. My best guess is that a freeze_support() call belongs somewhere else, but after looking through all the files in the traceback I see no obvious place to put it. Suggestions for alternate ways to invoke the worker pool that might work would also be much appreciated.
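The infinite loop itself is the classic symptom of a frozen Windows executable spawning children without the proper main-module idiom: each child re-executes the entry script from the top, and any process-spawning code that is not behind the `__main__` guard spawns again. A minimal sketch with stdlib multiprocessing (not Celery) showing the correct idiom:

```python
# Minimal sketch of the freeze_support() idiom using stdlib
# multiprocessing. In a frozen Windows exe, every child process
# re-executes this entry module, so all spawning must stay inside
# the __main__ guard, with freeze_support() called first.
from multiprocessing import Process, Queue, freeze_support


def work(q):
    # Runs in the child process.
    q.put("hello from child")


def main():
    q = Queue()
    p = Process(target=work, args=(q,))
    p.start()           # safe: only reached from the guarded block below
    result = q.get()
    p.join()
    return result


if __name__ == "__main__":
    freeze_support()    # no-op unless running from a frozen binary
    print(main())
```

The catch in this question, as the answer below explains, is that Celery does not spawn its pool through `multiprocessing` at all, so this guard alone is not enough.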

I encountered the same problem. After several hours of research, I arrived at the following solution:

from billiard import freeze_support
# define your celery app here
# ...
if __name__ == "__main__":
    freeze_support()
    app.celery.worker_main(
        argv=['--broker=redis://localhost:6379', '--loglevel=DEBUG']
    )

Don't use freeze_support from multiprocessing! Celery's worker processes are preforked by the billiard module, which is a fork of multiprocessing, so it is billiard's own freeze_support that must be called. My working environment was: 1) Windows 10, 32-bit, 2) Python 3.6.5, 3) Celery 3.1.25.

With the settings above, I got a packaged exe that can be run directly. Good luck!
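For completeness, a script like the one above is typically frozen with a plain PyInstaller invocation; the entry-script name `main.py` here is an assumption for illustration:

```shell
# Hypothetical build command ("main.py" is assumed to be the entry
# script shown above). If the frozen exe later fails with ImportError
# on modules Celery loads dynamically, add them via --hidden-import.
pyinstaller --onefile main.py
```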

