Gunicorn shared memory between multiprocessing processes and workers

I have a Python application that uses a dictionary as shared memory between multiple processes:

from multiprocessing import Manager

manager = Manager()           # starts a separate manager process
shared_dict = manager.dict()  # proxy object; every access is IPC to that process

The REST API is implemented with Flask. Everything worked fine while the server was started with pywsgi or plain Flask.run. Then I decided to throw gunicorn into the mix. Now, when I access this shared dict from any of the workers (even with only one worker running), I get the error:

message = connection.recv_bytes(256) # reject large message
IOError: [Errno 35] Resource temporarily unavailable
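
For illustration, a minimal version of such a setup might look like the sketch below (the route and names are placeholders, not the original code). The key point is that the Manager and its dict are created at import time, before gunicorn forks its workers, so every worker shares the connection it inherited from the master:

from multiprocessing import Manager
from flask import Flask, jsonify

manager = Manager()           # manager process is started at import time
shared_dict = manager.dict()

app = Flask(__name__)

@app.route("/items/<key>")
def get_item(key):
    # Under gunicorn, this runs in a forked worker; every read of the
    # proxy goes over the connection inherited from the master process.
    return jsonify({key: shared_dict.get(key)})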

I have been looking into mmap and multiprocessing's Listener and Client, but they all look like a lot of overhead.
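For comparison, a bare-bones Listener-based store (the address, authkey, and tuple protocol below are all invented for illustration) shows the moving parts involved: a dedicated server process plus a hand-rolled message format.

from multiprocessing.connection import Listener

store = {}
with Listener(("localhost", 6000), authkey=b"secret") as listener:
    while True:
        with listener.accept() as conn:
            # Tiny ad-hoc protocol: ("set", key, value) or ("get", key, None);
            # one request is handled per connection, then it is closed.
            cmd, key, value = conn.recv()
            if cmd == "set":
                store[key] = value
                conn.send(True)
            else:
                conn.send(store.get(key))

Each worker would then connect with Client(("localhost", 6000), authkey=b"secret") and exchange those tuples, which is exactly the kind of boilerplate in question.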

I don't know about the specific error, but the most probable cause is that once you put gunicorn in front, worker processes are initialized on demand, so the shared_dict is lost between calls. If the dict is not too big and you can pay the serialization/deserialization penalty, using Redis, an in-memory data structure store, with the redis-py library is rather straightforward.
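For example, a minimal sketch with redis-py (assuming a Redis server on localhost:6379; the hash name "shared_dict" and the helper names are just illustrative):

import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def dict_set(key, value):
    # The JSON round trip is the serialization penalty mentioned above.
    r.hset("shared_dict", key, json.dumps(value))

def dict_get(key):
    raw = r.hget("shared_dict", key)
    return json.loads(raw) if raw is not None else None

Each gunicorn worker opens its own connection to the Redis server, so the data is visible across all workers and survives worker restarts.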
