
Gunicorn shared memory between multiprocessing processes and workers

I have a Python application that uses a dictionary as shared memory between multiple processes:

from multiprocessing import Manager
manager = Manager()
shared_dict = manager.dict()  # proxy to a dict living in the manager process

The REST API is implemented using Flask. While using pywsgi or simply Flask.run to initialise the Flask server, everything worked fine. I then decided to throw gunicorn into the mix. Now, when I access this shared dict from any of the workers (even when only one is running), I get the error:

message = connection.recv_bytes(256)  # reject large message
IOError: [Errno 35] Resource temporarily unavailable
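
Below is a minimal sketch of the kind of setup described in the question, assuming the Manager dict is created at module import time and read from inside a Flask view; the /state route and the get_state name are made up for illustration.

from multiprocessing import Manager
from flask import Flask, jsonify

manager = Manager()           # spawns a separate manager process
shared_dict = manager.dict()  # proxy that talks to the manager over a pipe/socket

app = Flask(__name__)

@app.route('/state/<key>')
def get_state(key):
    # reading the proxy from a gunicorn worker is where the IOError above appears
    return jsonify(value=shared_dict.get(key))

if __name__ == '__main__':
    app.run()  # works with Flask.run or pywsgi; the problem starts under gunicorn

With Flask.run or pywsgi a single process creates the manager and handles the requests; gunicorn forks worker processes from a master, which seems to be the difference that triggers the error.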

I have been looking into mmap and multiprocessing's Listener and Client, but they all looked like a lot of overhead.

I don't know about the specific error, but I think the most probable cause is that when you add the web server, processes are initialized on demand, so the manager dict is lost between calls. If the dict is not too big and you can pay the serialization/de-serialization penalty, using the Redis in-memory data structure store with the py-redis library is rather straightforward.
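
A minimal sketch of that suggestion, assuming a Redis server on localhost:6379 and the redis (py-redis) package; the hash name shared_dict and the /items routes are made up for illustration. Every gunicorn worker opens its own client, and the shared state lives in Redis rather than in a manager process.

import json
import redis
from flask import Flask, jsonify, request

app = Flask(__name__)
store = redis.Redis(host='localhost', port=6379, db=0)  # one client per worker

@app.route('/items/<key>', methods=['PUT'])
def set_item(key):
    # serialize on write: this is the serialization penalty mentioned above
    store.hset('shared_dict', key, json.dumps(request.get_json()))
    return jsonify(status='ok')

@app.route('/items/<key>', methods=['GET'])
def get_item(key):
    raw = store.hget('shared_dict', key)
    if raw is None:
        return jsonify(error='not found'), 404
    return jsonify(value=json.loads(raw))  # de-serialize on read

Because all workers read and write the same Redis hash, the data stays consistent across workers and survives individual worker restarts.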
