
Share dynamic data in memory between gunicorn workers

I have a web app written in Django/Celery/Postgres/Gunicorn/Nginx.

The app lets users run scientific simulations. These simulations can take from 5 seconds to 5 minutes. Regular requests and quick simulations are handled with the standard blocking paradigm, while long simulations are run in the background by Celery (some are even submitted to several AWS Lambda instances in parallel) and the client is then updated over a WebSocket.

When a client logs in and opens one of their projects, a Simulation object is initialized and stored in a dict as {user: Simulation}. Initializing this Simulation object takes about 10 seconds, so it is only done once, at the beginning. Every time the user interacts with their simulation on the client side, a particular view looks up the Simulation object in the global dict and applies changes, retrieves data, saves the simulation, runs the simulation, etc.
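The per-user pattern described above can be sketched in plain Python (the class and function names here are hypothetical, not taken from the actual app):

```python
# Hypothetical stand-in for the expensive Simulation class described above;
# in the real app, initialization takes ~10 seconds, so it happens once per user.
class Simulation:
    def __init__(self, user):
        self.user = user
        self.state = {}  # mutated as the user interacts with the simulation


# Module-level dict, one entry per logged-in user: {user: Simulation}.
# It lives inside a single worker process, which is exactly the
# limitation described below.
simulations = {}


def get_simulation(user):
    """Return the user's Simulation, initializing it on first access."""
    if user not in simulations:
        simulations[user] = Simulation(user)  # expensive, done only once
    return simulations[user]
```

Because `simulations` is an ordinary module-level object, each Gunicorn worker process gets its own independent copy of it.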

The problem with this approach is that it only works with a single Gunicorn worker, since additional workers do not have access to the Simulation objects inside the global dict. Moreover, it is not possible to pre-load the objects, since they are constantly changed by the user.

What is the best approach to working with such a global, dynamic object that is too expensive to re-initialize on every request?

I think you want a cache such as Memcached here:

https://docs.djangoproject.com/en/2.1/topics/cache/#memcached

The basic interface is set(key, value, timeout) and get(key):

>>> from django.core.cache import cache
>>> cache.set('my_key', 'hello, world!', 30)
>>> cache.get('my_key')
'hello, world!'
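To see those semantics without configuring Django or running a Memcached server, here is a minimal in-memory stand-in for that interface (a sketch only, not Django's actual implementation; in production you would use `django.core.cache.cache` backed by a shared Memcached process, which is what lets every Gunicorn worker see the same data):

```python
import time


class SimpleCache:
    """Toy cache mimicking Django's set(key, value, timeout) / get(key)."""

    def __init__(self):
        self._store = {}  # key -> (value, absolute expiry time)

    def set(self, key, value, timeout):
        # timeout is in seconds, as in Django's cache API
        self._store[key] = (value, time.monotonic() + timeout)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired, drop it
            return default
        return value


cache = SimpleCache()
cache.set('my_key', 'hello, world!', 30)
print(cache.get('my_key'))  # hello, world!
```

Note that a real Memcached backend stores a serialized (pickled) copy of the value, so any object you cache must be picklable.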
