
Django update settings in Celery's async task

I am using Celery to load neural network models and would like to store the loaded models in settings for fast prediction.

So in django.conf.settings I have:

MODELS = {}

and in my Celery task, I have the following snippet:

@app.task
def load_nn_models(model_name):
    from django.conf import settings
    ...
    settings.MODELS[model_name] = {'model': net, 'graph': sess}

However, I noticed that the tasks run in a separate worker process that loads its own Django environment, so any changes to the settings are not reflected back in the main process.
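For illustration, here is a minimal sketch of why the mutation is lost, using plain multiprocessing as a stand-in for the Celery worker (an assumption about the setup, not the actual code): each process holds its own copy of module-level state such as settings.MODELS.

import multiprocessing

MODELS = {}  # stand-in for settings.MODELS

def load(model_name):
    # This mutates the child process's copy of MODELS only
    MODELS[model_name] = 'loaded'

if __name__ == '__main__':
    p = multiprocessing.Process(target=load, args=('resnet',))
    p.start()
    p.join()
    print(MODELS)  # prints {} -- the parent's copy is unchanged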

Is there a workaround for this?

EDIT

The parameters I am storing in settings are the loaded network object (net) and its session (sess), as in the snippet above.

ANSWER

Django settings are not the right place for this, obviously. First, because the settings object is not a shared resource (there is one instance per process); second, because the documentation explicitly says that this object is to be considered immutable.

If your goal is to have a Celery task compute those objects so that other tasks and/or the frontend can use them, you will have to find a way to serialize them and store the serialized version in a shared resource (database, cache, etc.).
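A minimal sketch of that idea using Django's cache framework, assuming the object can be pickled (a live TensorFlow session generally cannot be; for those, save the weights to disk and share the file path instead) and assuming a cross-process cache backend such as Redis or Memcached (the default local-memory cache is per-process). train_or_load is a hypothetical loader, not a real API:

import pickle

from django.core.cache import cache

@app.task
def load_nn_models(model_name):
    net = train_or_load(model_name)  # hypothetical: build or load the model
    # Serialize and publish to the shared cache; timeout=None keeps it until replaced
    cache.set(f"models:{model_name}", pickle.dumps(net), timeout=None)

def get_model(model_name):
    # Any other process (tasks, views) can fetch and deserialize the model
    blob = cache.get(f"models:{model_name}")
    return pickle.loads(blob) if blob is not None else None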

ANSWER

You can try to use configparser:

import configparser

def dict_from_file():
    # Re-read the file on every call so updates to config.ini are picked up
    config = configparser.ConfigParser()
    config.read("config.ini")
    models = config['models']
    for x in models.values():
        print(x)

Set up the file config.ini:

[models]
var_a: home
var_b: car
var_c: Next

Calling dict_from_file, the output is:

home
car
Next

Update the file config.ini to:

[models]
var_a: home
var_c: New

Calling dict_from_file again, the output is:

home
New
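The same file can be updated programmatically from a task; a minimal sketch (update_models_file is a hypothetical helper, not part of configparser):

import configparser

def update_models_file(key, value):
    # Read the current contents, change one entry, and write the file back
    config = configparser.ConfigParser()
    config.read("config.ini")
    if 'models' not in config:
        config['models'] = {}
    config['models'][key] = value  # configparser stores strings only
    with open("config.ini", "w") as f:
        config.write(f)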

You can read more in the configparser documentation about the supported datatypes: https://docs.python.org/3/library/configparser.html#supported-datatypes
