
nginx + uwsgi + flask and multiprocess

I have a python-flask script running under a uwsgi + nginx deployment configuration. My uwsgi.ini file:

[uwsgi]
pythonpath = /usr/bin/python3
socket = /tmp/grace.sock
chmod-socket = 666
vacuum = true
uid = www-data
gid = www-data
plugin = python3
chdir = /home/grace/pyRep/beta_grace
module = app:app
enable-threads = true
master = true
processes = 3
#cheaper = 1
logto = /home/grace/pyRep/beta_grace/uwsgi.log
lazy-apps = true
single-interpreter = true

Now, inside my script I have a function like this:

from uwsgidecorators import *

global_var = 0

@timer(60)
def foo(signum):
    # uWSGI passes the signal number to the handler
    global global_var
    global_var += 1
    print("global_var:", global_var)

Looking at my logs I find: global_var: 1 global_var: 1 global_var: 1

In my opinion this is due to the lazy-apps option being enabled, so after the fork I have three copies of this task running, and after some time I find:

global_var: 34 global_var: 32 global_var: 32

I tried the @lock and @postfork decorators before the @timer decorator but nothing changes. If I take out the lazy-apps option I have problems connecting to the MongoDB engine and other weird behaviours, so I think I have to keep it. The only solution I found is to limit processes to 1, but this obviously decreases performance. Any advice?!
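For reference, one way to confirm that each worker is running its own copy of the timer is to log the uWSGI worker id alongside the counter. A minimal diagnostic sketch, assuming the script runs under uWSGI (the uwsgi module only exists in that environment):

# Diagnostic sketch: each worker prints its own id, which makes the
# per-process copies of global_var visible in the log.
import uwsgi
from uwsgidecorators import timer

global_var = 0

@timer(60)
def foo(signum):
    global global_var
    global_var += 1
    print("worker %d global_var: %d" % (uwsgi.worker_id(), global_var))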

The key point is that you cannot share/use a simple variable (even a global one) to communicate between multiple processes.

IPC methods can be found here.

For your case, I think Redis is one solution; remember to use a distributed lock.
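For illustration, a minimal sketch of that idea using the redis-py client; the connection settings and the key names (grace:counter, grace:counter-lock) are assumptions, not anything from the original post:

# Sketch only: assumes a local Redis instance and the redis-py package.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def bump_counter():
    # INCR is atomic, so a plain shared counter needs no extra locking.
    value = r.incr("grace:counter")
    print("global_var:", value)

def bump_counter_locked():
    # For read-modify-write sequences that are not atomic, take a
    # distributed lock so only one uWSGI worker updates at a time.
    with r.lock("grace:counter-lock", timeout=10):
        value = int(r.get("grace:counter") or 0) + 1
        r.set("grace:counter", value)
        print("global_var:", value)

Each worker's @timer callback can call one of these, and all three processes will then see the same value.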

I think I found a less invasive solution using a single process and a mule, so now my uwsgi.ini:

processes=1
mules=1

and my python script:

@timer(60, target='mule')

This way I offloaded my main process by binding the timer to the mule and keeping the other tasks on the main process. I thought of using 2 processes + 1 mule, but even with only one process the speed is OK!
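Put together, a minimal sketch of that setup (the print message is just illustrative): with target='mule' the timer is registered in the mule process only, so there is a single copy of the counter instead of one per worker.

# Sketch only: assumes uwsgi.ini contains processes = 1 and mules = 1 as above.
from uwsgidecorators import timer

global_var = 0  # lives in the mule process, so there is exactly one copy

@timer(60, target='mule')
def foo(signum):
    # Fires every 60 seconds inside the mule, not in the HTTP worker.
    global global_var
    global_var += 1
    print("global_var:", global_var)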
