Raw query in Django much slower than the same query in Postgres
I'm facing the problem of an extremely slow (raw) query in my Django app. Strangely enough, it's not slow when I launch the isolated query from the shell (e.g. python manage.py my_code_query), but it is slow when I run the whole program that contains all my queries (it always "blocks" at the same query; it does eventually complete, but it's something like 100x slower).

It's as if all the queries before the problematic one are consuming memory, and there isn't enough memory left when my query starts.

The same query run directly in Postgres has no problem at all.
I read somewhere (Django cursor.execute(QUERY) much slower than running the query in the postgres database) that the work_mem setting in Postgres can cause this kind of problem, but that post is not very clear about how to set it from Django. Do I have to make a call through connection.cursor().execute() to set the work_mem parameter? Once only?

Could it be another problem than the work_mem setting?

Any hint will be much appreciated. Thanks, Patrick
Inspired by that post (How can I tell Django to execute all queries using 10MB statement mem?), I made this call before executing my cursor:

cursor.execute("set work_mem='100MB'")  # "set statement_mem" does not work

It's running quickly now.
--EDIT: Well, that was yesterday. Today it's not running quickly anymore. I don't know why.
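One way to make this less fragile is to wrap the SET and the query in a small helper so they always travel together. A minimal sketch, assuming any DB-API 2.0 cursor (such as the one returned by django.db.connection.cursor()); the helper name and the query you pass in are placeholders, and set_config() is Postgres's parameter-safe equivalent of SET work_mem = '...':

```python
def run_with_work_mem(cursor, query, params=None, work_mem="100MB"):
    """Run `query` on `cursor` with work_mem raised for this session.

    work_mem is a per-session Postgres setting: it affects only the
    current connection and is lost when that connection closes, so it
    must be issued again on every new connection. set_config(name,
    value, is_local) with is_local=false applies the setting for the
    rest of the session.
    """
    cursor.execute("SELECT set_config('work_mem', %s, false)", (work_mem,))
    cursor.execute(query, params or ())
    return cursor.fetchall()
```

This may also explain the intermittent behaviour: Django opens a fresh database connection per request by default (unless CONN_MAX_AGE is set), so a SET issued on one connection does not carry over to the next one.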