SQL statements run much slower in Python than command-line sqlite3
I have a SQL script that creates 2 temp tables, then creates 2 permanent tables based on the temp tables (4 steps). No rows are returned. When that script is run from the command line using

$ sqlite3 MyDB.db -init Summarize.sql

it takes less than a minute.
When the SQL is run as one big string in Python (via the sqlite3 standard module) with

connection.executescript("""<contents of Summarize.sql here>""")

it takes 15 minutes. (The first 3 steps take 1 minute; the last step takes the remaining 14.)
Given that it's the exact same SQL, what could Python be doing that slows it down so much? How can I speed this up? (The last step is a drop table if exists xxx; create table xxx as select ... It's a join, but it's the same join that the command line runs quickly.) A friend suggested the apsw module, but I'm loath to switch without strong cause.
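For reference, a minimal sketch of the executescript approach described above, using an in-memory database and made-up table and column names (the real Summarize.sql and MyDB.db are not shown in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for MyDB.db

# Same shape as the script: temp tables first, then permanent
# tables built from them. The script itself returns no rows.
conn.executescript("""
    CREATE TEMP TABLE t1 AS SELECT 1 AS id, 'a' AS val;
    CREATE TEMP TABLE t2 AS SELECT 1 AS id, 'b' AS val;
    DROP TABLE IF EXISTS perm1;
    CREATE TABLE perm1 AS SELECT * FROM t1;
    DROP TABLE IF EXISTS perm2;
    CREATE TABLE perm2 AS
        SELECT t1.id, t1.val, t2.val AS val2
        FROM t1 JOIN t2 ON t1.id = t2.id;
""")
print(conn.execute("SELECT COUNT(*) FROM perm2").fetchone()[0])
```

executescript commits any pending transaction and then runs the whole string as one batch, which matches how the question feeds the file's contents to Python.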
In my tests, calling SQLite from Python is about 100 times slower than a command-line call. Here is a workaround: I output my query to a text file sql.txt and invoke a command-line SQLite call from Python:
import subprocess

# Feed sql.txt to the sqlite3 CLI and print whatever it writes to stdout.
result = subprocess.run("sqlite3 reviews.db < sql.txt", shell=True,
                        capture_output=True, text=True)
print(result.stdout)
You can add multiple queries to sql.txt, and pull in other files by using the .read filename command.
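As a sketch of what sql.txt might contain under that approach (the file and table names here are illustrative, except Summarize.sql and xxx, which come from the question):

```sql
-- sql.txt: statements run in order by the sqlite3 CLI
.read Summarize.sql
SELECT COUNT(*) FROM xxx;
```

Note that .read is a sqlite3 shell dot-command, not SQL, so this file only works when fed to the command-line tool, not to Python's sqlite3 module.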