
Does the number of columns affect the speed of SQLAlchemy?

I created two tables with SQLAlchemy (Python 2.7); the database is MySQL 5.5. Here is my code:

from sqlalchemy import create_engine, MetaData, Table, Column, SmallInteger, BINARY, select
from sqlalchemy.dialects.mysql import TINYINT

engine = create_engine('mysql://root:123@localhost/test')
metadata = MetaData()
conn = engine.connect()

# For table 1:
columns = []
for i in xrange(100):
    columns.append(Column('c%d' % i, TINYINT, nullable=False, server_default='0'))
    columns.append(Column('d%d' % i, SmallInteger, nullable=False, server_default='0'))

user = Table('user', metadata, *columns)
# So user has 100 tinyint columns and 100 smallint columns.

# For table 2:
user2 = Table('user2', metadata,
    Column('c0', BINARY(100), nullable=False, server_default='\0' * 100),
    Column('d0', BINARY(200), nullable=False, server_default='\0' * 200),
)
# So user2 has two columns of 100 and 200 bytes respectively,
# giving the same total row length as user.

I then inserted 4000 rows into each table. Since the two tables have the same row length, I
expected the select speed to be almost the same. I ran the following test code:

from time import time

s1 = select([user]).compile(engine)
s2 = select([user2]).compile(engine)

t1 = time()
result = conn.execute(s1).fetchall()
print 't1:', time() - t1

t2 = time()
result = conn.execute(s2).fetchall()
print 't2:', time() - t2

The result is:

t1: 0.5120000

t2: 0.0149999

Does this mean that the number of columns in a table significantly affects SQLAlchemy's performance? Thanks in advance!

Does this mean that the number of columns in a table significantly affects SQLAlchemy's performance?

This is a tough one; it probably depends more on the underlying SQL engine, MySQL in this case, than on SQLAlchemy itself, which is little more than a way of interacting with different database engines through a common interface.

SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL.

It provides a full suite of well known enterprise-level persistence patterns, designed for efficient and high-performing database access, adapted into a simple and Pythonic domain language.

I may be wrong, but you could try benchmarking it against plain SQL.

I actually ran some tests...

import timeit

setup = """
from sqlalchemy import create_engine, MetaData, select, Table, Column
from sqlalchemy.dialects.sqlite import BOOLEAN, SMALLINT, VARCHAR
engine = create_engine('sqlite://', echo = False)
metadata = MetaData()
conn = engine.connect()
columns = []

for i in xrange(100):
    columns.append(Column('c%d' % i, VARCHAR(1), nullable = False, server_default = '0'))
    columns.append(Column('d%d' % i, VARCHAR(2), nullable = False, server_default = '00'))  


user = Table('user', metadata, *columns)
user.create(engine)
conn.execute(user.insert(), [{}] * 4000)

user2 = Table('user2', metadata, Column('c0', VARCHAR(100), nullable = False, server_default = '0' * 100),  \
                                 Column('d0', VARCHAR(200), nullable = False, server_default = '0' * 200))
user2.create(engine)
conn.execute(user2.insert(), [{}] * 4000)
"""

many_columns = """
s1 = select([user]).compile(engine)
result = conn.execute(s1).fetchall()
"""

two_columns = """
s2 = select([user2]).compile(engine)
result = conn.execute(s2).fetchall()
"""

raw_many_columns = "res = conn.execute('SELECT * FROM user').fetchall()"
raw_two_columns = "res = conn.execute('SELECT * FROM user2').fetchall()"


>>> timeit.Timer(two_columns, setup).timeit(number = 1)
0.010751008987426758
>>> timeit.Timer(raw_two_columns, setup).timeit(number = 1)
0.0099620819091796875
>>> timeit.Timer(many_columns, setup).timeit(number = 1)
0.23563408851623535
>>> timeit.Timer(raw_many_columns, setup).timeit(number = 1)
0.21881699562072754

I did find this:
http://www.mysqlperformanceblog.com/2009/09/28/how-number-of-columns-affects-performance/

Although he runs his tests using max, it is still somewhat interesting...

I really like SQLAlchemy, so I decided to compare it against Python's own sqlite3 module:

import timeit
setup = """
import sqlite3
conn = sqlite3.connect(':memory:')
c = conn.cursor()

c.execute('CREATE TABLE user (%s)' %\
          ("".join(("c%i VARCHAR(1) DEFAULT '0' NOT NULL, d%i VARCHAR(2) DEFAULT '00' NOT NULL," % (index, index) for index in xrange(99))) +\
           "c99 VARCHAR(1) DEFAULT '0' NOT NULL, d99 VARCHAR(2) DEFAULT '00' NOT NULL"))

c.execute("CREATE TABLE user2 (c0 VARCHAR(100) DEFAULT '%s' NOT NULL, d0 VARCHAR(200) DEFAULT '%s' NOT NULL)" % ('0'* 100, '0'*200))

conn.commit()
c.executemany('INSERT INTO user VALUES (%s)' % ('?,' * 199 + '?'), [('0',) * 200] * 4000)
c.executemany('INSERT INTO user2 VALUES (?,?)', [('0'*100, '0'*200)] * 4000)
conn.commit()
"""

many_columns = """
r = c.execute('SELECT * FROM user')
all = r.fetchall()
"""

two_columns = """
r2 = c.execute('SELECT * FROM user2')
all = r2.fetchall()
"""


>>> timeit.Timer(many_columns, setup).timeit(number = 1)
0.21009302139282227
>>> timeit.Timer(two_columns, setup).timeit(number = 1)
0.0083379745483398438

and got the same results, so I do think it is down to the database implementation, not a problem with SQLAlchemy.

Inserting with defaults:

import timeit

setup = """
from sqlalchemy import create_engine, MetaData, select, Table, Column
from sqlalchemy.dialects.sqlite import BOOLEAN, SMALLINT, VARCHAR
engine = create_engine('sqlite://', echo = False)
metadata = MetaData()
conn = engine.connect()
columns = []

for i in xrange(100):
    columns.append(Column('c%d' % i, VARCHAR(1), nullable = False, server_default = '0'))
    columns.append(Column('d%d' % i, VARCHAR(2), nullable = False, server_default = '00'))


user = Table('user', metadata, *columns)
user.create(engine)

user2 = Table('user2', metadata, Column('c0', VARCHAR(100), nullable = False, server_default = '0' * 100),  \
                                 Column('d0', VARCHAR(200), nullable = False, server_default = '0' * 200))
user2.create(engine)
"""

many_columns = """
conn.execute(user.insert(), [{}] * 4000)
"""

two_columns = """
conn.execute(user2.insert(), [{}] * 4000)
"""

>>> timeit.Timer(two_columns, setup).timeit(number = 1)
0.017949104309082031
>>> timeit.Timer(many_columns, setup).timeit(number = 1)
0.047809123992919922

Tested with the sqlite3 module:

import timeit
setup = """
import sqlite3
conn = sqlite3.connect(':memory:')
c = conn.cursor()

c.execute('CREATE TABLE user (%s)' %\
    ("".join(("c%i VARCHAR(1) DEFAULT '0' NOT NULL, d%i VARCHAR(2) DEFAULT '00' NOT NULL," % (index, index) for index in xrange(99))) +\
            "c99 VARCHAR(1) DEFAULT '0' NOT NULL, d99 VARCHAR(2) DEFAULT '00' NOT NULL"))

c.execute("CREATE TABLE user2 (c0 VARCHAR(100) DEFAULT '%s' NOT NULL, d0 VARCHAR(200) DEFAULT '%s' NOT NULL)" % ('0'* 100, '0'*200))
conn.commit()
"""

many_columns = """
c.executemany('INSERT INTO user VALUES (%s)' % ('?,' * 199 + '?'), [('0', '00') * 100] * 4000)
conn.commit()
"""

two_columns = """
c.executemany('INSERT INTO user2 VALUES (?,?)', [('0'*100, '0'*200)] * 4000)
conn.commit()
"""


>>> timeit.Timer(many_columns, setup).timeit(number = 1)
0.14044189453125
>>> timeit.Timer(two_columns, setup).timeit(number = 1)
0.014360189437866211

Samy.vilar's answer is very good. But one key thing to keep in mind is that the number of columns affects performance with any database and any ORM: the more columns you have, the more data has to be read from disk and transferred.

Also, depending on the query and the table structure, adding more columns can turn a query that was covered by an index into one that is forced to access the base table, which on some databases and in some situations can add a lot of time.
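The covering-index point can be sketched with Python's built-in sqlite3 module (the table and index names here are made up for illustration): once an index contains every column a query touches, SQLite answers it from the index alone, which EXPLAIN QUERY PLAN reports as a covering index; add one non-indexed column and the plan falls back to the base table.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE t (a INTEGER, b INTEGER, extra TEXT)')
c.execute('CREATE INDEX idx_ab ON t (a, b)')

# Every column the query needs is in idx_ab: SQLite never touches the table.
covered = c.execute(
    'EXPLAIN QUERY PLAN SELECT a, b FROM t WHERE a = 1').fetchall()
print(covered)  # plan mentions a COVERING INDEX

# One extra column forces a lookup back into the base table for each hit.
not_covered = c.execute(
    'EXPLAIN QUERY PLAN SELECT a, b, extra FROM t WHERE a = 1').fetchall()
print(not_covered)
```

The exact plan text varies between SQLite versions, but the presence or absence of "COVERING INDEX" shows the switch this paragraph describes.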

I have only played with SQLAlchemy a little, but as a DBA I generally advise the developers I work with to query only the columns they need and to avoid "select *" in production code: it will most likely include more columns than required, and it makes the code more fragile when columns are added to the table or view.
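As a rough sketch of that advice, using plain sqlite3 so the effect is clearly not SQLAlchemy-specific (the 200-column table here just mirrors the benchmarks above): fetching only the columns you need from a wide table is much cheaper than SELECT *.

```python
import sqlite3
import timeit

conn = sqlite3.connect(':memory:')
c = conn.cursor()

# A wide table in the spirit of the benchmarks above: 200 small columns.
cols = ', '.join("c%d VARCHAR(2) DEFAULT '0' NOT NULL" % i for i in range(200))
c.execute('CREATE TABLE wide (%s)' % cols)
c.executemany('INSERT INTO wide VALUES (%s)' % ', '.join(['?'] * 200),
              [('0',) * 200] * 4000)
conn.commit()

# SELECT * materializes all 200 values per row; the narrow query only two.
star = timeit.timeit(lambda: c.execute('SELECT * FROM wide').fetchall(), number=10)
narrow = timeit.timeit(lambda: c.execute('SELECT c0, c1 FROM wide').fetchall(), number=10)
print('SELECT *      : %.4fs' % star)
print('SELECT c0, c1 : %.4fs' % narrow)
```

The absolute numbers depend on the machine, but the narrow select should come out well ahead, for the same reason the two-column table wins in the benchmarks above.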
