
Why are dictionaries faster than lists in Python?

>>> timeit.timeit('test.append("test")', setup='test = []')
0.09363977164165221
>>> timeit.timeit('test[0] = ("test")', setup='test = {}')
0.04957961010914147

I even tried again with a loop, and got the same result:

>>> timeit.timeit('for i in range(10): test.append(i)', setup='test = []')
1.3737744340367612
>>> timeit.timeit('for i in range(10): test[i] = i', setup='test = {}')
0.8633718070233272

Why is the list slower?

First of all, list.append and dict.__setitem__ are both O(1) in the average case. Of course they have different constant factors, but there is no blanket reason to say that one or the other will be faster, and those factors can change with implementation details, too.
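One rough way to see that both operations are (amortized) constant time is to time a single append or key assignment against containers of growing size; if either were worse than O(1), the per-call time would climb with n. The following is my own minimal sketch, not part of the original answer:

import timeit

# Sketch: time one append / one key assignment on containers of increasing
# size. If both are O(1) on average, the per-call cost should stay roughly
# flat as n grows (append is amortized O(1) because of occasional resizes).
for n in (10**3, 10**5, 10**6):
    list_setup = 'test = list(range(%d))' % n
    dict_setup = 'test = {i: None for i in range(%d)}' % n
    print(n,
          timeit.timeit('test.append(0)', setup=list_setup, number=10**6),
          timeit.timeit('test[0] = 0', setup=dict_setup, number=10**6))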

Secondly, a fairer comparison would remove the attribute-resolution overhead:

>>> timeit.timeit('test[0] = ("test")', setup='test = {}')
0.0813908576965332
>>> timeit.timeit('test_append("test")', setup='test = []; test_append = test.append')
0.06907820701599121

Looking up the method name on the instance is relatively expensive compared to an operation as cheap as append.
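The overhead becomes visible if you time the attribute lookup on its own. A minimal sketch (my own illustration, not from the answer) that separates the lookup, the call, and the dict subscript store:

import timeit

# Sketch: isolate the pieces. 'test.append' alone measures just the
# attribute lookup; the pre-bound 'append(...)' measures just the call;
# the dict version never does an attribute lookup at all (in CPython it
# compiles to a single STORE_SUBSCR opcode).
print(timeit.timeit('test.append', setup='test = []'))                        # lookup only
print(timeit.timeit('test.append("x")', setup='test = []'))                   # lookup + call
print(timeit.timeit('append("x")', setup='test = []; append = test.append'))  # call only
print(timeit.timeit('test[0] = "x"', setup='test = {}'))                      # subscript store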

I also see lists being consistently a little faster once there is some data inside. This example is on Python 3.5.2:

>>> dict_setup = 'import random; test = {random.random(): None for _ in range(1000)}'
>>> list_setup = 'import random; test = [random.random() for _ in range(1000)]; test_append=test.append'
>>> timeit.timeit('test[0] = "test"', setup=dict_setup)
0.06155529400166415
>>> timeit.timeit('test_append("test")', setup=list_setup)
0.057089386998995906
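To check that such a small difference is consistent rather than noise, timeit.repeat can be used and the minimum taken. A sketch of that check, assuming the same 1000-entry setup as above:

import timeit

# Sketch: repeat each measurement several times and take the minimum,
# which is the conventional figure of merit (the larger values are
# inflated by other activity on the machine).
dict_setup = 'import random; test = {random.random(): None for _ in range(1000)}'
list_setup = ('import random; test = [random.random() for _ in range(1000)]; '
              'test_append = test.append')

print('dict:', min(timeit.repeat('test[0] = "test"', setup=dict_setup, repeat=5)))
print('list:', min(timeit.repeat('test_append("test")', setup=list_setup, repeat=5)))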
