
Is array.count() orders of magnitude slower than list.count() in Python?

Currently I am playing with Python performance, trying to speed up my programs (usually those which compute heuristics). I have always used lists, trying not to get into numpy arrays.

But recently I heard that Python also has a built-in array module (8.7. array — Efficient arrays of numeric values), so I thought I would try that one.

I wrote a piece of code to measure array.count() vs. list.count(), as I use it in many places in my code:

from timeit import timeit
import array

a = array.array('i', range(10000))
l = [range(10000)]


def lst():
    return l.count(0)


def arr():
    return a.count(0)


print(timeit('lst()', "from __main__ import lst", number=100000))
print(timeit('arr()', "from __main__ import arr", number=100000))

I was expecting a slight performance improvement when using array. Well, this is what happened:

> python main.py
0.03699162653848456
74.46420751473268

So, according to timeit, list.count() is about 2013x faster than array.count(). I definitely didn't expect that. I searched through SO, the Python docs, etc., and the only thing I found was that the values stored in an array have to be wrapped into int objects first, which could slow things down, but I was expecting that to happen when creating the array.array instance, not when randomly accessing it (which I believe is what .count() does).

So where's the catch?

Am I doing something wrong?

Or maybe I shouldn't use standard arrays and should go straight to numpy.arrays?

where's the catch?

The initial test, as proposed above, does not compare apples to apples:

l = [range(10000)] does not build a list of 10000 integers. In Python 3, range() returns a lazy range object ( much like the former py2 xrange() ), so l is a one-element list holding that single object. l.count(0) therefore compares 0 against exactly one element, whereas a.count(0) has to walk all 10000 array items, boxing each one into an int on the way. Such a lazily re-formulated object ( as seen below ) will never be comparable to whatever smart RAM-allocated data-structure.

>>> L_above_InCACHE_computing = [ range( int( 1E24 ) ) ]    # list is created
>>> L_above_InCACHE_computing.count( 0 )                    # list has no 0 in
0
>>> L_above_InCACHE_computing.count( range( int( 1E24 ) )  )# list has this in
1

The range object's intrinsic .__len__() reports the length from arithmetic alone, where still no counting takes place ( fortunately so: materialised, it would not fit into even ~ 10^20 [TB] of RAM, yet the object itself can "live" in py3+ ):

>>> print( L_above_InCACHE_computing[0].__doc__ )
range(stop) -> range object
range(start, stop[, step]) -> range object

Return an object that produces a sequence of integers from start (inclusive)
to stop (exclusive) by step.  range(i, j) produces i, i+1, i+2, ..., j-1.
start defaults to 0, and stop is omitted!  range(4) produces 0, 1, 2, 3.
These are exactly the valid indices for a list of 4 elements.
When step is given, it specifies the increment (or decrement).
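That laziness is easy to verify: a range object answers len() and membership queries from pure arithmetic, without materialising a single element ( a small illustrative sketch ):

```python
r = range(10**12)                 # no 10^12 integers are allocated

# __len__ is computed from start, stop and step, not by counting
print(len(r))                     # 1000000000000

# membership testing is O(1) integer arithmetic, not a scan
print(999_999_999_999 in r)      # True
print(10**12 in r)               # False
```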

Quantitatively fair testing? Better test-engineering details are needed:

Go well above a few tens of MB, so as to avoid false expectations from InCACHE-computing artifacts that will never scale out to real-world problem sizes:

>>> L_above_InCACHE_computing = [ range( int( 1E24 ) ) ]
>>> L_above_InCACHE_computing[0]
range(0, 999999999999999983222784)

>>> print( L_above_InCACHE_computing[0].__len__() )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: Python int too large to convert to C ssize_t

Go into RAM-feasible, yet above InCACHE-horizon sizings:

# L_aRunABLE_above_InCACHE_computing = [ range( int( 1E9 ) ) ]  # ~8+ [GB] once materialised as an array
# it would make no sense to benchmark
# an array.array().count( something ) within an InCACHE horizon
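The ~8+ GB estimate comes from per-element overhead; at a RAM-feasible size the footprints can be compared directly ( a sketch; exact byte counts vary across CPython versions and platforms ):

```python
import array
import sys

n = 10**6
a = array.array('i', range(n))    # packed C ints, a.itemsize bytes each
l = list(range(n))                # an array of pointers to boxed int objects

size_array = sys.getsizeof(a)     # includes the packed buffer
size_list = sys.getsizeof(l) + sum(sys.getsizeof(x) for x in l)

print(f"array.array('i'): {size_array / 1e6:5.1f} MB")
print(f"list of ints:     {size_list  / 1e6:5.1f} MB")
```

On a typical 64-bit CPython the packed array lands near 4 MB, while the list plus its boxed ints is roughly an order of magnitude larger.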

Go straight to numpy arrays?

Definitely a wise step to test as well. Vectorised internals may surprise, and often do, a lot :o)

It depends a lot on your other code whether numpy's strengths might also boost other parts of your code-base. Last but not least, beware of premature optimisation and of scaling. Some [TIME]-domain traps can be coped with by spending more in the [SPACE]-domain, but the most dangerous is lost InCACHE-locality, where no tradeoff can help. So better not to prematurely lock onto a promising detail at the cost of losing a global-scale performance target.
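For completeness, the vectorised numpy idiom for the same count is a boolean mask fed to np.count_nonzero ( a sketch, assuming numpy is installed ):

```python
import numpy as np

a = np.arange(10000)              # contiguous C ints, no per-element boxing

# count occurrences of 0 without any Python-level loop
count = int(np.count_nonzero(a == 0))
print(count)                      # 1 -- only a[0] equals 0
```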
