
Is array.count() orders of magnitude slower than list.count() in Python?

Currently I am playing with Python performance, trying to speed up my programs (usually those which compute heuristics). I have always used lists, trying not to get into numpy arrays.

But recently I heard that Python has the array module (8.7. array — Efficient arrays of numeric values), so I thought I would try that one.

I wrote a piece of code to measure array.count() vs. list.count(), as I use it in many places in my code:

from timeit import timeit
import array

a = array.array('i', range(10000))
l = [range(10000)]


def lst():
    return l.count(0)


def arr():
    return a.count(0)


print(timeit('lst()', "from __main__ import lst", number=100000))
print(timeit('arr()', "from __main__ import arr", number=100000))

I was expecting a slight performance improvement when using array. Well, this is what happened:

> python main.py
0.03699162653848456
74.46420751473268

So, according to timeit, list.count() is 2013x faster than array.count(). I definitely didn't expect that. I searched through SO, the Python docs, etc., and the only thing I found was that the objects in an array have to be wrapped into ints first, which could slow things down, but I was expecting that to happen when creating an array.array instance, not when randomly accessing it (which I believe is what .count() does).

So where's the catch?

Am I doing something wrong?

Or maybe I shouldn't use standard arrays and should go straight to numpy.arrays?

where's the catch?

The initial test, as proposed above, does not compare apples to apples:

not mentioning that l = [ range( 10000 ) ] creates a one-element list holding a single range object, not 10000 integers, so l.count( 0 ) scans just one element while a.count( 0 ) scans 10000; nor that Python 2's range() indeed created a RAM-allocated data structure, whereas Python 3's range() (like the former xrange()) resembles a re-formulated, generator-like object ( as seen below ) that will never be comparable to whatever smart RAM-allocated data structure.

>>> L_above_InCACHE_computing = [ range( int( 1E24 ) ) ]    # list is created
>>> L_above_InCACHE_computing.count( 0 )                    # list has no 0 in
0
>>> L_above_InCACHE_computing.count( range( int( 1E24 ) )  )# list has this in
1

The range object's intrinsic .__len__() spits out the length, with still no counting taking place, does it? ( Glad it does not; the materialised sequence would not fit into even ~ 10^20 [TB] of RAM ... , yet it can "live" in py3+ as an object. )

>>> print( L_above_InCACHE_computing[0].__doc__ )
range(stop) -> range object
range(start, stop[, step]) -> range object

Return an object that produces a sequence of integers from start (inclusive)
to stop (exclusive) by step.  range(i, j) produces i, i+1, i+2, ..., j-1.
start defaults to 0, and stop is omitted!  range(4) produces 0, 1, 2, 3.
These are exactly the valid indices for a list of 4 elements.
When step is given, it specifies the increment (or decrement).
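
Fixing just the apples-to-apples part is already telling. The following minimal re-test is a sketch of mine, not from the original post: it puts the same 10000 integers into both containers. Exact numbers will differ per machine, but the three-orders-of-magnitude gap disappears, with array.count() remaining somewhat slower due to per-element un-boxing into Python ints:

from timeit import timeit
import array

a = array.array( 'i', range( 10000 ) )  # 10000 raw C ints, ~40 KB
l = list( range( 10000 ) )              # 10000 Python ints - NOT [ range( 10000 ) ]

# both calls now scan all 10000 elements;
# number=10000 keeps the total run-time reasonable
print( timeit( 'l.count(0)', 'from __main__ import l', number = 10000 ) )
print( timeit( 'a.count(0)', 'from __main__ import a', number = 10000 ) )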

Quantitatively fair testing? Better test-engineering details are needed:

Go well above a few tens of MB, so as to avoid false expectations from InCACHE-computing artifacts that will never scale out to real-world problem sizes:

>>> L_above_InCACHE_computing = [ range( int( 1E24 ) ) ]
>>> L_above_InCACHE_computing[0]
range(0, 999999999999999983222784)

>>> print( L_above_InCACHE_computing[0].__len__() )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: Python int too large to convert to C ssize_t

Go into RAM-feasible, yet above-InCACHE-horizon sizings:

# L_aRunABLE_above_InCACHE_computing = [ range( int( 1E9 ) ) ] # ~8+GB ->array
# it would make no sense to benchmark
# an array.array().count( something ) within an InCACHE horizon
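
A runnable variant of that idea, again only as a sketch: at N = 10**8 the 'i'-typed array takes roughly 0.4 GB, while the equivalent list of boxed Python int objects takes several GB, so shrink N to fit your RAM:

from timeit import timeit
import array

N = 10**8                          # well above typical cache sizes; adjust to your RAM
a = array.array( 'i', range( N ) ) # ~0.4 GB of raw C ints
l = list( range( N ) )             # several GB of boxed Python ints

# one full pass each; number=1 keeps the run-time sane at this scale
print( timeit( 'l.count(0)', 'from __main__ import l', number = 1 ) )
print( timeit( 'a.count(0)', 'from __main__ import a', number = 1 ) )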

go straight to numpy arrays?

Definitely a wise step to test either way. Vectorised internalities may surprise, and often do, a lot :o)

It depends a lot on your other code whether numpy strengths may even boost some other parts of your code-base. Last but not least, beware of premature optimisation and scaling. Some [TIME]-domain traps can be coped with if one can spend more in the [SPACE]-domain, yet the most dangerous is a lost InCACHE-locality, where no tradeoffs may help. So, better do not prematurely lock onto a promising detail, at the cost of losing a global-scale performance target.
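
For completeness, a numpy sketch (assuming numpy is installed): numpy arrays have no .count() method, so the idiomatic equivalent of counting matches is a vectorised comparison plus a reduction:

import numpy as np
from timeit import timeit

n = np.arange( 10000, dtype = np.int32 )

# the vectorised counterpart of .count( 0 ):
# build a boolean mask and count its True cells in compiled loops
print( int( np.count_nonzero( n == 0 ) ) )   # -> 1
print( timeit( 'np.count_nonzero( n == 0 )',
               'from __main__ import np, n',
               number = 100000 ) )

On small, in-cache sizes the mask n == 0 is a throwaway temporary, yet the whole pass typically runs faster than both list.count() and array.count().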
