Python timeit not working?
Can somebody explain to me why this is happening?
aatiis@aiur ~ $ time python /usr/lib64/python2.7/timeit.py -n 1 \
-- 'x = 10**1000'
1 loops, best of 3: 0.954 usec per loop
real 0m0.055s
user 0m0.050s
sys 0m0.000s
aatiis@aiur ~ $ time python /usr/lib64/python2.7/timeit.py -n 1 \
-- 'x = 10**100000'
1 loops, best of 3: 0.954 usec per loop
real 0m0.067s
user 0m0.040s
sys 0m0.020s
aatiis@aiur ~ $ time python /usr/lib64/python2.7/timeit.py -n 1 \
-- 'x = 10**10000000'
1 loops, best of 3: 0.954 usec per loop
real 0m20.802s
user 0m20.540s
sys 0m0.170s
I get the exact same result from timeit, but time tells me that evaluating 10**10000000 takes more than 20 seconds. The same happens if I call timeit from the interpreter:
>>> t = time.time()
>>> print timeit.timeit('x = 10**10000000;', number=1)
5.00679016113e-06
>>> print time.time() - t
20.6168580055
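The shape of this discrepancy can be reproduced with a much smaller exponent. A minimal sketch (variable names are illustrative) that wraps the entire timeit call in a wall clock, so that any work done inside the call but outside its timed loop shows up in the difference:

```python
import time
import timeit

# Wrap the whole timeit call in a wall clock. The reported per-loop time
# covers only the timed loop; compiling the test statement happens inside
# the call but outside that loop, so 'wall' can exceed 'reported'.
t0 = time.time()
reported = timeit.timeit('x = 10 ** 100', number=1)
wall = time.time() - t0

print(reported)
print(wall)
```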
Why is my timeit not working, or what am I doing wrong?
Additional info:
>>> print sys.version
2.7.1+ (2.7:4f07cacb2c3b+, Mar 28 2011, 23:11:59)
[GCC 4.4.5]
>>> print sys.version_info
sys.version_info(major=2, minor=7, micro=2, releaselevel='alpha', serial=0)
UPDATE:
Here's another very interesting observation:
>>> def run():
... t = time.time()
... x = 10**10000000
... print time.time() - t
When I press enter after defining this function, it takes about half a minute till I get back to a prompt. And then:
>>> run()
2.14576721191e-06
Why is that happening? Is the function body being pre-compiled or optimized somehow?
My guess is that the problem is in how you're stating the problem to timeit. I think what's happening is that the expression is evaluated once, when the test statement is compiled, and then merely looked up (rather than re-evaluated) on each timeit loop. So currently all you're measuring is the time it takes to do the assignment, not the calculation.
You'll need to force the calculation to happen each time:
timeit.timeit('x = 10; y = 100; z = x ** y')
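In the same spirit, here is a sketch that routes the operands through the setup string (the names base and exp are illustrative), so the power cannot be folded into a precomputed constant and really runs on every loop:

```python
import timeit

# The operands live in setup, not in the timed statement itself, so the
# compiler cannot fold base ** exp into a constant at compile time; each
# of the 100 loops actually performs the exponentiation.
t = timeit.timeit(stmt='x = base ** exp',
                  setup='base = 10; exp = 100000',
                  number=100)
print(t)
```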
Edit: in answer to your later question, yes, the function body is being optimized. The compiler sees 10**10000000, realises that it won't ever change, and so calculates the result at compile time rather than at run time.
Compare: 相比:
>>> import dis
>>> def run():
... return 10**100
...
>>> dis.dis(run)
3 0 LOAD_CONST 3 (100000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000L)
3 RETURN_VALUE
And:
>>> def runvar():
... x = 10
... return x**100
...
>>> dis.dis(runvar)
3 0 LOAD_CONST 1 (10)
3 STORE_FAST 0 (x)
4 6 LOAD_FAST 0 (x)
9 LOAD_CONST 2 (100)
12 BINARY_POWER
13 RETURN_VALUE
Notice that BINARY_POWER is executed at runtime only in the second case. So timeit behaves as it should.
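The folding can also be checked without reading disassembly, by inspecting a function's constant pool. A sketch with illustrative function names (note the caveat: modern CPython 3 only folds small literal powers, while Python 2.7 folded results of any size, which is why the question's 10**10000000 stalled at compile time):

```python
def folded():
    # Both operands are literals: the compiler folds 10 ** 10 into the
    # constant 10000000000 at compile time and stores it in co_consts.
    return 10 ** 10

def computed(x):
    # One operand is a variable: the power must be computed at call time,
    # so no precomputed result appears in co_consts.
    return x ** 10

print(10 ** 10 in folded.__code__.co_consts)
print(10 ** 10 in computed.__code__.co_consts)
```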