
time.clock() and time.time() resolution in Python2/3

I'm getting really, really confused about the precision of the results of the functions above.
To me the documentation isn't clear at all; for example, here are two excerpts:

from time module documentation

The precision of the various real-time functions may be less than suggested by the units in which their value or argument is expressed. E.g. on most Unix systems, the clock "ticks" only 50 or 100 times a second.

from timeit module documentation

Define a default timer, in a platform-specific manner. On Windows, time.clock() has microsecond granularity, but time.time()'s granularity is 1/60th of a second. On Unix, time.clock() has 1/100th of a second granularity, and time.time() is much more precise. On either platform, default_timer() measures wall clock time, not the CPU time. This means that other processes running on the same computer may interfere with the timing.

Now, since real time on Unix is returned by time.time(), and that has a resolution far better than 1/100 of a second, how can it just "tick" 50 or 100 times a second?


Still on the subject of resolution, I can't work out what exact resolution I get from calling each function, so I tried the following and put my guesses in the comments:

>>> time.clock()
0.038955                            # a resolution of a microsecond?
>>> time.time()
1410633457.0955694                  # a resolution of 10^-7 second?
>>> time.perf_counter()
4548.103329075                      # a resolution of 10^-9 second (i.e. a nanosecond)?

P.S. This was tried on Python 3.4.0. In Python 2, time.clock() and time.time() always give me 6 digits after the decimal point, so 1 µs precision?

Precision relates to how often the value changes.

If you could call any of these functions infinitely fast, each would return a new value at a different rate.

Because each returns a floating-point value, which doesn't have absolute precision, you cannot tell from their return values alone what precision they have. You'll need to measure how the values change over time to see what their precision is.

To show the differences, run:

import time

def average_deltas(*t):
    # average difference between consecutive timer readings
    deltas = [t2 - t1 for t1, t2 in zip(t, t[1:])]
    return sum(deltas) / len(deltas)

for timer in time.clock, time.time, time.perf_counter:
    # take 1000 readings in quick succession and scale the average delta to microseconds
    average = average_deltas(*(timer() for _ in range(1000))) * 10 ** 6
    print('{:<12} {:.10f}'.format(timer.__name__, average))

On my Mac this prints:

clock        0.6716716717
time         0.2892525704
perf_counter 0.1550070010

So perf_counter has the greatest precision on my architecture, because it changes more often per second, making the delta between values smaller.
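
You can also probe this more directly. The sketch below is my own addition rather than part of the original measurement: it busy-polls each timer until its value actually changes, which gives a rough upper bound on the size of a single tick. time.clock is left out so the snippet also runs on Python 3.8+, where it no longer exists:

import time

def smallest_tick(timer, samples=1000):
    """Spin until `timer` changes value; return the smallest increment seen."""
    smallest = float('inf')
    for _ in range(samples):
        start = timer()
        current = timer()
        while current == start:      # busy-wait until the clock actually ticks
            current = timer()
        smallest = min(smallest, current - start)
    return smallest

for timer in (time.time, time.perf_counter):
    print('{:<12} {:.10f}'.format(timer.__name__, smallest_tick(timer)))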

You can use the time.get_clock_info() function to query what precision each method offers:

>>> for timer in time.clock, time.time, time.perf_counter:
...     name = timer.__name__
...     print('{:<12} {:.10f}'.format(name, time.get_clock_info(name).resolution))
... 
clock        0.0000010000
time         0.0000010000
perf_counter 0.0000000010
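
As a side note, get_clock_info() reports more than just the resolution: it also tells you which OS clock backs each timer, whether it is monotonic, and whether it can be adjusted (e.g. by NTP). A quick sketch, using clock names that exist on Python 3.3+:

import time

for name in ('time', 'monotonic', 'perf_counter'):
    info = time.get_clock_info(name)
    print('{:<12} impl={} monotonic={} adjustable={} resolution={}'.format(
        name, info.implementation, info.monotonic, info.adjustable, info.resolution))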

Just want to update this as it has changed a bit recently.

Using Python 3.8.11 on Ubuntu.

There is no time.clock any more; it was removed in Python 3.8.

The delta method in the accepted answer doesn't give good metrics. Run it a good few times, and swap the order of the timers, and you will see bad variations.

However...

import time
for timer in time.time, time.perf_counter:
    name = timer.__name__
    print('{:<12} {:.10f}'.format(name, time.get_clock_info(name).resolution))

time         0.0000000010
perf_counter 0.0000000010

Both are showing nanosecond resolution.
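
If you want to sidestep floating-point rounding entirely, Python 3.7+ also provides integer nanosecond variants of these clocks; a minimal sketch:

import time

# The *_ns() variants (Python 3.7+) return integers in nanoseconds,
# so the reading is not limited by float precision.
print('time_ns        :', time.time_ns())
print('perf_counter_ns:', time.perf_counter_ns())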
