
Time measuring accuracy

By putting the line

start_time = time.time()

at the start of my code and

print("%f seconds" % (time.time() - start_time))

at the end of my code, I have been measuring the runtime of my script (which can take hours to run). I have heard that this may not be the best method because it is inaccurate. How accurate is it, and is there a better alternative?
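
Put together, my current approach looks like this (slow_work is just a placeholder for the real workload):

import time

def slow_work():
    # placeholder for the real, long-running workload
    return sum(i * i for i in range(10_000_000))

start_time = time.time()
slow_work()
print("%f seconds" % (time.time() - start_time))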

Try default_timer from the standard library's timeit module:

from timeit import default_timer as timer
import logging

logger = logging.getLogger(__name__)  # assumes logging is configured elsewhere

start_time = timer()
# ... the code you want to time ...
end_time = timer()

print(end_time - start_time)  # elapsed time in seconds
logger.info("Duration was {}".format(end_time - start_time))

The documentation for default_timer is worth quoting here: "Define a default timer, in a platform-specific manner. On Windows, time.clock() has microsecond granularity, but time.time()'s granularity is 1/60th of a second. On Unix, time.clock() has 1/100th of a second granularity, and time.time() is much more precise. On either platform, default_timer() measures wall clock time, not the CPU time. This means that other processes running on the same computer may interfere with the timing."
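
Note that the time.clock() behaviour in that quote describes older Python versions; since Python 3.3, default_timer is simply time.perf_counter(). The wall-clock versus CPU-time distinction it mentions can be seen directly with time.perf_counter() and time.process_time(). A minimal sketch (the sleep and the busy loop are just illustrative workloads):

import time

wall_start = time.perf_counter()   # wall-clock timer
cpu_start = time.process_time()    # CPU-time timer for this process

time.sleep(1)                      # costs wall-clock time, almost no CPU time
sum(i * i for i in range(10**6))   # busy work: costs both

print("wall clock: %.3f s" % (time.perf_counter() - wall_start))  # about 1 s plus the busy work
print("CPU time:   %.3f s" % (time.process_time() - cpu_start))   # only the busy work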

Try using datetime:

from datetime import datetime

startTime = datetime.now()
# ... code you want to time ...
print("Time taken:", datetime.now() - startTime)
