
How to effectively measure a difference in run-time

One of the exercises in TC++PL asks:

Write a function that either returns a value or that throws that value based on an argument. Measure the difference in run-time between the two ways.

It's a great pity he never explains how to measure such things. I'm not sure whether I'm supposed to write a simple "time start, time end" counter, or whether there are more effective and practical ways.

For each of the functions,

  • Get the start time
  • Call the function a million times (or more... a million isn't that many, really)
  • Get the end time and subtract the start time from it

and compare the results. That's about as practical as performance measuring gets.
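
A minimal sketch of that recipe, assuming a C++11 compiler; the names returnValue and throwValue are hypothetical stand-ins for the exercise's two variants:

#include <chrono>
#include <iostream>

// Hypothetical functions for the exercise: one returns the value,
// the other throws it.
int returnValue(int x) { return x; }
void throwValue(int x) { throw x; }

int main() {
    using clock_type = std::chrono::steady_clock;
    const int iterations = 1000000;
    volatile int sink = 0;   // keeps the optimizer from dropping the loops

    auto start = clock_type::now();
    for (int i = 0; i < iterations; ++i)
        sink = returnValue(i);
    auto mid = clock_type::now();

    for (int i = 0; i < iterations; ++i) {
        try {
            throwValue(i);
        } catch (int v) {
            sink = v;
        }
    }
    auto end = clock_type::now();

    std::chrono::duration<double, std::milli> returning = mid - start;
    std::chrono::duration<double, std::milli> throwing = end - mid;
    std::cout << "return: " << returning.count() << " ms\n"
              << "throw:  " << throwing.count() << " ms\n";
    return 0;
}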

Consider using boost.timer; it is about as simple as it gets.

#include <iostream>
#include <boost/timer.hpp>

int main() {
    boost::timer timer;   // starts timing on construction
    for (int i = 0; i < 100000; ++i) {
        // ... whatever you want to measure
    }
    std::cout << timer.elapsed() << " seconds.\n";
    return 0;
}
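One caveat worth noting: this older boost::timer from boost/timer.hpp measures CPU time via std::clock(), not wall-clock time, and it was later deprecated in favour of boost::timer::cpu_timer in boost/timer/timer.hpp, which reports wall, user, and system time separately.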

He doesn't explain because that's part of the exercise.

Seriously, though, I believe you should write a simple "time start, time end" measurement with a big loop inside.

The only measurement that ever matters is wall-clock time. Don't let anyone trick you into believing otherwise. Read Lean Thinking if you don't believe it yourself.

The simplest way would be to get the system time before and after your run. If that doesn't give enough resolution, you can look into some of the higher-resolution system timers.
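
For instance, here is a minimal sketch (assuming a C++11 compiler) contrasting the one-second resolution of time() with std::chrono::high_resolution_clock:

#include <chrono>
#include <ctime>
#include <iostream>

int main() {
    // Coarse: time() only has one-second resolution.
    std::time_t t0 = std::time(0);
    // ... code under test ...
    std::time_t t1 = std::time(0);
    std::cout << std::difftime(t1, t0) << " s (whole seconds only)\n";

    // Finer: high_resolution_clock ticks are typically nanoseconds.
    auto h0 = std::chrono::high_resolution_clock::now();
    // ... code under test ...
    auto h1 = std::chrono::high_resolution_clock::now();
    std::cout << std::chrono::duration<double>(h1 - h0).count() << " s\n";
    return 0;
}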

If you need more detail, you can use a commercial product like dotTrace: http://www.jetbrains.com/profiler/
