
Measuring computation time

I have a program that runs for a fixed number of iterations and then stops. When I measure the computation time for 1000 iterations, I get (for example) 1 second, but for 50000 iterations I get 10 seconds.

Should I expect computation time to scale directly with the iteration count? For the example given, shouldn't 50000 iterations take 50 seconds? I'm just confused by the results…

I am measuring it with the clock() function. Before the loop I call srand(time(NULL)), then declare clock_t startTime; startTime = clock(); … and after the final iteration I compute the elapsed time as ((double)(clock() - startTime)) / CLOCKS_PER_SEC.
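
For reference, a minimal self-contained sketch of that measurement pattern, assuming the structure described above (do_work() is a hypothetical stand-in for the real per-iteration computation):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Hypothetical placeholder for the real per-iteration computation. */
    static void do_work(void) {
        volatile int x = rand();  /* volatile keeps the compiler from removing the call */
        (void)x;
    }

    int main(void) {
        const long iterations = 50000;

        srand((unsigned)time(NULL));   /* seed once, before the loop */

        clock_t startTime = clock();
        for (long i = 0; i < iterations; ++i) {
            do_work();
        }
        double seconds = (double)(clock() - startTime) / CLOCKS_PER_SEC;

        printf("%ld iterations took %.3f s of CPU time\n", iterations, seconds);
        return 0;
    }

Note that clock() measures CPU time consumed by the process, not wall-clock time, so results are not distorted by other programs competing for the machine.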

Measuring performance can be tricky on modern processors because, for example, many of them have a variable clock speed.

This means they normally run slowly when there is little demand for the CPU; running slower lets them use less energy and generate less heat.

When you start a heavy computation, the OS detects that there is now work for the CPU and can raise the clock speed (along with heat and power draw) to get the results faster.

To avoid this specific problem, you should measure the clock cycles used by your computation instead of CPU time (or, worse, wall-clock time). A good profiler should give you this option (I'm using oprofile).
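
As a hedged illustration of counting ticks rather than seconds, here is a sketch using the time-stamp counter via the __rdtsc() intrinsic, assuming an x86/x86-64 CPU compiled with GCC or Clang. This is not what oprofile does: on modern CPUs the TSC ticks at a constant nominal rate even when the core frequency changes, so a profiler that reads the hardware cycle counters is the more reliable way to count actual core cycles.

    #include <stdio.h>
    #include <x86intrin.h>  /* __rdtsc(); assumes x86/x86-64 with GCC or Clang */

    /* Hypothetical stand-in for the computation being measured. */
    static void do_work(void) {
        volatile double x = 0.0;
        for (int i = 0; i < 1000; ++i)
            x += i * 0.5;
        (void)x;
    }

    int main(void) {
        unsigned long long start = __rdtsc();
        do_work();
        unsigned long long ticks = __rdtsc() - start;
        printf("computation took ~%llu TSC ticks\n", ticks);
        return 0;
    }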
