
Why does clock() give a negative number when measuring CPU time?

I am using clock() to measure the amount of CPU time for my algorithm.

The code is like:

start_time = clock();
//code to be timed
// ...
end_time = clock();
elapsed_time = (end_time - start_time)*1000 / CLOCKS_PER_SEC;

printf("Time taken %d seconds %d milliseconds\n", elapsed_time/1000, elapsed_time%1000 );

But I get "0 seconds -175 milliseconds" as a result, and I can't understand why. It also seems that "1 seconds 349 milliseconds" of reported CPU time can take 10 minutes or more of elapsed (wall-clock) time. Is that common?

Forking is a special case where this type of code can produce a negative time. One of the reasons is that clock() returns the number of clock ticks since the start of the calling process.

Just as a reminder, the value in start_time will be copied over to the child process.

  • For the parent process, the time should be positive, since the clock tick counts in start_time and end_time refer to the same process.

  • For the child process, which only starts to exist at fork(), clock() returns the number of clock ticks the program has run from that point onwards. The time before the fork() is not counted.

    Since the starting reference points for counting clock ticks differ:

    • start_time is the number of clock ticks from the start of the parent process to the first clock() call
    • end_time is the number of clock ticks from the start of the child process to the second clock() call

    the subtraction in the child may give a negative result. A positive result is also possible, if the child process runs long enough for its own tick count to exceed the parent's tick count at the time of the fork().

EDIT

I am not sure what timing you expect, but if you want to count the parent's clock ticks from start to end, and the child's clock ticks from just after fork() to its end, then modify your code to overwrite start_time with a fresh clock() value in the child process. Or you can simply set start_time to 0 there.

If start_time and end_time are 32-bit signed integers, they can only hold about 2147 seconds (about 35 minutes) before rolling over into negative numbers, because CLOCKS_PER_SEC is typically 1000000.

But it's worse than that: because you multiply the difference by 1000, any difference over about 2.147 seconds will overflow.

On the other hand, if they're not 32-bit integers, you're using the wrong printf format specifiers (and you're probably getting and ignoring a warning from the compiler), so you're seeing garbage.
