
Measuring CPU Time with clock()

Libc provides the clock function for measuring the CPU time of a Linux process. I wonder whether this approach is still reliable/meaningful on modern computers. Why is CLOCKS_PER_SEC a constant? Why is a rate of 1e6 ticks per second assumed for every machine? Moreover, modern processors even scale their clock frequency.
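
For context, this is the typical usage pattern the question is about — a minimal sketch, with a placeholder busy loop standing in for real work:

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    /* Placeholder workload: burn some CPU time. */
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; ++i)
        sum += i;

    clock_t end = clock();

    /* clock() returns ticks; dividing by CLOCKS_PER_SEC converts them
     * to seconds. POSIX (XSI) fixes CLOCKS_PER_SEC at 1000000
     * regardless of the actual hardware clock frequency. */
    printf("CPU time: %f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}
```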

No, it's no longer particularly meaningful; clock() was poorly designed even when it was added, and it no longer fits its intended use case. Many of its details are frozen to keep backwards compatibility.

Use POSIX.1-2001 clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts) instead. It can, in principle, provide nanosecond resolution.
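
A minimal sketch of that approach (the busy loop is again just a placeholder workload; on glibc older than 2.17 you may also need to link with -lrt):

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);

    /* Placeholder workload: burn some CPU time. */
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; ++i)
        sum += i;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

    /* struct timespec carries seconds and nanoseconds separately. */
    double elapsed = (double)(end.tv_sec - start.tv_sec)
                   + (double)(end.tv_nsec - start.tv_nsec) / 1e9;
    printf("CPU time: %.9f s\n", elapsed);
    return 0;
}
```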

Let me share my understanding.

CLOCKS_PER_SEC has no direct relation to the CPU clock. Imagine an abstract timer configured to run at 1,000,000 ticks per second. That frequency is very low and can be derived on virtually any hardware by dividing down the main clock. If an OS wants to support the POSIX clock() call, it must implement it so that it returns the number of ticks of this low-frequency timer. That is why clock() gives you at best microsecond granularity.
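
One way to see the gap between this nominal tick rate and what the kernel actually provides is clock_getres() — a small sketch:

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Nominal rate of the abstract clock() timer: POSIX fixes it at
     * 1,000,000 ticks per second, i.e. 1 microsecond per tick. */
    printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);

    /* Actual resolution of the per-process CPU-time clock, as
     * reported by the kernel (often 1 ns on modern Linux). */
    struct timespec res;
    if (clock_getres(CLOCK_PROCESS_CPUTIME_ID, &res) == 0)
        printf("CLOCK_PROCESS_CPUTIME_ID resolution: %lld s %ld ns\n",
               (long long)res.tv_sec, res.tv_nsec);
    return 0;
}
```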
