Measuring CPU time with clock()
Libc provides the clock() function for measuring the CPU time of a Linux process. I wonder whether this approach is still reliable/meaningful on modern computers. Why is CLOCKS_PER_SEC a constant? Why are 1e6 clock ticks per second assumed for every machine? Moreover, modern processors even scale their clock frequency.
No, it's not relevant anymore; it was poorly designed when it was added, and it no longer fits its intended use case. Many of the details are frozen to keep backwards compatibility.
Use POSIX.1-2001 clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts) instead. You can theoretically reach nanosecond precision with it.
Let me share my understanding. CLOCKS_PER_SEC has no direct relation to the CPU clock. Imagine an abstract timer configured with a frequency of 1,000,000 ticks per second. That frequency is very low and can be achieved practically everywhere by dividing MAIN_CLK. If your OS wants to support the POSIX clock() call, it must implement a clock() syscall that returns the number of ticks of this low-frequency timer. That is why you can measure only with 1 ms granularity with clock().