

How to measure execution time in C on Linux

I am working on encryption of real-time data. I have developed the encryption and decryption algorithms. Now I want to measure their execution time on the Linux platform in C. How can I correctly measure it? I have tried it as below:

    gettimeofday(&tv1, NULL);
    /* Algorithm implementation code */
    gettimeofday(&tv2, NULL);

    Total_Runtime = (tv2.tv_usec - tv1.tv_usec) +
                    (tv2.tv_sec - tv1.tv_sec) * 1000000;

which gives me the time in microseconds. Is this the correct way of measuring time, or should I use some other function? Any hint will be appreciated.

Read time(7). You probably want to use clock_gettime(2) with CLOCK_PROCESS_CPUTIME_ID or CLOCK_MONOTONIC. Or you could just use clock(3) (for the CPU time in microseconds, since CLOCKS_PER_SEC is always a million).
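For example, a minimal sketch of the clock_gettime(2) approach (the encryption call is only a placeholder; on older glibc you may need to link with -lrt):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    /* CLOCK_PROCESS_CPUTIME_ID counts CPU time consumed by this process;
       use CLOCK_MONOTONIC instead for wall-clock elapsed time. */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);

    /* ... your encryption code here ... */

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("CPU time: %.9f s\n", elapsed);
    return 0;
}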

If you want to benchmark an entire program (executable), use the time(1) command, for example time ./myprogram.

clock(): The value returned is the CPU time used so far, as a clock_t.

Logic

Get the CPU time at the beginning and at the end of the program. The difference is what you want.

Code

clock_t begin = clock();

/****  code ****/

clock_t end = clock();
double time_spent = (double)(end - begin);  /* in clock ticks */

To get the number of seconds used, we divide the difference by CLOCKS_PER_SEC.
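Put together, a minimal self-contained sketch (the loop is only a stand-in workload for the real algorithm):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t begin = clock();

    /* Stand-in workload; replace with your encryption routine. */
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        sum += i;

    clock_t end = clock();

    double seconds = (double)(end - begin) / CLOCKS_PER_SEC;
    printf("CPU time: %f s\n", seconds);
    return 0;
}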

More accurate

In C11, timespec_get() provides time measurement in the range of nanoseconds. But the accuracy is implementation defined and can vary.
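A sketch of that interface, assuming a C11 library that provides timespec_get (glibc 2.16 or later). Note that TIME_UTC gives wall-clock time, not CPU time:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    /* TIME_UTC is the only time base C11 guarantees;
       its resolution is implementation defined. */
    timespec_get(&start, TIME_UTC);

    /* ... code to measure ... */

    timespec_get(&end, TIME_UTC);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("Elapsed: %.9f s\n", elapsed);
    return 0;
}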

Measuring the execution time of a proper encryption code is simple, although a bit tedious. The runtime of good encryption code is independent of the input: no matter what you throw at it, it always needs the same number of operations per chunk of input. If it doesn't, you have a problem called a timing attack.

So the only thing you need to do is to unroll all loops, count the opcodes, and multiply each opcode by its number of clock ticks to get the exact runtime. There is one problem: some CPUs take a variable number of clock ticks for some of their operations, and you might have to change those to operations with a fixed number of clock ticks. A pain in the behind, admitted.

If the single thing you want to know is whether the code runs fast enough to fit into a slot of your real-time OS, you can simply take the maximum and pad the shorter cases with NOOPs (your RTOS might have a routine for that).
