
Measuring time in millisecond precision

My program is going to race different sorting algorithms against each other, both in time and space. I've got space covered, but measuring time is giving me some trouble. Here is the code that runs the sorts:

void test(short* n, short len) {
  short i, j, a[1024];

  for(i=0; i<2; i++) {         // Loop over each sort algo
    memused = 0;               // Initialize memory marker
    for(j=0; j<len; j++)       // Copy scrambled list into fresh array
      a[j] = n[j];             // (Sorting algos are in-place)
                               // ***Point A***
    switch(i) {                // Pick sorting algo
    case 0:
      selectionSort(a, len);
      break;                   // break so only one algo runs per pass
    case 1:
      quicksort(a, len);
      break;
    }
                               // ***Point B***    
    spc[i][len] = memused;     // Record how much mem was used
  }
}

(I removed some of the sorting algos for simplicity)

Now, I need to measure how much time the sorting algo takes. The most obvious way to do this is to record the time at Point A and subtract it from the time at Point B. But none of the C time functions I've tried are precise enough:

time() gives me time in seconds, but the algos are faster than that, so I need something more accurate.

clock() gives me CPU ticks since the program started, but it seems to round to the nearest 10,000, which is still not fine-grained enough.

The time shell command works well enough, except that I need to run over 1,000 tests per algorithm, and I need the individual time for each one.

I'm not sure exactly what getrusage() returns, but its granularity is also too coarse.

What I need is a time unit that is (significantly, if possible) smaller than the run time of the sorting functions, which is about 2 ms. So my question is: where can I get that?

gettimeofday() has microsecond resolution and is easy to use.

A pair of useful timer functions is:

#include <stdio.h>
#include <sys/time.h>

static struct timeval tm1;

static inline void start()
{
    gettimeofday(&tm1, NULL);
}

static inline void stop()
{
    struct timeval tm2;
    gettimeofday(&tm2, NULL);

    // Elapsed time in milliseconds: whole seconds plus the microsecond remainder
    unsigned long long t = 1000 * (tm2.tv_sec - tm1.tv_sec) + (tm2.tv_usec - tm1.tv_usec) / 1000;
    printf("%llu ms\n", t);
}
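
A minimal usage sketch: call start() at Point A and stop() at Point B of the test() function from the question:

start();                  // ***Point A***
selectionSort(a, len);    // the code being timed
stop();                   // ***Point B*** -- prints elapsed milliseconds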

For measuring time, use clock_gettime with CLOCK_MONOTONIC (or CLOCK_MONOTONIC_RAW if it is available). Where possible, avoid using gettimeofday. It is specifically deprecated in favor of clock_gettime, and the time returned from it is subject to adjustments from time servers, which can throw off your measurements.
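
A minimal sketch of that approach, assuming a POSIX system (the now_ns() helper name is just illustrative; older glibc versions need -lrt when linking):

#include <stdio.h>
#include <time.h>

// Monotonic timestamp in nanoseconds; unaffected by clock adjustments
static unsigned long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (unsigned long long)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
    unsigned long long t0 = now_ns();
    // ... code to time, e.g. one of the sorts ...
    unsigned long long t1 = now_ns();
    printf("%.3f ms\n", (t1 - t0) / 1e6);
    return 0;
}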

You can get the total user + kernel time (or choose just one) using getrusage as follows:

#include <sys/time.h>
#include <sys/resource.h>

double get_process_time() {
    struct rusage usage;
    if( 0 == getrusage(RUSAGE_SELF, &usage) ) {
        // Sum user + system CPU time and convert to fractional seconds
        return (double)(usage.ru_utime.tv_sec + usage.ru_stime.tv_sec) +
               (double)(usage.ru_utime.tv_usec + usage.ru_stime.tv_usec) / 1.0e6;
    }
    return 0;
}

I elected to create a double containing fractional seconds...

double t_begin, t_end;

t_begin = get_process_time();
// Do some operation...
t_end = get_process_time();

printf( "Elapsed time: %.6f seconds\n", t_end - t_begin );

The Time Stamp Counter could be helpful here:

static unsigned long long rdtsctime() {
    unsigned int eax, edx;
    unsigned long long val;
    // RDTSC places the 64-bit cycle counter in EDX:EAX
    __asm__ __volatile__("rdtsc" : "=a"(eax), "=d"(edx));
    val = edx;
    val = val << 32;
    val += eax;
    return val;
}
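
A usage sketch: the result is a raw cycle count, not wall time, so converting to milliseconds means dividing by the CPU frequency (the 2.4 GHz below is just an assumed figure):

unsigned long long c0 = rdtsctime();
quicksort(a, len);                       // the code being timed
unsigned long long c1 = rdtsctime();
printf("%llu cycles (~%.3f ms at an assumed 2.4 GHz)\n",
       c1 - c0, (c1 - c0) / 2.4e6);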

There are some caveats to this, though. The timestamps for different processor cores may differ, and changing clock speeds (due to power-saving features and the like) can cause erroneous results.
