
How to calculate the time in milliseconds taken by bubble sort, insertion sort and selection sort functions in C++

Which functions should I use to calculate the running time of a function? Is there any built-in function for this purpose?

You can use this code:

#include <chrono>
#include <iostream>
using namespace std;

auto a = chrono::steady_clock::now();

// write your code here (e.g. call your sort function)

auto b = chrono::steady_clock::now();
double elapsed_ms = chrono::duration<double, milli>(b - a).count();
cout << endl << "Time ---> " << elapsed_ms << " ms";
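For the concrete case in the question, a minimal self-contained sketch might look like this (the bubble_sort below is only a placeholder; substitute whichever sort you want to time):

#include <chrono>
#include <iostream>
#include <utility>
#include <vector>
using namespace std;

// Placeholder bubble sort; replace with your own bubble/insertion/selection sort.
void bubble_sort(vector<int>& v) {
    for (size_t i = 0; i + 1 < v.size(); ++i)
        for (size_t j = 0; j + 1 < v.size() - i; ++j)
            if (v[j] > v[j + 1]) swap(v[j], v[j + 1]);
}

int main() {
    vector<int> data(10000);
    for (size_t i = 0; i < data.size(); ++i)
        data[i] = static_cast<int>(data.size() - i);   // descending order: worst case

    auto a = chrono::steady_clock::now();
    bubble_sort(data);
    auto b = chrono::steady_clock::now();

    double elapsed_ms = chrono::duration<double, milli>(b - a).count();
    cout << "bubble sort time ---> " << elapsed_ms << " ms" << endl;
}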

There's nothing built in specifically for calculating the time taken by a function.

There are functions built in to retrieve the current time. See std::chrono::high_resolution_clock::now() for more details about that.

To time a function, you'd (at least normally) retrieve the current time just after entering the function, and again just before exiting.

You can then subtract the two, and use std::chrono::duration_cast to convert the result to a duration with whatever period is convenient (milliseconds, microseconds, or nanoseconds, as the case may be).

Putting those together, you could (for example) get something like this:

#include <chrono>
#include <iostream>
#include <string>
#include <utility>

template <typename F, typename ...Args>
auto timer(F f, std::string const &label, Args && ...args) {
    using namespace std::chrono;

    auto start = high_resolution_clock::now();
    auto holder = f(std::forward<Args>(args)...);   // run the function being timed
    auto stop = high_resolution_clock::now();
    std::cout << label << " time: "
              << duration_cast<microseconds>(stop - start).count() << "\n";

    return holder;
}

This lets you pass some arbitrary function (and arguments to be passed to that function) and prints out the time the function took to execute. At the moment it's doing microseconds, but changing that to milliseconds should be trivial.
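For example, swapping in milliseconds inside the timer would look something like this (just the obvious substitution, shown for illustration):

// Whole milliseconds:
std::cout << label << " time: "
          << duration_cast<milliseconds>(stop - start).count() << " ms\n";

// Or fractional milliseconds:
std::cout << label << " time: "
          << duration<double, std::milli>(stop - start).count() << " ms\n";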

At the end, this returns whatever value the called function produced, so if you had something like:

x = f(a, b, c);

You could replace that with:

x = timer(f, "f", a, b, c);

...to get the time consumed by f printed out.

I usually do not measure time in seconds, because the numbers differ between machines with different CPU frequencies. I typically count CPU cycles instead, so the result is roughly the same whether you run on a laptop or on an overclocked high-end server. Cycles also relate better to other machine characteristics such as CPU cache latencies. This is what the Linux kernel uses internally:

# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
tsc

Every CPU architecture has some sort of internal timestamp counter. On x86_64 it is the TSC (time stamp counter), which can be read with the compiler built-in `__builtin_ia32_rdtsc()` (GCC/Clang), as in:

#include <cstdint>

// Read the CPU's time stamp counter (a raw tick count, not seconds).
static inline std::uint64_t now() {
    return __builtin_ia32_rdtsc();
}

Then in your code you can do something like this:

#include <limits>

std::uint64_t t0 = now();
call_my_algo();
std::uint64_t t1 = now();
std::uint64_t ticks = t1 >= t0 ? t1 - t0
                               : (std::numeric_limits<std::uint64_t>::max() - t0) + t1;

There are a few things to be aware of. Each core has its own timestamp counter, so if they are unsynchronized you might get a wrong reading when the process migrates from one core to another, or the second timestamp might be lower than the first, so the difference would go negative and ticks would underflow (hence the wrap-around handling above). This is rare nowadays; it was more common on old machines.
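If migration between cores is a concern, one common Linux-specific workaround is to pin the measuring thread to a single core before timing. The pin_to_core helper below is only an illustrative sketch (it relies on glibc's sched_setaffinity; g++ defines _GNU_SOURCE by default, which makes cpu_set_t available):

#include <sched.h>    // sched_setaffinity, cpu_set_t (glibc)
#include <cstdio>

// Pin the calling thread to one core so every TSC read comes from the same counter.
static bool pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return sched_setaffinity(0, sizeof(set), &set) == 0;   // pid 0 = calling thread
}

// Usage sketch:
//   if (!pin_to_core(0)) std::perror("sched_setaffinity");
//   std::uint64_t t0 = now();
//   call_my_algo();
//   std::uint64_t t1 = now();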

You can check if the TSC on your machine is to be trusted with

cat /proc/cpuinfo | grep tsc | tr ' ' '\n' | grep tsc | sort | uniq
constant_tsc
nonstop_tsc
rdtscp
tsc
tsc_scale

constant_tsc - the timestamp counter frequency is constant
nonstop_tsc - the counter does not stop in C-states
rdtscp - the CPU has the rdtscp instruction (returns the timestamp and the core id)
tsc - the CPU has a timestamp counter
tsc_scale - AMD TSC scaling support
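If you ultimately need milliseconds rather than raw ticks, and constant_tsc is present, one rough approach (not part of the answer above, just a sketch that reuses the now() wrapper defined earlier) is to calibrate the tick rate once against std::chrono and then scale:

#include <chrono>
#include <cstdint>
#include <thread>

// Rough one-off calibration: TSC ticks per second, measured against steady_clock.
// Assumes the now() wrapper from above and a constant-rate TSC (constant_tsc).
static double ticks_per_second() {
    using namespace std::chrono;
    std::uint64_t t0 = now();
    auto c0 = steady_clock::now();
    std::this_thread::sleep_for(milliseconds(100));   // short calibration window
    std::uint64_t t1 = now();
    auto c1 = steady_clock::now();
    return static_cast<double>(t1 - t0) / duration<double>(c1 - c0).count();
}

// Then convert a measured tick count: double ms = ticks / ticks_per_second() * 1000.0;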

You should be careful that, whether you use std::chrono (which eventually calls clock_gettime() on Linux) or the TSC directly, the compiler might reorder your statements so that the code above becomes:

std::uint64_t t0 = now();
std::uint64_t t1 = now();
std::uint64_t ticks = t1 >= t0 ? t1 - t0
                               : (std::numeric_limits<std::uint64_t>::max() - t0) + t1;
call_my_algo();

And then you would be measuring nothing. One way to avoid this is to place an optimization barrier, like:

#include <cstdint>

// The empty asm statement with a "memory" clobber keeps the compiler
// from moving memory accesses across the timestamp read.
static inline std::uint64_t now() {
    asm volatile ( "" ::: "memory" );
    return __builtin_ia32_rdtsc();
}

There is also the matter of hardware memory barriers and out-of-order execution, which can get quite involved.
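For instance, one commonly used pattern (sketched here, not from the original answer) reads the counter with rdtscp, which waits for earlier instructions to execute before reading, and follows it with an lfence so later instructions cannot start before the read completes:

#include <cstdint>
#include <x86intrin.h>   // __rdtscp, _mm_lfence

// Serialized read: rdtscp waits for earlier instructions to execute before reading
// the counter, and the lfence stops later instructions from starting before the read.
static inline std::uint64_t now_serialized() {
    unsigned int aux;                  // receives IA32_TSC_AUX (identifies the core)
    std::uint64_t t = __rdtscp(&aux);
    _mm_lfence();
    return t;
}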
