
Why does the runtime of high_resolution_clock increase the more frequently I call it?

In the following code, I repeatedly call std::chrono::high_resolution_clock::now twice and measure the time between the two calls. I would expect this time to be very small, since no other code runs between them. However, I observe strange behavior.

For small N, the max element is within a few nanoseconds, as expected. However, as I increase N, very large outliers appear; I have seen values as high as a few milliseconds. Why does this happen?

In other words, why does the max element of v increase as I increase N in the following code?

#include <iostream>
#include <vector>
#include <chrono>
#include <cstdint>
#include <algorithm>

int main()
{
    using ns = std::chrono::nanoseconds;

    uint64_t N = 10000000;
    std::vector<uint64_t> v(N, 0);
    for (uint64_t i = 0; i < N; i++) {
        // Time two back-to-back clock reads; nothing else runs in between.
        auto start = std::chrono::high_resolution_clock::now();
        v[i] = std::chrono::duration_cast<ns>(std::chrono::high_resolution_clock::now() - start).count();
    }

    std::cout << "max: " << *std::max_element(v.begin(), v.end()) << std::endl;

    return 0;
}

The longer you run your loop, the more likely it is that your OS will decide that your thread has consumed enough resources for the moment and suspend it. And the longer you run your loop, the more likely it is that this suspension will happen between those calls.

Since you're only looking at the "max" time, this only has to happen once to cause the max time to spike into the millisecond range.
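One way to see that these spikes are rare scheduling events rather than the clock itself slowing down is to report percentiles alongside the maximum. Below is a minimal sketch based on the question's loop; the particular percentiles shown (median and 99th) are illustrative choices, not part of the original code:

#include <iostream>
#include <vector>
#include <chrono>
#include <cstdint>
#include <algorithm>

int main()
{
    using ns = std::chrono::nanoseconds;

    uint64_t N = 10000000;
    std::vector<uint64_t> v(N, 0);
    for (uint64_t i = 0; i < N; i++) {
        auto start = std::chrono::high_resolution_clock::now();
        v[i] = std::chrono::duration_cast<ns>(std::chrono::high_resolution_clock::now() - start).count();
    }

    // Sorting makes percentiles trivial to read off. If preemption is the
    // cause, the median and p99 stay small while only the max spikes.
    std::sort(v.begin(), v.end());
    std::cout << "median: " << v[N / 2] << " ns\n";
    std::cout << "p99:    " << v[(N / 100) * 99] << " ns\n";
    std::cout << "max:    " << v[N - 1] << " ns\n";

    return 0;
}

If the suspension explanation is right, the median and 99th percentile should stay in the few-nanosecond range regardless of N, while only the max grows, because a millisecond-scale suspension only has to land between the two clock reads once.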
