I have a simple program that uses chrono for timing, which I had ported from MSVC to Code::Blocks. The program displays the delta time since it was started, to 16 decimal places. After getting it to compile, I noticed that the timer was only advancing in the first 6 decimal places. The code is unchanged, still using std::chrono::high_resolution_clock::now(); for the time, and then calculating the delta time with

double localDeltaTime = std::chrono::duration_cast<std::chrono::nanoseconds>(m_EndTime - m_StartTime).count();
localDeltaTime = localDeltaTime / 1000000000.0;

This should produce nanosecond timings, yet GCC only seems to deliver microsecond resolution. Is this a known issue?
Here is an MRE:
#include <chrono>
#include <iostream>
#include <iomanip>

int main()
{
    std::chrono::high_resolution_clock::time_point start = std::chrono::high_resolution_clock::now();
    std::chrono::high_resolution_clock::time_point finish = start;
    while (true)
    {
        finish = std::chrono::high_resolution_clock::now();
        long double deltaTime = std::chrono::duration_cast<std::chrono::nanoseconds>(finish - start).count();
        deltaTime /= 1000000000.0;
        std::cout << std::setprecision(25) << deltaTime << std::endl;
    }
    return 0;
}
That's a known MinGW issue, #5086.
It mentions these possible workarounds (a sketch of the QueryPerformanceCounter approach follows the list):

- std::chrono::steady_clock
- QueryPerformanceCounter (Win32 API)
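Here is a minimal sketch of the QueryPerformanceCounter approach, assuming Windows; the helper name qpcSeconds is mine, not from the issue:

#include <windows.h>

// Windows-only: read the high-resolution performance counter and
// convert ticks to seconds. The counter frequency is fixed at boot,
// so it is queried once and cached.
double qpcSeconds()
{
    static const long long frequency = [] {
        LARGE_INTEGER f;
        QueryPerformanceFrequency(&f); // ticks per second
        return f.QuadPart;
    }();
    LARGE_INTEGER counter;
    QueryPerformanceCounter(&counter); // current tick count
    return static_cast<double>(counter.QuadPart) / frequency;
}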
Regarding MSVC:
First of all, on Windows, the best possible user-space timer resolution is 100 ns.
In MSVC, system_clock and steady_clock both support this resolution, so you should see 7 decimal digits changing.
But writing to std::cout takes a long time, on the order of 1 ms. That's why you're seeing large time steps in your version: you're essentially measuring std::cout time.
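If you want to verify that claim on your own machine, here's a minimal sketch (the exact cost depends on the console and on whether the stream is flushed):

#include <chrono>
#include <iostream>

int main()
{
    auto start = std::chrono::steady_clock::now();
    std::cout << 0.0 << std::endl; // std::endl also flushes, which adds to the cost
    auto finish = std::chrono::steady_clock::now();
    // Report the elapsed time on std::cerr so it isn't part of what we measured
    std::cerr << std::chrono::duration_cast<std::chrono::microseconds>(finish - start).count()
              << " us for one std::cout line\n";
}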
I rewrote the test to print the minimum change in the time point:
#include <chrono>
#include <iostream>
#include <iomanip>
#include <typeinfo>

using namespace std::literals;

template<class Clock>
void runTest() {
    std::cout << typeid(Clock).name() << '\n';
    for (int i = 0; i < 5; i++) {
        auto start = Clock::now();
        // Spin until the clock reports a different time point; the
        // difference is the smallest step the clock can resolve.
        for (;;) {
            auto finish = Clock::now();
            if (finish != start) {
                std::cout << std::fixed << std::setprecision(9)
                          << (finish - start) / 1.0s << '\n';
                break;
            }
        }
    }
}

int main() {
    runTest<std::chrono::system_clock>();
    runTest<std::chrono::steady_clock>();
    runTest<std::chrono::high_resolution_clock>();
}
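One cosmetic note: GCC's typeid(...).name() returns mangled names, which is why the MinGW output below looks different from the MSVC output. If you want readable names there too, here's a sketch using GCC's abi::__cxa_demangle (GCC/Clang only; MSVC already returns readable names):

#include <cxxabi.h>
#include <cstdlib>
#include <string>

// Turn a mangled typeid name into a readable one; falls back to the
// mangled input if demangling fails.
std::string demangle(const char* name)
{
    int status = 0;
    char* readable = abi::__cxa_demangle(name, nullptr, nullptr, &status);
    std::string result = (status == 0 && readable) ? readable : name;
    std::free(readable);
    return result;
}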
And here's the output I got:
MSVC 19.28 (VS 2019):
struct std::chrono::system_clock
0.000000200
0.000000100
0.000000100
0.000000100
0.000000100
struct std::chrono::steady_clock
0.000000100
0.000000100
0.000000100
0.000000200
0.000000100
struct std::chrono::steady_clock
0.000000200
0.000000100
0.000000100
0.000000100
0.000000100
MinGW-w64 GCC 10.2.0 (Rev1, Built by MSYS2 project):
NSt6chrono3_V212system_clockE
0.000999200
0.000998600
0.000999900
0.001000800
0.000999400
NSt6chrono3_V212steady_clockE
0.000000100
0.000000100
0.000000100
0.000000100
0.000000100
NSt6chrono3_V212system_clockE
0.000999900
0.001001900
0.001006200
0.001016600
0.000980700
So in the case of MinGW we can see that at least steady_clock provides 100 ns resolution, but unfortunately high_resolution_clock is an alias of system_clock, as the typeid names above show. (On MSVC, high_resolution_clock is instead an alias of steady_clock, which is why the MSVC output lists steady_clock twice.)
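Given that, the practical fix for the original program is to switch to steady_clock. A minimal sketch, which also replaces the duration_cast-plus-divide with chrono's duration<double> (same math, less code):

#include <chrono>
#include <iostream>
#include <iomanip>

int main()
{
    // Same structure as the MRE above, but with steady_clock, which
    // the measurements show has 100 ns resolution on both MSVC and MinGW.
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 5; ++i)
    {
        auto finish = std::chrono::steady_clock::now();
        // duration<double> converts to seconds directly, no manual division
        double deltaTime = std::chrono::duration<double>(finish - start).count();
        std::cout << std::fixed << std::setprecision(9) << deltaTime << '\n';
    }
}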