
Overflow in std::chrono::duration for remaining time measurement

I would like to know how much time is remaining until an event happens. I'm using boost::chrono or std::chrono for that.

Basically, the algorithm looks like this:

using namespace std::chrono; // Just for this example
auto start = steady_clock::now();
seconds myTotalTime(10);
while (true) {
  auto remaining = start + myTotalTime - steady_clock::now();
  seconds remainingSecs = duration_cast<seconds>(remaining);
  if (remainingSecs.count() <= 0) return true;
  else print("Remaining time = " + remainingSecs.count()); // Pseudo-code!
}

Now the following could (theoretically) happen: start is near the end of the representable range of the clock (AFAIK there is no specification of what 0 means, so it could be arbitrary).
Then start + myTotalTime could overflow, and the subtraction could underflow.

Even simple things like steady_clock::now() - start could underflow.

For unsigned types this is not a problem. If they were unsigned, the standard would guarantee that I still get the correct number of "units" for steady_clock::now() - start even when now() has wrapped around and the subtraction underflows: 10 - 250 = 10 + 256 - 250 = 16 in 8-bit unsigned math.

But AFAIK over/underflow for signed types is undefined behaviour.

Am I missing anything? Why are the duration and especially time_points defined with signed instead of unsigned types?

Here is a short program you can use to see how much range is left (relative to now) in steady_clock for any platform:

#include <chrono>
#include <iostream>

int
main()
{
    using namespace std::chrono;
    using namespace std;
    using days = duration<int, ratio<86400>>;
    using years = duration<double, ratio_multiply<ratio<146097, 400>, days::period>>;
    cout << years{steady_clock::time_point::max() -
                  steady_clock::now()}.count() << " years to go\n";
}

This will output the number of years from now that steady_clock will overflow. After running this a few times on a few different platforms, you should get a warm fuzzy feeling that unless your program is going to be running for more than a couple hundred years, you don't need to worry.

On my platform (and this is common), steady_clock is measuring the time since boot in units of nanoseconds.

For me this program outputs:

292.127 years to go

Any platform and any clock that can't represent now() without overflow is not likely to be widely adopted.

Why are the duration and especially time_points defined with signed instead of unsigned types?

  • To make duration and time_point subtraction less error prone. Yes, if you subtract unsigned_smaller - unsigned_larger you do get a well-defined result, but that result is not likely to be the answer the programmer is expecting (except in the example of time_point + duration you give).

  • So that datetimes prior to 1970 can be represented with system_clock::time_point. Though not specified, it is a de-facto standard that system_clock measures Unix Time (at various precisions), and that requires negative values to represent times prior to 1970-01-01 00:00:00 UTC. I am currently attempting to standardize this existing practice.

Instead, signed integral representation is specified with a sufficient number of bits in an attempt to make overflow a rare problem. Nanosecond precision is guaranteed to have +/- 292 years of range. Coarser precisions will have more range than that. And custom representations are allowed when this default is not sufficient.

I don't see the problem. You choose the units such that they can't overflow for your needs, surely?

On my system (yes, I know, but) std::chrono::seconds is defined as int64_t, so assuming a sensible epoch is chosen (!= big bang), it won't overflow any time soon.

The other duration aliases use 64-bit reps too, but nanosecond precision uses up about 30 of those bits, leaving a real chance of overflow. The remaining 33 bits of seconds-equivalent range give /only/ 270 years or so.

time_point internally uses a duration, as it is a duration since the epoch. In principle, if your epoch is 1970 (Unix), you may want to refer to dates before then, so it has to be signed. If your epoch is 1601 (Windows), you clearly can't use 64-bit nanoseconds to represent current time points, which is a portability issue.

You can define your own duration using double, of course, where range is exchanged for resolution, and I have done this in the past as a convenient way of printing decimal second timestamps.

Or you could define a larger integer class if you need both range and resolution.
