
Are std::greater<double> and std::less<double> safe to use?

When comparing double values in C++ with the <, >, ==, != operators, we cannot always be sure about the correctness of the result. That's why we use other techniques to compare doubles; for example, we can compare two doubles a and b by testing whether their difference is really close to zero. My question is: does the C++ standard library implement std::less<double> and std::greater<double> using these techniques, or does it just use the unsafe comparison operators?

You can be 100% sure about the correctness of the result of those operators. It's just that a prior calculation may have resulted in truncation, because the precision of a double is not endless. So the operators are perfectly fine; it is your operands that may not be what you expected them to be.

So it does not matter what you use for comparison.
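
For illustration, here is a minimal sketch (assuming IEEE-754 doubles): the comparison itself is exact, but the operands 0.1 and 0.2 already carry rounding error from their decimal-to-binary conversion, so the "surprising" outcome comes from the operands, not from the operator.

  #include <cstdio>

  int main()
  {
      double a = 0.1 + 0.2;   // sum of the closest doubles to 0.1 and 0.2
      double b = 0.3;         // closest double to 0.3

      // The comparisons below are carried out exactly on the stored bit patterns;
      // a really is slightly larger than b, so the operators report just that.
      std::printf("a > b  : %d\n", a > b);    // prints 1
      std::printf("a == b : %d\n", a == b);   // prints 0
  }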

They use the standard operators. Here is the definition of std::greater in the stl_function.h header file (libstdc++):

  template<typename _Tp>
    struct greater : public binary_function<_Tp, _Tp, bool>
    {
      bool
      operator()(const _Tp& __x, const _Tp& __y) const
      { return __x > __y; }
    };

operator< and operator> do give the correct result, at least as far as possible. However, there are some fundamental problems involved in using floating point arithmetic, especially with double. These are not reduced by using the comparison functions you mention, as they are inherent to the floating point representation used by current CPUs.

As for the functions std::less / std::greater: they are just packaged versions of the standard operators, intended to be used when a binary predicate is needed in STL algorithms.
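
For example (a minimal sketch of the typical use), std::greater<double> is handed to an algorithm as its comparison predicate:

  #include <algorithm>
  #include <functional>
  #include <iostream>
  #include <vector>

  int main()
  {
      std::vector<double> v{3.5, 1.25, 2.75};

      // std::greater<double> simply forwards to operator>, giving a descending sort.
      std::sort(v.begin(), v.end(), std::greater<double>());

      for (double d : v)
          std::cout << d << ' ';    // 3.5 2.75 1.25
      std::cout << '\n';
  }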

A double value has a 64-bit representation, whereas the Intel CPUs' original "double" arithmetic (x87) is done in 80 bits. Sounds good at first to get some more precision "for free", but it also means that the result depends on whether the compiler lets the code use intermediate results directly from the FPU registers (in 80 bits) or from the values written back to memory (rounded to 64 bits). This kind of optimization is completely up to the compiler and isn't defined by any standard.
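
As a hedged sketch of how this can surface (whether it actually does depends on the compiler, the optimization level, and whether code generation targets x87 or SSE2, e.g. via GCC's -mfpmath option):

  #include <iostream>

  int main()
  {
      double third = 1.0 / 3.0;
      double sum = third + third + third;   // may be evaluated in 80-bit x87 registers

      volatile double stored = sum;         // forces a round trip through a 64-bit memory slot

      // With x87 extended precision this comparison has been known to print 0,
      // because "sum" may still hold the 80-bit intermediate while "stored" was
      // rounded to 64 bits; with SSE2 code generation both are 64-bit and it prints 1.
      std::cout << (sum == stored) << '\n';
  }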

To make things more complex, modern compilers can also make use of the newer vector instructions (MMX / SSE), which again work on 64 bits only. The problems described above do not appear in this context. However, it depends on the compiler whether it makes use of these instructions for floating point arithmetic.

Comparisons for less/greater of almost equal values will always suffer when the difference is only in the last bits of the mantissa -- they are always subject to truncation errors, and you should make sure that your program does not critically rely on the result of a comparison of very close values. You can, for example, consider them equal when their relative difference is below a threshold, e.g. if (fabs(a - b)/a < factor*DBL_EPSILON) { /* EQUAL */ }. DBL_EPSILON is defined in float.h, and factor depends on how many mathematical operations with possible truncation/rounding have been made previously, and should be tested thoroughly. I've been safe with values around factor=16..32, but your mileage may vary.
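
Wrapped up as a small helper, the test above might look like this (a sketch; the fabs(a) scaling and the near-zero fallback are additions to keep the expression well-defined when a is negative or zero):

  #include <cfloat>   // DBL_EPSILON
  #include <cmath>    // std::fabs

  // Relative-epsilon comparison as described above; "factor" accounts for
  // rounding accumulated by earlier operations (values around 16..32 are suggested).
  bool almost_equal(double a, double b, double factor)
  {
      double scale = std::fabs(a);
      if (scale == 0.0)
          return std::fabs(b) < factor * DBL_EPSILON;   // absolute test near zero
      return std::fabs(a - b) / scale < factor * DBL_EPSILON;
  }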

From cppreference, it says:

Uses operator< on type T.

Meaning that unless a "proper" comparison is supplied by you (and since operator< and operator> cannot be overloaded for the built-in double type, in practice that means writing your own comparator), there is going to be NO such approximate comparison when you use std::less or std::greater.

IOW you can use std::greater or std::less, but they will perform the standard comparison unless you supply your own comparison that compares doubles or floats "properly", e.g. by treating their difference as insignificant when it is less than std::numeric_limits<double>::epsilon().
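
A sketch of what such a comparator could look like (the name ApproxLess and the tolerance handling are illustrative only; note that a tolerance-based ordering is not a strict weak ordering in general, so be careful when using it for sorting or in ordered containers):

  #include <cmath>
  #include <limits>

  // Illustrative stand-in for std::less<double> that treats values within
  // "tol" (relative) of each other as equivalent instead of ordering them.
  struct ApproxLess
  {
      double tol = std::numeric_limits<double>::epsilon();

      bool operator()(double a, double b) const
      {
          return a < b - tol * std::fabs(b);
      }
  };

It can then be passed wherever a binary predicate is expected, e.g. std::sort(v.begin(), v.end(), ApproxLess{}).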
