
Operations on “double” and optimization in C

I recently analyzed an old piece of code compiled with VS2005 because of a difference in numerical behaviour between the "debug" (no optimizations) and "release" (/O2 /Oi /Ot options) builds. The (reduced) code looks like:

#include <math.h>
#include <stdio.h>

void f(double x1, double y1, double x2, double y2)
{
    double a1, a2, d;

    a1 = atan2(y1, x1);
    a2 = atan2(y2, x2);
    d = a1 - a2;
    if (d == 0.0) { // NOTE: I know that == on reals is "evil"!
        printf("EQUAL!\n");
    }
}

The function f is expected to print "EQUAL!" when invoked with identical pairs of values (e.g. f(1,2,1,2)), but this doesn't always happen in "release". Indeed, the compiler optimized the code as if it were d = a1 - atan2(y2,x2), removing the assignment to the intermediate variable a2 entirely. Moreover, it took advantage of the fact that the second atan2()'s result was already on the FPU stack, so it reloaded a1 onto the FPU and subtracted the two values there. The problem is that the FPU works at extended precision (80 bits) while a1 is "only" a double (64 bits), so storing the first atan2()'s result to memory actually lost precision. As a result, d contains the "conversion error" between extended and double precision.

I know perfectly well that identity (the == operator) with float/double should be avoided. My question is not about how to check proximity between doubles. My question is about how "contractual" an assignment to a local variable should be considered. From my "naive" point of view, an assignment should force the compiler to convert the value to the precision represented by the variable's type (double, in my case). What if the variables were "float"? What if they were "int" (weird, but legal)?

So, in short, what does the C standard say about such cases?

From my "naive" point of view, an assignment should force the compiler to convert the value to the precision represented by the variable's type (double, in my case).

Yes, this is what the C99 standard says. See below.

So, in short, what does the C standard say about such cases?

The C99 standard allows, in some circumstances, floating-point operations to be computed at a higher precision than that implied by the type: look for FLT_EVAL_METHOD and FP_CONTRACT in the standard; these are the two constructs related to excess precision. But I am not aware of any wording that could be interpreted as allowing the compiler to arbitrarily reduce the precision of a floating-point value from the computing precision to the type's precision. Under a strict interpretation of the standard, this should only happen at specific spots, such as assignments and casts, in a deterministic fashion.

The best resource is Joseph S. Myers's analysis of the parts of the standard relevant to FLT_EVAL_METHOD:

C99 allows evaluation with excess range and precision following certain rules. These are outlined in 5.2.4.2.2 paragraph 8:

Except for assignment and cast (which remove all extra range and precision), the values of operations with floating operands and values subject to the usual arithmetic conversions and of floating constants are evaluated to a format whose range and precision may be greater than required by the type. The use of evaluation formats is characterized by the implementation-defined value of FLT_EVAL_METHOD:

Joseph S. Myers goes on to describe the situation in GCC before the patch that accompanies his post. The situation was just as bad as it is in your compiler (and countless others):

GCC defines FLT_EVAL_METHOD to 2 when using x87 floating point. Its implementation, however, does not conform to the C99 requirements for FLT_EVAL_METHOD == 2, since it is implemented by the back end pretending that the processor supports operations on SFmode and DFmode:

  • Sometimes, depending on optimization, a value may be spilled to memory in SFmode or DFmode, so losing excess precision unpredictably and in places other than when C99 specifies that it is lost.
  • An assignment will not generally lose excess precision, although -ffloat-store may make it more likely that it does.

The C++ standard inherits the C99 library headers, including <float.h> (as <cfloat>), which is the header that defines FLT_EVAL_METHOD. For this reason you might expect C++ compilers to follow suit, but they do not seem to take the issue as seriously. Even G++ still does not support -fexcess-precision=standard, although it uses the same back end as GCC (which has supported this option since Joseph S. Myers's post and accompanying patch).
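For reference, this is roughly how the GCC option mentioned above is used (the file name f.c is a placeholder; -mfpmath=387 re-creates the x87 situation on a modern x86 machine, and -fexcess-precision=standard restores the C99 rounding rules for it):

```shell
# Compile with x87 arithmetic but C99-conforming excess-precision
# handling: assignments and casts round to the declared type.
gcc -std=c99 -O2 -mfpmath=387 -fexcess-precision=standard f.c -o f
```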
