
Slightly different result from exp function on Mac and Linux

The following C program produces different results on my Mac and on Linux. I'm surprised, because I assumed that the implementation of libm is somehow standardized.

#include <math.h>
#include <stdio.h>

int main()
{
  double x18 = -6.899495205106946e+01;
  double x19 = exp(-x18);
  printf("x19     = %.15e\n", x19);
  printf("x19 hex = %llx\n", *((unsigned long long *)(&x19)));
}

The output on Mac is

x19     = 9.207186811339878e+29
x19 hex = 46273e0149095886

and on Linux

x19     = 9.207186811339876e+29
x19 hex = 46273e0149095885

Both were compiled without any optimization flags as follows:

gcc -lm ....

I know that I should never compare floats for exact equality.

This issue came up during debugging. Regrettably, the algorithm using this calculation proves to be numerically unstable, and this slight difference leads to significant deviations in the final result. But that is a different problem.

I'm just surprised that a basic operation such as exp is not standardized in the way I can expect for the basic algebraic operations specified by IEEE 754.

Are there any assumptions about precision I can rely on across different implementations of libm, whether on different machines or in different versions?


Because of the discussion below, I used mpmath to compute the value with higher than machine precision, and with a few extra digits I get 9.2071868113398768244e+29, so for both of my results the last digit is already off. The Linux result can be explained by rounding this value down; the Mac result is off even if the library rounds up.
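
For reference, a minimal sketch of that higher-precision computation with mpmath (the precision setting of 25 digits and the number of printed digits are my own choices):

import mpmath

mpmath.mp.dps = 25                          # work with 25 significant decimal digits
x18 = mpmath.mpf(-6.899495205106946e+01)    # exact value of the double input
x19 = mpmath.exp(-x18)
print(mpmath.nstr(x19, 21))                 # reference value with extra digits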

The C99 specification states (other versions should be similar):

J.3 Implementation-defined behavior

1 A conforming implementation is required to document its choice of behavior in each of the areas listed in this subclause. The following are implementation-defined:

...

J.3.6 Floating point

— The accuracy of the floating-point operations and of the library functions in <math.h> and <complex.h> that return floating-point results (5.2.4.2.2).

Meaning GNU libm and BSD libm are free to have different levels of accuracy. What is likely happening is that the BSD implementation on OS X rounds to the nearest ULP (unit in the last place), and the GNU implementation truncates down to the next ULP.
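
To see how close the two results actually are, here is a small sketch that decodes the two bit patterns printed in the question (assuming Python 3.9 or later for math.ulp):

import math
import struct

# Decode the two bit patterns printed by the C program (values from the question).
mac   = struct.unpack('>d', bytes.fromhex('46273e0149095886'))[0]
linux = struct.unpack('>d', bytes.fromhex('46273e0149095885'))[0]

print(mac - linux)                       # absolute difference between the two results
print(math.ulp(linux))                   # size of one ULP at this magnitude
print(mac - linux == math.ulp(linux))    # True: the two doubles are adjacent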

IEEE-754 behavior is specified at the binary level. Using Linux, I get identical values from Python's native math library, mpmath, and MPFR (via gmpy2). However, conversion to decimal varies between the three methods.

>>> import mpmath, gmpy2, math
>>> x18=68.99495205106946
>>> x19=math.exp(x18)
>>> mp18=mpmath.mpf("68.99495205106946")
>>> mp19=mpmath.exp(mp18)
>>> gp18=gmpy2.mpfr("68.99495205106946")
>>> gp19=gmpy2.exp(gp18)
>>> x18 == mp18
True
>>> x18 == gp18
True
>>> x19 == mp19
True
>>> x19 == gp19
True
>>> print(x18, mp18, gp18)
68.99495205106946 68.9949520510695 68.994952051069461
>>> print(x19, mp19, gp19)
9.207186811339876e+29 9.20718681133988e+29 9.2071868113398761e+29

After conversion to Python's arbitrary-precision integer form, all three results are also identical.

>>> hex(int(x19))
'0xb9f00a484ac42800000000000'
>>> hex(int(mp19))
'0xb9f00a484ac42800000000000'
>>> hex(int(gp19))
'0xb9f00a484ac42800000000000'
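
As a cross-check, the bit pattern of the Python result can be compared directly against the hex value printed by the C program in the question; a minimal sketch (the expected output is taken from the question's Linux run):

import math
import struct

x19 = math.exp(68.99495205106946)
print(struct.pack('>d', x19).hex())   # expected on Linux: 46273e0149095885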

So (at least one) Linux math library, mpmath, and gmpy2.mpfr agree.

Disclaimer: I maintain gmpy2 and have contributed to mpmath in the past.
