
Floating point arithmetic in Python

>>> .1 + .1 + .1 + .1 == .4
True
>>> .1 + .1 + .1 == .3
False
>>> 

The above is output from the Python interpreter. I understand that floating-point arithmetic is done in base 2 and values are stored internally in binary, which is why differences like the above arise.
Now I found that 0.4 = .011(0011) [the digits in parentheses repeat infinitely; this is the binary representation of the fraction]. Since this cannot be stored exactly, an approximate value is stored instead.
Similarly, 0.3 = .01(0011).
So neither 0.4 nor 0.3 can be stored exactly internally.
But then what's the reason for Python to return True for the first comparison and False for the second, given that neither value is exact and so neither should compare equal?
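As a quick sketch (not part of the original question), the repeating binary expansions claimed above can be verified by repeatedly doubling the exact fraction and reading off the integer part; `Fraction` avoids introducing any rounding of its own. The helper name `binary_digits` is made up for illustration.

```python
from fractions import Fraction

def binary_digits(x, n=20):
    # Repeatedly double the exact fraction; each integer part that pops
    # out is the next binary digit after the point.
    digits = []
    for _ in range(n):
        x *= 2
        d = int(x)
        digits.append(str(d))
        x -= d
    return ''.join(digits)

print(binary_digits(Fraction(2, 5)))   # 0.4 -> 0110 0110 0110 ... repeating
print(binary_digits(Fraction(3, 10)))  # 0.3 -> 0100 1100 1100 ... repeating
```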
_______________________________________________________________________________
I did some research and found the following:

>>> Decimal(.4)
Decimal('0.40000000000000002220446049250313080847263336181640625')
>>> Decimal(.1+.1+.1+.1)
Decimal('0.40000000000000002220446049250313080847263336181640625')
>>> Decimal(.1+.1+.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(.3)
Decimal('0.299999999999999988897769753748434595763683319091796875')
>>> Decimal(.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')

This probably explains why the additions behave the way they do, assuming that Decimal shows the exact value of the number stored underneath.

But then what's the reason for Python to return True for the first and False for the second, given that neither value is stored exactly?

Floating-point numbers absolutely can be compared for equality. Problems arise only when you expect exact equality to be preserved by an approximate computation. The semantics of floating-point equality comparison is perfectly well defined.
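To illustrate the distinction (this example is not from the original answer): `==` on floats is well defined but unforgiving of rounding error accumulated during a computation, while a tolerance-based comparison such as the standard library's `math.isclose` accepts results that agree within a relative tolerance.

```python
import math

# Exact equality: the sum carries one ulp of rounding error, so it fails.
print(0.1 + 0.1 + 0.1 == 0.3)              # False

# Tolerance-based comparison: the two values agree to ~16 digits.
print(math.isclose(0.1 + 0.1 + 0.1, 0.3))  # True
```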

When you write 0.1 in a program, this is rounded to the nearest IEEE 754 binary64 floating-point number, which is the real number 0.1000000000000000055511151231257827021181583404541015625, or 0x1.999999999999ap−4 in hexadecimal notation (the 'p−4' part means × 2⁻⁴). Every (normal) binary64 floating-point number is a real number of the form ±2ⁿ × (1 + m/2⁵²), where n and m are integers with −1022 ≤ n ≤ 1023 and 0 ≤ m < 2⁵²; this one is the nearest such number to 0.1.
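Both representations quoted above can be checked directly in Python: `float.hex` prints the exact significand and exponent, and `Decimal` prints the exact decimal value of the stored double.

```python
from decimal import Decimal

# The literal 0.1 is rounded to the nearest binary64 value.
print((0.1).hex())   # 0x1.999999999999ap-4
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```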

When you add that to itself three times in floating-point arithmetic, the exact result 0.3000000000000000166533453693773481063544750213623046875 is rounded to 0.3000000000000000444089209850062616169452667236328125 or 0x1.3333333333334p−2 (since there are only 53 bits of precision available), but when you write 0.3, you get 0.299999999999999988897769753748434595763683319091796875 or 0x1.3333333333333p−2, which is slightly closer to 0.3.
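The two hexadecimal values above can be confirmed in the interpreter: the sum and the literal round to adjacent doubles, one ulp apart.

```python
# The running sum rounds up to ...4p-2; the literal 0.3 rounds down to ...3p-2.
a = 0.1 + 0.1 + 0.1
b = 0.3
print(a.hex())  # 0x1.3333333333334p-2
print(b.hex())  # 0x1.3333333333333p-2
print(a == b)   # False
```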

However, four times 0.1000000000000000055511151231257827021181583404541015625 or 0x1.999999999999ap−4 is 0.40000000000000002220446049250313080847263336181640625 or 0x1.999999999999ap−2, which is also the closest floating-point number to 0.4 and hence is what you get when you write 0.4 in a program. So when you write 4*0.1, the result is exactly the same floating-point number as when you write 0.4.
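Again this is easy to confirm: the four-term sum, `4*0.1`, and the literal 0.4 are all the same double.

```python
# Multiplying the double nearest 0.1 by 4 only shifts the exponent
# (p-4 -> p-2), so the significand is unchanged.
a = 0.1 + 0.1 + 0.1 + 0.1
print(a.hex())                   # 0x1.999999999999ap-2
print((0.4).hex())               # 0x1.999999999999ap-2
print(a == 0.4, 4 * 0.1 == 0.4)  # True True
```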

Now, you didn't write 4*0.1; instead you wrote .1 + .1 + .1 + .1. But it turns out there is a theorem in binary floating-point arithmetic that x + x + x + x, that is, fl(fl(fl(x + x) + x) + x), always yields exactly 4x without rounding (except when it overflows), in spite of the fact that x + x + x, or fl(fl(x + x) + x) = fl(3x), may be rounded and not exactly equal to 3x. (Note that fl(x + x) = fl(2x) is always equal to 2x, again ignoring overflow, because it's just a matter of adjusting the exponent.)

It just happens that any rounding error committed by adding the fourth term cancels out whatever rounding error may have been committed by adding the third!
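An empirical spot-check of the theorem (a sketch, not a proof) is straightforward: for randomly chosen doubles x, the chained sum x + x + x + x comes out exactly equal to 4*x, as long as nothing overflows.

```python
import random

# Sample doubles well inside the finite range so 4*x cannot overflow,
# then check that the chained sum matches the exact product every time.
random.seed(12345)
samples = [random.uniform(-1e300, 1e300) for _ in range(100_000)]
print(all(x + x + x + x == 4 * x for x in samples))  # True
```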
