
C# floating point rounding behavior

I have been trying to solve a bug that was caused by floating point arithmetic and I reduced it to a simple piece of code that is causing the behavior I don't understand:

float one = 1;
float three = 3;

float result = one / three;
Console.WriteLine(result); // prints 0.33333

double back = three * result;

if (back > 1.0)
    Console.WriteLine("larger than one");
else if (back < 1.0)
    Console.WriteLine("less than one");
else
    Console.WriteLine("exactly one");

As result rounded to 0.33333, I would expect back to be less than 1; however, the output is "larger than one".

Can someone explain what is going on here?

When I tried the above code, I found that the statement

float result = one / three;

evaluates result as 0.333333343, not 0.33333 — the console merely prints it rounded as 0.33333. Then the statement

double back = three * result;

evaluates back as 1.0000000298023224, which is clearly greater than 1; that's why you get "larger than one".
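The same arithmetic can be reproduced outside C#. This Python sketch (not from the original answer) uses the struct module to round through single precision, and performs the final multiply in double precision — which is what the observed value 1.0000000298023224 implies the C# runtime did before the assignment to double:

```python
import struct

def to_float32(x):
    """Round a Python double to the nearest IEEE 754 single-precision value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

result = to_float32(1.0 / 3.0)
print(result)            # 0.3333333432674408 — not 0.33333
back = 3.0 * result      # multiply carried out in double precision
print(back)              # 1.0000000298023224
print(back > 1.0)        # True
```

If the multiply were instead rounded back to single precision, 1 + 2^-25 would round to exactly 1.0f — so whether you see "larger than one" or "exactly one" depends on the intermediate precision the runtime uses.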

Using IEEE 754 rounding, let's see what's going on.

In IEEE 754 single-precision floating point, the value of a finite number is given by:

(-1)^sign × 2^exponent × (1 + mantissa × 2^-23)

Where

  • sign is 0 if positive, otherwise 1;
  • exponent is a value between -126 and 127 (-127 and 128 are special); and
  • mantissa is a value between 0 and 8388607 (because it is a 23-bit integer).
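As a sanity check (not part of the original answer), the three fields can be read straight out of the stored bits. A minimal Python sketch, assuming little-endian byte order for the struct round trip:

```python
import struct

# Reinterpret the single-precision bits of 1/3 as a 32-bit unsigned integer
bits = struct.unpack('<I', struct.pack('<f', 1.0 / 3.0))[0]

sign     = bits >> 31                    # top bit
exponent = ((bits >> 23) & 0xFF) - 127   # 8-bit field, biased by 127
mantissa = bits & 0x7FFFFF               # low 23 bits

print(sign, exponent, mantissa)          # 0 -2 2796203
```

This confirms the decomposition used below: sign 0, exponent -2, mantissa 2796203.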

If we substitute sign with 0 and exponent with -2, then we're guaranteed a value between 0.25 and 0.5. Why?

1 × 2^-2

is ¼. The value of

1 + mantissa × 2^-23

is guaranteed to be between 1 and 2, so that's our sign and exponent sorted.


Moving on, we can work out fairly quickly that there are two values which can be used as the mantissa value: 2796202 and 2796203.

Substituting each in, we get the following two values (one lower, one higher):

  • 0.333333313465118408203125 (for mantissa = 2796202)
  • 0.3333333432674407958984375 (for mantissa = 2796203)
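Both candidates can be recomputed directly from the formula. A small Python check (the decimal module prints the exact decimal expansion of each double, since both values are exactly representable):

```python
from decimal import Decimal

# Substitute each candidate mantissa into (1 + mantissa × 2^-23) × 2^-2
values = [(1 + m * 2**-23) * 2**-2 for m in (2796202, 2796203)]
exact = [str(Decimal(v)) for v in values]
print(exact[0])   # 0.333333313465118408203125
print(exact[1])   # 0.3333333432674407958984375
```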

The binary significand of the exact value ⅓, truncated to its first 22 digits, is:

1010101010101010101010...

As the next digit would be 1 (with more nonzero digits after it), round-to-nearest rounds the mantissa up, not down. For this reason, the higher candidate carries a smaller error than the lower one:

  • 0.333333313465118408203125 − ⅓ ≈ −1.987 × 10^-8
  • 0.3333333432674407958984375 − ⅓ ≈ 9.934 × 10^-9

And since the stored value is larger than the exact ⅓, multiplying it back by 3 yields a result slightly greater than 1. That's why a value that looks rounded down in decimal is actually rounded up in binary — binary rounding sometimes goes in the opposite direction of decimal rounding.
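The two errors and the multiply-back can be verified with exact rational arithmetic; a quick sketch using Python's fractions module:

```python
from fractions import Fraction

# Exact values of the two candidates: (2^23 + mantissa) / 2^25
lower  = Fraction(2**23 + 2796202, 2**25)
higher = Fraction(2**23 + 2796203, 2**25)   # the value actually stored for 1f/3f
third  = Fraction(1, 3)

err_lower  = lower - third    # negative: the lower candidate undershoots ⅓
err_higher = higher - third   # positive, and smaller in magnitude
product    = 3 * higher       # exactly 1 + 2^-25, just above 1

print(float(err_lower), float(err_higher), float(product))
```

Because `product` is exactly 1 + 2^-25 (representable in a double but not in a float), the comparison `back > 1.0` is true whenever the product survives in double precision.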
