
C# loss of precision when dividing doubles

I know this has been discussed time and time again, but I can't seem to get even the simplest example of a one-step division of doubles to produce the expected, unrounded outcome in C# - so I'm wondering if perhaps there's some compiler flag or something else strange I'm not thinking of. Consider this example:

double v1 = 0.7;
double v2 = 0.025;
double result = v1 / v2;

When I break after the last line and examine it in the VS debugger, the value of "result" is 27.999999999999996. I'm aware that I can resolve it by changing to "decimal," but that's not possible in the case of the surrounding program. Is it not strange that two low-precision doubles like this can't divide to the correct value of 28? Is the only solution really to Math.Round the result?

Is it not strange that two low-precision doubles like this can't divide to the correct value of 28?

No, not really. Neither 0.7 nor 0.025 can be exactly represented in the double type. The exact values involved are:

0.6999999999999999555910790149937383830547332763671875
0.025000000000000001387778780781445675529539585113525390625

Now are you surprised that the division doesn't give exactly 28? Garbage in, garbage out...
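If you want to see that drift from within C# itself, the round-trip ("G17") format specifier asks for enough digits to distinguish the stored double from its neighbours. A minimal sketch (printing the full 50-odd digit expansions above takes more work, but "G17" already shows that the stored values are not 0.7 and 0.025):

double v1 = 0.7;
double v2 = 0.025;

// "G17" prints enough digits to round-trip the double exactly,
// which exposes the representation error hidden by the default ToString().
Console.WriteLine(v1.ToString("G17"));        // 0.69999999999999996
Console.WriteLine(v2.ToString("G17"));        // 0.025000000000000001
Console.WriteLine((v1 / v2).ToString("G17")); // 27.999999999999996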

As you say, the right way to represent decimal numbers exactly is to use decimal . If the rest of your program is using the wrong type, that just means you need to work out which is higher: the cost of getting the wrong answer, or the cost of changing the whole program.
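For comparison, here is the same division done with decimal, which stores base-10 digits and therefore holds 0.7 and 0.025 exactly (a minimal sketch):

decimal d1 = 0.7m;
decimal d2 = 0.025m;
decimal result = d1 / d2;

Console.WriteLine(result); // prints 28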

Precision is always a problem when you are dealing with float or double .

It's a well-known issue in computer science, and every programming language is affected by it. An entire field, numerical analysis, is dedicated to minimizing these sorts of errors, which are mostly related to rounding.

For instance, take the following code. What would you expect? You would probably expect the answer to be 1, but that is not the case: you will get 0.9999907, because the representation error in 0.001f accumulates over the thousand additions.

        float v = 0.001f;   // 0.001 cannot be represented exactly as a float
        float sum = 0;
        for (int i = 0; i < 1000; i++)
        {
            sum += v;       // each addition carries a small rounding error
        }
        // sum ends up at roughly 0.9999907, not 1

It has nothing to do with how 'simple' or 'small' the double numbers are. Strictly speaking, neither 0.7 nor 0.025 can be stored as exactly those numbers in computer memory, so performing calculations on them may produce surprising results if you need high precision.

So yes, use decimal or round.
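If changing the surrounding program to decimal really isn't an option, rounding at the point where the value is consumed is the usual workaround. A sketch, assuming a tolerance of a few decimal places is acceptable for your use case:

double v1 = 0.7;
double v2 = 0.025;

double raw = v1 / v2;                 // 27.999999999999996
double rounded = Math.Round(raw, 6);  // 28 - the last-bit error is rounded away

Console.WriteLine(rounded);           // prints 28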

To explain this by analogy:

Imagine that you are working in base 3. In base 3, 0.1 represents the value that in decimal is 1/3, i.e. 0.333333333… recurring.

So you can EXACTLY represent 1/3 (decimal) in base 3, but you get rounding errors when trying to express it in decimal.

Well, you can get exactly the same thing with some decimal numbers: They can be exactly expressed in decimal, but they CAN'T be exactly expressed in binary; hence, you get rounding errors with them.
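A well-known binary-side illustration of the same effect (not part of the base-3 analogy above, just the classic example): 0.1, 0.2 and 0.3 all have infinite binary expansions, so the sum of the first two is not bit-for-bit equal to the third:

double a = 0.1;
double b = 0.2;

Console.WriteLine(a + b == 0.3);            // False
Console.WriteLine((a + b).ToString("G17")); // 0.30000000000000004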

Short answer to your first question: No, it's not strange. Floating-point numbers are discrete approximations of the real numbers, which means that rounding errors will propagate and scale when you do arithmetic operations.

There's a whole field of mathematics called numerical analysis that deals with how to minimize the errors when working with such approximations.

It's the usual floating-point imprecision. Not every number can be represented exactly as a double, and those minor representation inaccuracies add up. It's also the reason why you should not compare doubles to exact numbers. I just tested it, and result.ToString() showed 28 (maybe some kind of rounding happens in double.ToString() ?). result == 28 returned false, though, and (int)result returned 27 . So you'll just need to expect imprecision like that.
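A small sketch reproducing those observations, along with the tolerance-based comparison normally used instead of == (the 1e-9 tolerance here is just an illustrative choice):

double result = 0.7 / 0.025;     // stored as 27.999999999999996

Console.WriteLine(result);       // "28" on .NET Framework (15-digit display rounding);
                                 // newer .NET prints 27.999999999999996
Console.WriteLine(result == 28); // False
Console.WriteLine((int)result);  // 27 - the cast truncates toward zero

// Compare against a tolerance instead of testing exact equality.
const double epsilon = 1e-9;
Console.WriteLine(Math.Abs(result - 28) < epsilon); // True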
