
Isn't the precision of the double type 15 digits in C#?

I was testing this code from Brainteasers:

        double d1 = 1.000001;

        double d2 = 0.000001;

        Console.WriteLine((d1 - d2) == 1.0);

And the result is "False". When I change the data type:

        decimal d1 = 1.000001M;

        decimal d2 = 0.000001M;

        decimal d3 = d1-d2;

        Console.WriteLine(d3 == 1);

The program writes the correct answer: "True".

This problem uses just 6 digits after the decimal point. What happened to the 15 digits of precision?

This has nothing to do with precision - it has to do with representational rounding errors.

System.Decimal is capable of representing large floating point numbers with a significantly reduced risk of incurring any rounding errors like the one you are seeing. System.Single and System.Double are not capable of this and will round these numbers off and create issues like the one you are seeing in your example.

System.Decimal uses a scaling factor to hold the position of the decimal place thus allowing for exact representation of the given floating-point number, whereas System.Single and System.Double only approximate your value as best they can.
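That scaling factor can be inspected directly: `decimal.GetBits` exposes the integer coefficient and the power-of-ten scale that together encode the value. A minimal sketch with the value from the question:

```csharp
using System;

// System.Decimal stores 1.000001M as the integer 1000001 with a scale
// factor of 6 (i.e. 1000001 × 10⁻⁶), so the value is held exactly
// rather than approximated in binary.
int[] bits = decimal.GetBits(1.000001M);

int scale = (bits[3] >> 16) & 0xFF;   // the scale lives in bits 16-23 of the fourth element
Console.WriteLine($"coefficient: {bits[0]}, scale: {scale}");
// coefficient: 1000001, scale: 6
```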

For more information, please see System.Double:

Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:

  • Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.

  • A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.
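Both consequences show up with the values from the question: formatted to six decimal places the result looks like 1, yet the exact comparison fails because the least significant bits differ.

```csharp
using System;

double d1 = 1.000001;
double d2 = 0.000001;
double diff = d1 - d2;

Console.WriteLine(diff.ToString("F6"));  // 1.000000 — appears equal at 6 digits
Console.WriteLine(diff == 1.0);          // False — the least significant bits differ
```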

Generally, the way to check for equality of floating-point values is to check for near-equality, i.e., check for a difference that is close to the smallest value (called epsilon) for that datatype. For example,

if (Math.Abs(d1 - d2) <= Double.Epsilon) ...

This tests whether d1 and d2 are represented by the same bit pattern, give or take the least significant bit.

Correction (Added 2 Mar 2015)

Upon further examination, the code should be more like this:

// Assumes that d1 and d2 are not both zero
if (Math.Abs(d1 - d2) / Math.Max(Math.Abs(d1), Math.Abs(d2)) <= Double.Epsilon) ...

In other words, take the absolute difference between d1 and d2, scale it by the larger of their absolute values, and then compare it to Epsilon.
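Applied to the values from the question, the scaled difference works out to about 1.1E-16, which is still larger than Double.Epsilon (the smallest positive double, roughly 4.9E-324). The sketch below therefore assumes a slightly wider tolerance of 1e-15 — an illustrative choice, a few ulps for numbers near 1.0 — rather than Double.Epsilon itself:

```csharp
using System;

double d1 = 1.000001;
double d2 = 0.000001;

// Relative comparison of (d1 - d2) against 1.0, with an assumed
// tolerance of 1e-15; Double.Epsilon (~4.9E-324) is smaller than the
// typical rounding error, so a wider tolerance is used here.
double tolerance = 1e-15;
double a = d1 - d2;
double scaled = Math.Abs(a - 1.0) / Math.Max(Math.Abs(a), 1.0);

Console.WriteLine(scaled <= tolerance);  // True
```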

References
http://msdn.microsoft.com/en-us/library/system.double.epsilon.aspx
http://msdn.microsoft.com/en-us/library/system.double.aspx#Precision

The decimal type implements decimal floating point whereas double is binary floating point.

The advantage of decimal is that it behaves as a human would with respect to rounding, and if you initialise it with a decimal value, then that value is stored precisely as you specified. This is only true for decimal numbers of finite length that fall within the representable range and precision. If you initialised it with, say, 1.0M/3.0M, it would not be stored precisely, just as you cannot write 0.3333-recurring exactly on paper.

If you initialise a binary FP value with a decimal, it will be converted from the human readable decimal form, to a binary representation that will seldom be precisely the same value.
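That conversion error is easy to demonstrate: each literal below is converted to the nearest binary double, and the accumulated error is visible both in the comparison and in the full 17-digit rendering.

```csharp
using System;

// Each decimal literal is converted to the nearest binary double,
// so the stored values carry tiny errors that surface on comparison.
Console.WriteLine(0.1 + 0.2 == 0.3);              // False
Console.WriteLine((0.1 + 0.2).ToString("G17"));   // 0.30000000000000004
```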

The primary purpose of the decimal type is implementing financial applications. In the .NET implementation it also has a far higher precision than double; however, binary FP is directly supported by the hardware, so it is significantly faster than decimal FP operations.

Note that double is accurate to approximately 15 significant digits, not 15 decimal places. d1 is initialised with a 7-significant-digit value, not 6, while d2 has only 1 significant digit. The fact that they are of significantly different magnitudes does not help either.
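The significant-digits point can be sketched with powers of ten: a double's 53-bit mantissa holds about 15-16 significant decimal digits wherever the decimal point sits, so adding 1 is still exact at 10^15 but lost at 10^16.

```csharp
using System;

// Doubles carry roughly 15-16 significant digits regardless of magnitude:
// adding 1 is representable at 10^15 but rounds away at 10^16.
Console.WriteLine(1e15 + 1 == 1e15);  // False — 16 significant digits still fit
Console.WriteLine(1e16 + 1 == 1e16);  // True  — the 17th digit is lost
```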

The idea of floating point numbers is that they are not precise to a particular number of digits. If you want that sort of functionality, you should look at the decimal data type.

The precision isn't absolute, because it's not possible to convert between decimal and binary numbers exactly.

For example, decimal 0.1 repeats forever when represented in binary: it converts to 0.000110011001100110011..., repeating indefinitely. No finite amount of precision can store that exactly.
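A short sketch of the consequence: because 0.1 cannot be stored exactly, repeated addition accumulates the representation error.

```csharp
using System;

// 0.1 has no finite binary representation, so each addition carries a
// tiny rounding error: ten of them do not sum to exactly 1.0.
double sum = 0.0;
for (int i = 0; i < 10; i++)
    sum += 0.1;

Console.WriteLine(sum == 1.0);            // False
Console.WriteLine(sum.ToString("G17"));   // 0.99999999999999989
```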

Avoid comparing floating-point numbers for equality.
