
Why is double rounded and decimal is not?

The following C# code:

int n = 3;
double  dbl = 1d / n;
decimal dec = 1m / n;
Console.WriteLine(dbl * n == 1d);
Console.WriteLine(dec * n == 1m);

outputs

True
False

Obviously, neither double nor decimal can represent 1/3 exactly. But dbl * n is rounded to 1 and dec * n is not. Why? Where is this behaviour documented?
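One way to see what each type actually stores is to print at full precision; here is a minimal sketch (the "G17" format round-trips a double exactly, so it shows the stored binary approximation):

int n = 3;
double  dbl = 1d / n;
decimal dec = 1m / n;

// "G17" reveals the binary value actually stored for 1/3.
Console.WriteLine(dbl.ToString("G17"));        // 0.33333333333333331
Console.WriteLine(dec);                        // 0.3333333333333333333333333333

// The products compared against 1 in the question:
Console.WriteLine((dbl * n).ToString("G17"));  // 1
Console.WriteLine(dec * n);                    // 0.9999999999999999999999999999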

UPDATE

Please note that my main question here is why they behave differently. Presuming that the choice of rounding was a conscious one made when IEEE 754 and .NET were designed, I would like to know the reasons for choosing one type of rounding over the other. In the above example double seems to perform better, producing the mathematically correct answer despite having fewer significant digits than decimal. Why did the creators of decimal not use the same rounding? Are there scenarios when the existing behaviour of decimal would be more beneficial?
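For instance, decimal's exact base-10 representation is the usual argument for using it in money-style arithmetic; a minimal sketch of a case where it beats double:

double dSum = 0.1 + 0.2;
decimal mSum = 0.1m + 0.2m;

Console.WriteLine(dSum == 0.3);           // False: 0.1 and 0.2 have no exact binary form
Console.WriteLine(mSum == 0.3m);          // True: 0.1m, 0.2m and 0.3m are exact in base 10
Console.WriteLine(dSum.ToString("G17"));  // 0.30000000000000004
Console.WriteLine(mSum);                  // 0.3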

test:

int n = 3;
double  dbl = 1d / n;
decimal dec = 1m / n;

Console.WriteLine("/n");
Console.WriteLine(dbl);
Console.WriteLine(dec);

Console.WriteLine("*n");
Console.WriteLine(dbl * n);
Console.WriteLine(dec * n);

result:

/n
0.333333333333333
0.3333333333333333333333333333
*n
1
0.9999999999999999999999999999

decimal is stored in base 10, while double and single are stored in base 2. For double, the product 3 * 0.333333333333333… cannot be represented exactly in the available binary precision, so the CPU rounds it to the nearest double, which is 1. For decimal, 3 * 0.333…3 in base 10 needs no rounding: the result 0.999…9 fits exactly within decimal's 28-digit precision, so it stays 0.999…9 instead of becoming 1.
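To make that concrete, a minimal sketch: the double nearest to 1/3 is slightly below 1/3, and tripling it gives exactly 1 − 2⁻⁵⁴, half an ulp below 1.0, so IEEE 754 round-to-nearest (ties to even) returns exactly 1.0; the decimal product 0.999…9 is exactly representable, so no rounding happens.

double third = 1d / 3;

// The stored binary value is slightly below 1/3.
Console.WriteLine(third.ToString("G17"));       // 0.33333333333333331

// The exact product 3 * third is 1 - 2^-54, half an ulp below 1.0,
// so round-to-nearest (ties to even) yields exactly 1.0.
double product = 3 * third;
Console.WriteLine(BitConverter.DoubleToInt64Bits(product) ==
                  BitConverter.DoubleToInt64Bits(1.0));   // True

// decimal keeps 28-29 significant base-10 digits; 3 * 0.333...3 = 0.999...9
// is exactly representable, so nothing gets rounded away.
decimal thirdDec = 1m / 3;
Console.WriteLine(3 * thirdDec);                // 0.9999999999999999999999999999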
