I have a pretty decent understanding of IEEE 754 so this is not one of those "why does adding number a and number b result in..."-type of questions.
Rather, I want to ask whether I've understood the fixed-point ("F") format specifier correctly, because it doesn't behave as I would expect for some double values.
For example:
double d = 0x3FFFFFFFFFFFFE * (1.0 / 0x3FFFFFFFFFFFFF);
Console.WriteLine(d.ToString("R"));
Console.WriteLine(d.ToString("G20"));
Console.WriteLine(d.ToString("F20"));
Both the "R" and "G" specifiers print the same thing, the correct value: 0.99999999999999989. But the "F" specifier always rounds up to 1.0, no matter how many decimals I tell it to include. Even if I ask for the maximum of 99 decimals ("F99"), it still outputs "1." followed by 99 zeroes.
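To confirm that this is purely a formatting artifact and the stored value really is below 1.0, the exact decimal expansion of the double can be computed by hand: every finite double is an integer mantissa times a power of two, so its decimal expansion is finite. A sketch (the `ExactDecimal` helper is my own, not a BCL API; it assumes a finite, positive input; on .NET 4.0 it needs a reference to System.Numerics.dll):

```csharp
using System;
using System.Numerics;

class ExactPrint
{
    // Hypothetical helper (not part of the BCL): returns the exact decimal
    // expansion of a finite, positive double.
    static string ExactDecimal(double value)
    {
        long bits = BitConverter.DoubleToInt64Bits(value);
        int exp = (int)((bits >> 52) & 0x7FF);
        BigInteger mant = bits & 0xFFFFFFFFFFFFFL;
        if (exp == 0) exp = 1;                  // subnormal: no implicit leading bit
        else mant |= BigInteger.One << 52;      // normal: restore the implicit bit
        int e = exp - 1075;                     // value == mant * 2^e

        if (e >= 0) return (mant << e).ToString();

        BigInteger intPart = mant >> -e;
        BigInteger frac = mant - (intPart << -e);
        // frac / 2^-e == (frac * 5^-e) / 10^-e, i.e. exactly -e decimal digits.
        string digits = (frac * BigInteger.Pow(5, -e)).ToString().PadLeft(-e, '0');
        return intPart + "." + digits.TrimEnd('0');
    }

    static void Main()
    {
        double d = 0x3FFFFFFFFFFFFE * (1.0 / 0x3FFFFFFFFFFFFF); // == 1 - 2^-53
        Console.WriteLine(ExactDecimal(d));
        // 0.99999999999999988897769753748434595763683319091796875
    }
}
```

So the value "F" is asked to format genuinely starts 0.9999999999999998...; the 1.0 comes out of the formatter, not the arithmetic.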
So is my understanding broken (and can someone point me to the relevant section in the spec), or is this behavior broken? (It's no deal-breaker for me, I just want to know.)
Here is what I've looked at, but I see nothing explaining this.
(This is .NET 4.0.)
User "wb" linked to another question which I suspect has the best available answer to this one, although there was some fuzziness in the details and documentation. (Unfortunately the comment was removed.) The linked question was Formatting doubles for output in C#. In short, it states that unless you use the "G" or "R" specifier, the value is rounded to 15 significant digits before the custom formatting is applied. The best documentation anyone was able to link to in that question was this MSDN page. The details and wording aren't as crystal clear as I would wish, but as stated, I think this is the best I'm going to find.
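That 15-significant-digit pre-rounding explains the observed output: 0.99999999999999989 rounded to 15 significant digits carries into 1.000000000000000, and "F20" then pads with zeroes. A small demonstration of the rule (outputs in the comments are what I would expect on .NET Framework 4.x; note that .NET Core 3.0 and later changed double formatting to emit the actual digits, so results differ there):

```csharp
using System;

class FifteenDigits
{
    static void Main()
    {
        // 1/3 makes the truncation visible without any rounding carry:
        double third = 1.0 / 3.0; // stored value starts 0.3333333333333333148...
        Console.WriteLine(third.ToString("F20"));
        // .NET Framework 4.x: 0.33333333333333300000
        //   (15 significant digits kept, the rest zero-padded)

        double d = 0x3FFFFFFFFFFFFE * (1.0 / 0x3FFFFFFFFFFFFF); // == 1 - 2^-53
        Console.WriteLine(d.ToString("F20"));
        // .NET Framework 4.x: 1.00000000000000000000
        //   (the 16th digit rounds the run of nines up, cascading to 1)

        // Round-trip formats bypass the 15-digit rounding on any runtime:
        Console.WriteLine(d.ToString("G17"));
    }
}
```

So if you need the unrounded digits on .NET 4.0, "R" or "G17" is the way to get them; "F" and the custom formats cannot.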