
Why are some floating point numbers accurately represented in C#?

Inspired by this question , the following doesn't do what I'd expect it to:

float myFloat = 0.6f;
Console.WriteLine(myFloat);
// Output: 0.6

I'd expect the above to print 0.60000002384185791 (the closest float to 0.6). Clearly some mechanism is making this work when, strictly speaking, it shouldn't (although, as you can see from the linked question, it sometimes doesn't work).

What is this mechanism and how does it work?

If you look at the implementation of Console.WriteLine, you'll see that it ends up calling ToString on the value with a default FormatProvider. That is, the result you're seeing is how the number appears when formatted using that format provider.

While it doesn't explain the details of how the result is produced, it does show that Console.WriteLine goes through some formatting of the value before printing it.
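You can see the formatting at work by asking ToString for more digits explicitly. This is a sketch; the default output shown in the comments assumes a recent runtime (.NET Core 3.0 and later print the shortest string that round-trips, while older runtimes rounded to 7 significant digits), and InvariantCulture is used so the decimal separator is always a period:

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        float myFloat = 0.6f;

        // Default formatting hides the inexactness: prints "0.6".
        Console.WriteLine(myFloat);

        // Asking for 9 significant digits reveals the stored value.
        Console.WriteLine(myFloat.ToString("G9", CultureInfo.InvariantCulture)); // 0.600000024

        // Widening to double exposes still more of the binary value,
        // since double's shortest round-trip string needs more digits.
        Console.WriteLine(((double)myFloat).ToString(CultureInfo.InvariantCulture));
    }
}
```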

I suspect that the WriteLine overload that takes a float rounds the value when converting it to a string...

0.6 can't be represented exactly as an IEEE 754 single-precision float. It sits between 0.599999964237213134765625 (0x3F199999) and 0.600000083446502685546875 (0x3F19999B). The round-to-nearest mode yields 0.60000002384185791015625 (0x3F19999A), which is the value actually stored and formatted by WriteLine.
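Those bit patterns can be checked directly. A minimal sketch, assuming .NET Core 2.0+/.NET 5+ for BitConverter.SingleToInt32Bits (on older frameworks you'd round-trip through BitConverter.GetBytes instead):

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        // Round-to-nearest stores 0.6f as the bit pattern 0x3F19999A.
        float f = 0.6f;
        Console.WriteLine($"0x{BitConverter.SingleToInt32Bits(f):X8}"); // 0x3F19999A

        // The two neighbouring floats bracketing 0.6:
        float below = BitConverter.Int32BitsToSingle(0x3F199999);
        float above = BitConverter.Int32BitsToSingle(0x3F19999B);
        Console.WriteLine(((double)below).ToString(CultureInfo.InvariantCulture)); // ~0.59999996...
        Console.WriteLine(((double)above).ToString(CultureInfo.InvariantCulture)); // ~0.60000008...
    }
}
```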

You have to either use a higher-precision floating-point representation (double) or limit the number of decimal places that WriteLine prints:

float f = 0.6f;
Console.WriteLine("{0:N6}", f);
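The two work-arounds can be compared side by side. A sketch, using InvariantCulture so the output doesn't depend on the machine's locale (the N6 format rounds to six decimal places):

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        float f = 0.6f;

        // Option 1: limit the printed precision to six decimal places.
        Console.WriteLine(string.Format(CultureInfo.InvariantCulture, "{0:N6}", f)); // 0.600000

        // Option 2: use double. The closest double to 0.6 is far closer
        // than the closest float, so the default formatting prints "0.6".
        double d = 0.6;
        Console.WriteLine(d.ToString(CultureInfo.InvariantCulture)); // 0.6
    }
}
```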
