
Why does printf("%.2f", (double) 12.555) print 12.55?

I was writing a program where I had to round a double to the second decimal place. I noticed that printf("%.2f", (double) 12.555) prints 12.55, whereas printf("%.2f", (float) 12.555) prints 12.56. Can anyone explain why this happens?

12.555 is a number that is not representable exactly in binary floating point. It so happens that the closest value to 12.555 that is representable in double precision floating point on your system is slightly less than 12.555, and the closest value to 12.555 that is representable in single precision floating point is slightly more than 12.555.

Assuming the conversion uses the round-to-nearest, ties-to-even rounding mode, which is the default in the IEEE 754 standard, the described output is exactly what you should expect.

Floats and doubles are stored internally using the IEEE 754 representation. The part that is relevant to your question is that both floats and doubles store the closest value to the decimal number that they can, given the limits of their representation. Roughly speaking, those limits come from converting the fractional part of the original number into a binary fraction with a finite number of bits.

It turns out the closest float to 12.555 is actually 12.55500030517578125, while the closest double to 12.555 is 12.554999999999999715782905696. Notice how the double provides more accuracy, but its error is negative (the stored value falls slightly below 12.555), whereas the float's error is positive.

At this point, it is probably obvious why the rounding goes up for the float but down for the double: each is rounded to the decimal string closest to its underlying binary representation, not to the decimal literal you originally wrote.
