
Why does float show an exact representation when declared?

I have read many times, in articles and on MSDN, that float (or double) cannot exactly represent every real-world integer or decimal value. Correct! That becomes visible when equality checks go wrong, or when assertions on simple addition or subtraction fail.
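For example, a check like the one below (an illustrative sketch using double literals; the same issue applies to float) goes wrong because neither 0.1 nor 0.2 is stored exactly:

using System;

class FloatEquality
{
    static void Main()
    {
        double sum = 0.1 + 0.2;

        // Neither 0.1 nor 0.2 has an exact binary representation, so the
        // rounded sum is not exactly equal to the (also rounded) literal 0.3.
        Console.WriteLine(sum == 0.3);           // False
        Console.WriteLine(sum.ToString("G17"));  // 0.30000000000000004
    }
}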

It is also said that float cannot exactly represent decimal values like 0.1. But if we declare a float in Visual Studio like float a = 0.1f;, how does the debugger show exactly 0.1 while debugging? It should show something like 0.09999999... What link am I missing here?


This may be a layman's question, or maybe I am still missing some concepts!

how do they show exact 0.1 while debugging

0.1 isn't the exact value of the float. It happens to be what you specified in the original assignment, but that's not the value of the float. I can see it's confusing :) I suspect the debugger is showing the shortest string representation which unambiguously ends up at the same value.

Try using:

float a = 0.0999999999f;

... and then I suspect in the debugger you'll see that as 0.1 as well.

So it's not that the debugger is displaying a "more exact" value - it's that it's displaying a "more generally convenient" representation.
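A quick way to convince yourself (a minimal sketch; the exact formatting rules depend on the runtime, but on recent .NET versions the default string is the shortest one that round-trips to the same value):

using System;

class ShortestRoundTrip
{
    static void Main()
    {
        float a = 0.1f;
        float b = 0.0999999999f;

        // Both literals round to the same 32-bit pattern, so they are the same float.
        Console.WriteLine(a == b);                                            // True
        Console.WriteLine(BitConverter.ToInt32(BitConverter.GetBytes(a), 0)); // 1036831949 (0x3DCCCCCD)
        Console.WriteLine(BitConverter.ToInt32(BitConverter.GetBytes(b), 0)); // 1036831949 (0x3DCCCCCD)

        // Default formatting (like the debugger) shows the short, convenient form.
        Console.WriteLine(a); // 0.1
        Console.WriteLine(b); // 0.1
    }
}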

If you want to display the exact value stored in a float or double, I have some code you can use for that.
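That code isn't reproduced here, but a minimal sketch of the same idea looks like this: decode the IEEE 754 bits and print mantissa * 2^exponent as an exact (terminating) decimal.

using System;
using System.Numerics;

class ExactFloatValue
{
    // Returns the exact decimal expansion of the value actually stored in a float.
    // A finite float is sign * mantissa * 2^exponent, and every such value has a
    // terminating decimal expansion because 1/2^e == 5^e / 10^e.
    static string ExactValue(float value)
    {
        if (float.IsNaN(value) || float.IsInfinity(value))
            return value.ToString();

        int bits = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
        bool negative = bits < 0;
        int rawExponent = (bits >> 23) & 0xFF;
        int mantissa = bits & 0x7FFFFF;
        int exponent;

        if (rawExponent == 0)
        {
            // Subnormal: no implicit leading bit; scale is 2^(-126-23)
            // because the 23-bit fraction is treated as an integer.
            exponent = -126 - 23;
        }
        else
        {
            // Normal: restore the implicit leading 1 bit.
            mantissa |= 1 << 23;
            exponent = rawExponent - 127 - 23;
        }

        BigInteger m = mantissa;
        if (exponent >= 0)
            return (negative ? "-" : "") + (m << exponent);

        // m / 2^e == m * 5^e / 10^e, so scale by 5^e and place the decimal point.
        int e = -exponent;
        string digits = (m * BigInteger.Pow(5, e)).ToString().PadLeft(e + 1, '0');
        string integerPart = digits.Substring(0, digits.Length - e);
        string fractionPart = digits.Substring(digits.Length - e).TrimEnd('0');
        string unsigned = fractionPart.Length == 0 ? integerPart : integerPart + "." + fractionPart;
        return (negative ? "-" : "") + unsigned;
    }

    static void Main()
    {
        float a = 0.1f;
        Console.WriteLine(a);             // 0.1 (short, convenient representation)
        Console.WriteLine(ExactValue(a)); // 0.100000001490116119384765625 (exact stored value)
    }
}

For a = 0.1f this prints 0.100000001490116119384765625, which is the value the float actually holds, even though the debugger displays it as 0.1.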
