
Why is float bigger than double after computing the same value? Why can't float be implicitly converted to decimal?

There are many arithmetic questions on SO, especially about floating point, so I am merging my two questions together.

  1. Why is float 0.3333333 > double 0.333333333333333?

Here is a program that demonstrates it.

        static void Main(string[] args)
        {
            int a = 1;
            int b = 3;
            float f = Convert.ToSingle(a) / Convert.ToSingle(b);
            double db = Convert.ToDouble(a) / Convert.ToDouble(b);
            Console.WriteLine(f > db);
            Console.Read();
        }
  2. Why can't float be implicitly converted to decimal when int can?

For example:

    decimal d1 = 0.1f; // error
    decimal d2 = 1;    // no error

For your first question: float values are actually converted to double when you use the > operator on them, so the comparison is really (double)f > db. If you print (double)f, you'll see its value:

0.333333343267441

While db is:

0.333333333333333
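
Here is a self-contained sketch of that check (the variable names follow the question; the exact digits printed depend on the runtime's default formatting, e.g. older .NET Framework shows 15 significant digits for a double):

    using System;

    class FloatVsDouble
    {
        static void Main()
        {
            float f = 1f / 3f;     // nearest float to 1/3, slightly above 1/3
            double db = 1.0 / 3.0; // nearest double to 1/3, slightly below 1/3

            // In (f > db) the float operand is first promoted to double,
            // so the comparison is effectively (double)f > db.
            Console.WriteLine(f > db);    // True
            Console.WriteLine((double)f); // ~0.333333343267441
            Console.WriteLine(db);        // ~0.333333333333333
        }
    }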

For the second question: although there isn't an implicit conversion from float to decimal, there is an explicit one, so you can use a cast:

    float a = 0.1f;
    decimal d = (decimal)a;

I can't find anything in the language spec explaining why, but I speculate that this conversion is something you should rarely need, so the language makes you be explicit about it. Why shouldn't you do it? Because decimal is meant to represent exact, discrete amounts such as currency, while float and double are meant to represent continuous, approximate quantities. They model two very different things.
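
To make the discrete-vs-continuous point concrete, here is a small sketch (my own example, not from the original answer): decimal stores 0.1 exactly because it uses a base-10 significand, while float stores only the nearest binary fraction; the explicit float-to-decimal cast rounds to roughly 7 significant digits, per the documented behavior of that conversion, so treat the exact printed digits as runtime-dependent.

    using System;

    class DecimalVsFloat
    {
        static void Main()
        {
            decimal exact = 0.1m; // decimal holds 0.1 exactly (base-10 significand)
            float approx = 0.1f;  // float holds the nearest binary fraction to 0.1

            Console.WriteLine(exact);                 // 0.1
            Console.WriteLine(approx.ToString("G9")); // 0.100000001 -- the binary rounding error shows up

            decimal d = (decimal)approx; // explicit cast; rounds to ~7 significant digits
            Console.WriteLine(d);        // 0.1
        }
    }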

  1. When numerals in source text, such as .3333333, are converted to floating-point, they are rounded to the nearest representable value, and the same rounding happens when a quotient such as 1/3 is computed in a floating-point type. In the program, f holds the float nearest to 1/3, which is slightly greater than 1/3: it is 0.3333333432674407958984375. db holds the double nearest to 1/3, which is slightly less than 1/3: it is 0.333333333333333314829616256247390992939472198486328125. Since the float value is greater than the double value, f > db evaluates to true; the numerals 0.3333333 and 0.333333333333333 in the title are simply these values rounded for display. (Note that comparing the title's numerals as literals, 0.3333333f > 0.333333333333333, would actually give false: the float nearest .3333333 is 0.333333313465118408203125, which is less than the double nearest .333333333333333, 0.333333333333332981762708868700428865849971771240234375.)

  2. I am unfamiliar with the rules of C# and its decimal type. However, I suspect the reason decimal d1 = 0.1f; is disallowed while decimal d2 = 1; is allowed is that not every float value can be converted to decimal without error, whereas every int value can. According to Microsoft, decimal uses a 96-bit significand, which suffices to represent any int exactly. However, decimal has a smaller range than float: its largest finite value is 2^96 − 1, around 7.9228·10^28, while the largest finite float is 2^128 − 2^104, around 3.4028·10^38 (see the sketch below).
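
A short sketch of the range argument in point 2 (the 1e30f value is my own illustration, not from the original answer): any int fits in decimal's 96-bit significand and converts implicitly and exactly, while a float beyond decimal's range throws at run time even with the explicit cast.

    using System;

    class DecimalRange
    {
        static void Main()
        {
            // Every int fits in decimal's 96-bit significand, so the
            // implicit conversion is always exact.
            decimal fromInt = int.MaxValue;
            Console.WriteLine(fromInt); // 2147483647

            // decimal.MaxValue is about 7.9228e28; 1e30f lies far beyond it.
            float big = 1e30f;
            try
            {
                decimal fromFloat = (decimal)big; // the explicit cast is allowed...
                Console.WriteLine(fromFloat);
            }
            catch (OverflowException)
            {
                // ...but it still fails at run time for out-of-range values.
                Console.WriteLine("1e30f is too large for decimal");
            }
        }
    }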
