Why do Double.ToString("#") and BigInteger.ToString() produce different results for the same value?
Consider the following code:
double d = double.MaxValue;
BigInteger bi = new(d);
Console.WriteLine(d.ToString("#"));
Console.WriteLine(bi.ToString());
This produces the following output:
179769313486232000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368
The BigInteger representation of the value contains seemingly random digits, while the double representation contains 294 trailing zeros. The two strings are the same length.
Why is that, and what are those seemingly random digits?
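The digit counts above are easy to verify mechanically. Here is a quick Python check (Python's sys.float_info.max is the same IEEE 754 binary64 value as .NET's double.MaxValue, and int() of a float is exact, analogous to new BigInteger(d)):

```python
import sys

# Exact integer value of the largest double; Python ints are arbitrary
# precision, so this plays the role of new BigInteger(double.MaxValue).
exact = str(int(sys.float_info.max))

# The double.MaxValue.ToString("#") output shown above: 15 significant
# digits followed by a run of zeros.
formatted = "179769313486232" + "0" * 294

print(len(exact), len(formatted))  # both strings are 309 digits long
```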
If you look at the implementation of BigInteger(double), you will see that the internal representation is stored base-2 in a uint[]. The value from the double is packed into the last three elements of this _bits array:
// Populate the uints.
_bits = new uint[cu + 2];
_bits[cu + 1] = (uint)(man >> (cbit + kcbitUint));
_bits[cu] = unchecked((uint)(man >> cbit));
if (cbit > 0)
_bits[cu - 1] = unchecked((uint)man) << (kcbitUint - cbit);
_sign = sign;
All the other _bits[0..cu - 1] elements are left initialized to zero. That gives a final value of the form x * 2^k, not x * 10^k. For any large k, such a value tends to look like random noise once converted to base 10.
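You can see the x * 2^k form concretely by decomposing the double into its IEEE 754 mantissa and exponent. A Python sketch of that bit-level math (sys.float_info.max is the same binary64 value as double.MaxValue; the shift works here because the exponent of double.MaxValue is positive):

```python
import struct
import sys

d = sys.float_info.max  # same value as .NET's double.MaxValue

# Reinterpret the double's bits as a 64-bit unsigned integer (IEEE 754 binary64).
bits = struct.unpack("<Q", struct.pack("<d", d))[0]

# Unbias the exponent and shift by 52 so the mantissa can be read as an integer
# (1075 = 1023 bias + 52 fraction bits); restore the implicit leading 1 bit.
exponent = ((bits >> 52) & 0x7FF) - 1075
mantissa = (bits & ((1 << 52) - 1)) | (1 << 52)

# The exact value is mantissa * 2^exponent -- the x * 2^k form described above.
exact = mantissa << exponent
assert exact == int(d)

# In hex the power-of-two structure is plain: 53 significant bits, then zeros.
print(hex(exact))
```

For double.MaxValue this yields mantissa = 2^53 - 1 and k = 971, and the hex digits of the product are 0xfffffffffffff8 followed by nothing but zeros, matching the ToString("X") result below.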
But in base 16, the answer is obvious:
new BigInteger(double.MaxValue).ToString("X") == "0FFFFFFFFFFFFF800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
double.MaxValue.ToString("#"), on the other hand, rounds the significand in base 10, producing the 15 significant digits seen above followed by 294 zeros.
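That base-10 rounding can be reproduced outside .NET as well. A minimal Python sketch, assuming (as the output in the question shows) that the "#" format keeps 15 significant decimal digits:

```python
import sys
from decimal import Context, ROUND_HALF_EVEN

# Exact 309-digit integer value of the largest double.
exact = int(sys.float_info.max)

# Round to 15 significant decimal digits, mimicking what
# double.MaxValue.ToString("#") printed in the question.
rounded = int(Context(prec=15, rounding=ROUND_HALF_EVEN).create_decimal(exact))

print(rounded)  # 179769313486232 followed by 294 zeros
```

The 16th and 17th significant digits of the exact value are 5 and 7, so the 15-digit result rounds up from ...486231 to ...486232, and every later digit position is filled with a zero.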