
Floating point numbers inaccuracy?

I am prompting the user to input a float. I store the number in a float variable and multiply it by 100 to make it an integer. Only 2 decimal places are allowed, so it is a fairly easy thing. Now the strange part:

  1. User Input : 0.1 -> Output : 100
  2. User Input : 1.1 -> Output : 110
  3. User Input : 1.5 -> Output : 150
  4. User Input : 2.1 -> Output : 209.999985
  5. User Input : 2.5 -> Output : 250
  6. User Input : 3.8 -> Output : 380
  7. User Input : 4.2 -> Output : 419.999969
  8. User Input : 5.6 -> Output : 560
  9. User Input : 6.0 -> Output : 600
  10. User Input : 7.5 -> Output : 750
  11. User Input : 8.1 -> Output : 810.000061
  12. User Input : 9.9 -> Output : 989.999969

I only tried this up to 10.00.

Referring to Why Are Floating Point Numbers Inaccurate?, I learned the reason behind this behavior, but isn't there any way to know which numbers will behave strangely?

I don't know of a way to predict which numbers will do this, but most programmers don't really care.

You didn't specify the language you are using, but if you want to convert a floating point representation into an integer representation, you usually have to do an explicit conversion, using a method like Double.intValue() or Double.longValue() in Java, or a cast operator: (int)double_value;

These techniques usually just discard the fractional part of the number. You may want to use a rounding function instead. Again, in Java that would be Math.round(), as described in the javadoc (http://docs.oracle.com/javase/7/docs/api/java/lang/Math.html).

I can only tell you for which numbers this will not happen: all numbers that can be represented exactly in binary, that is, all numbers of the form:

N = Sum over i of 2^n(i)

or, written out:

N = 2^n1 + 2^n2 + 2^n3 + ...

where the n(i) are integers (positive or negative) from a limited range.

Use FLT_DIG to print numbers within their matching decimal notation.

FLT_DIG is the number of leading decimal digits that a float can display and still match its assigned decimal value. It is at least 6. In the example below, 2.1 can be printed as 2.10000e+00, which is 6 significant digits. 990.0 can be printed as 9.90000e+02, which is also 6 significant digits.

#include <float.h>
#include <stdio.h>

printf("%.*e\n", FLT_DIG - 1, 2.1f);  // 2.10000e+00
printf("%.*e\n", FLT_DIG - 1, 990.f); // 9.90000e+02

When code does operations like multiplying by 100, the float product may incur a round-off error. C does not specify the accuracy here, but an error of less than 0.5 parts in 16 million can be expected. With many operations, this eats into the number of reliable digits, pushing it down from FLT_DIG.

In general, avoid expecting arithmetic and computed results to match beyond FLT_DIG digits. If that is insufficient, use double, which is good for at least 10 digits (and with a typical double, good for 15 digits - use DBL_DIG).

Note: "%.5e" directs printf() to print 1 digit before and 5 digits after the decimal point, for a total of 6 significant digits. That is the reason for the -1 in printf("%.*e", FLT_DIG - 1, ...);
