
ANSI C floating point calculation inaccuracy

How do people deal with floating-point inaccuracy in ANSI C? In the example below the expected result is 42.05, but the program prints an extra 1 in the last displayed digit.

#include <stdio.h>

int main(void)
{
  float _lcl1 = 66.3;   /* 66.3 is not exactly representable as a float */
  float _ucl1 = 76;
  float lbl1 = 0;
  float ubl1 = 0;       /* unused in this example */

  /* Exact decimal arithmetic gives 66.3 - 2.5 * (76 - 66.3) = 42.05 */
  lbl1 = (_lcl1 - 2.5 * (_ucl1 - _lcl1));
  printf("%e\n", lbl1);

  return 0;
}

4.205001e+01

My thinking is that this must be a common issue, so either there is a standard library for dealing with it, or people convert the values to integers, do the calculation, and then convert back (see the sketch below). Can someone provide some insight into a strategy that works in practice?
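
For reference, the "convert to integers" idea amounts to doing the arithmetic in fixed point. A minimal sketch, assuming three decimal places are enough for these values; the scale factor of 1000, the variable names, and the non-negative result are illustrative assumptions, not from the question:

#include <stdio.h>

int main(void)
{
  /* Scale to thousandths so every value is an exact integer (assumption). */
  long lcl = 66300;                 /* 66.300 */
  long ucl = 76000;                 /* 76.000 */

  /* 2.5 == 5/2, so multiply by 5 and divide by 2 entirely in integers. */
  long lbl = lcl - 5 * (ucl - lcl) / 2;

  /* Convert back to a decimal string only at the end
     (this simple formatting assumes lbl is non-negative). */
  printf("%ld.%03ld\n", lbl / 1000, lbl % 1000);   /* prints 42.050 */

  return 0;
}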

This has nothing to do with ANSI C and everything to do with floating point arithmetic.

You should read: "What Every Computer Scientist Should Know About Floating-Point Arithmetic" http://www.math.umd.edu/~jkolesar/mait613/floating_point_math.pdf

To give an inadequate summary, floating point arithmetic is not a magic infinite precision mechanism -- it is an approximate way of representing an infinite set of real numbers using a small (typically 32 or 64) number of bits. It does its work in binary, not in decimal, and the fractions exactly representable in binary are not the same as those exactly representable in decimal. Rounding is an issue, as is the interval between floats.
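
As a small illustration of that binary-versus-decimal mismatch, using the question's own literal (the wide format strings are only there to expose the stored values; the printed digits are approximate):

#include <stdio.h>

int main(void)
{
  float  f = 66.3f;   /* nearest float to 66.3  */
  double d = 66.3;    /* nearest double to 66.3 */

  /* Neither value is exactly 66.3; the error simply shows up
     at a different decimal place for each width. */
  printf("float : %.10f\n", f);    /* roughly 66.3000030518 */
  printf("double: %.20f\n", d);    /* roughly 66.29999999999999715783 */

  return 0;
}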

Anyway, you really should read the above paper if you are a working programmer. There is far more to this than can be covered in a few paragraphs on Stack Overflow, and the topic is very important.
