
C - Floating point precision

I have a program:

#include <stdio.h>

int main(void)
{
        float f = 0.0f;
        int i;

        /* add 0.1 ten times */
        for (i = 0; i < 10; i++)
                f = f + 0.1f;

        if (f == 1.0f)
                printf("f is 1.0\n");
        else
                printf("f is NOT 1.0\n");

        return 0;
}

It always prints f is NOT 1.0. I understand this is related to floating point precision in C, but I am not sure exactly where it is going wrong. Can someone please explain to me why it does not print the other line?

Binary floating point cannot represent the value 0.1 exactly, because its binary expansion does not have a finite number of digits (in exactly the same way that the decimal expansion of 1/7 does not).

The binary expansion of 0.1 is

0.000110011001100110011001100...

When rounded to IEEE-754 single precision, this is approximately 0.100000001490116119 in decimal. This means that each time you add the "nearly 0.1" value to your variable, you accumulate a small error, so the final value ends up slightly higher than 1.0.
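You can make the accumulated error visible by printing more digits than the default. A minimal sketch, assuming a typical IEEE-754 single-precision float (the exact digits printed can vary by platform):

#include <stdio.h>

int main(void)
{
    float f = 0.0f;
    int i;

    for (i = 0; i < 10; i++)
        f += 0.1f;

    /* print enough digits to expose the rounding error */
    printf("f = %.9f\n", f);            /* typically prints 1.000000119 */
    printf("f - 1.0 = %g\n", f - 1.0);  /* small positive residue */

    return 0;
}

With the default six-digit %f format the value would appear to be exactly 1.000000, which is why the error is easy to overlook.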

This is equivalent to adding 0.33 three times (getting 0.99) and then wondering why the result is not equal to 1.0.

You may wish to read What Every Programmer Should Know About Floating Point Arithmetic.

You cannot compare floats like this. You need to define a tolerance and compare based on that. This blog post explains why.

For floating point numbers you should always use an epsilon value when comparing them:

#include <math.h>

#define EPSILON 0.00001f

/* returns nonzero when f1 and f2 differ by less than EPSILON */
static inline int floatsEqual(float f1, float f2)
{
    return fabsf(f1 - f2) < EPSILON;
}
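Applied to the original loop, the comparison then behaves as intended. A self-contained sketch, assuming the floatsEqual helper above and an EPSILON suited to values near 1.0:

#include <math.h>
#include <stdio.h>

#define EPSILON 0.00001f

static inline int floatsEqual(float f1, float f2)
{
    return fabsf(f1 - f2) < EPSILON;
}

int main(void)
{
    float f = 0.0f;
    int i;

    for (i = 0; i < 10; i++)
        f += 0.1f;

    if (floatsEqual(f, 1.0f))
        printf("f is (approximately) 1.0\n");
    else
        printf("f is NOT 1.0\n");

    return 0;
}

Note that a fixed absolute epsilon only works well for values of moderate magnitude; for very large or very small numbers a relative comparison is more appropriate.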
