
Why does printf print wrong values?

Why do I get the wrong values when I print an int using printf("%f\n", myNumber)?

I don't understand why it prints fine with %d, but not with %f. Shouldn't it just add extra zeros?

int a = 1;
int b = 10;
int c = 100;
int d = 1000;
int e = 10000;

printf("%d %d %d %d %d\n", a, b, c, d, e);   //prints fine
printf("%f %f %f %f %f\n", a, b, c, d, e);   //prints weird stuff

Well, of course it prints "weird" stuff. You are passing in ints, but telling printf you passed in doubles (%f expects a double; a float argument would be promoted to one). Since these data types have different and incompatible internal representations, you get "gibberish".

There is no "automatic cast" when you pass variables to a variadic function like printf: the values are passed into the function as the data type they actually are, except for the default argument promotions (float is promoted to double; char and short are promoted to int).
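To see the default argument promotions in action, here is a small self-contained example (a sketch of standard behaviour, not code from the question):

#include <stdio.h>

int main(void)
{
    float f = 1.5f;
    double d = 2.5;
    char ch = 'A';
    short s = 7;

    /* f is promoted to double, so %f consumes both arguments correctly */
    printf("%f %f\n", f, d);   /* prints 1.500000 2.500000 */

    /* ch and s are promoted to int, so %d works for them too */
    printf("%d %d\n", ch, s);  /* prints 65 7 */
    return 0;
}

This is also why printf has no separate specifier for float versus double: by the time the argument arrives, it is always a double.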

What you have done is somewhat similar to this:

union {
    int n;
    float f;
} x;

x.n = 10;

printf("%f\n", x.f); /* pass in the binary representation for 10, 
                        but treat that same bit pattern as a float, 
                        even though they are incompatible */
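(On a typical machine with a 32-bit int and IEEE-754 float, the bit pattern of the int 10, 0x0000000A, reinterpreted as a float is a subnormal value around 1.4e-44, which is why the output looks like noise rather than 10.0.)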

If you want to print them as floating-point values, cast them to float before passing them to printf:

printf("%f %f %f %f %f\n", (float)a, (float)b, (float)c, (float)d, (float)e);

a, b, c, d and e aren't floats. printf() is told to interpret them as floats anyway, which is why it prints weird stuff to your screen.

Using an incorrect format specifier in printf() invokes undefined behaviour.

For example:

 int n = 1;
 printf("%f", n);   //UB: %f expects a double, but an int was passed

 float x = 1.2f;
 printf("%d", x);   //UB: %d expects an int, but x is promoted to double

 double y = 12.34;
 printf("%lf", y);  //fine since C99: the l length modifier is ignored with f
                    //(in C89 this was undefined)

Note: the format specifier for double in printf() is %f; a float argument is promoted to double, so %f handles both. Since C99, %lf is also accepted for double (the l is ignored).
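For reference, the matching calls for the three examples above; compiling with warnings enabled (e.g. gcc/clang -Wall, which includes -Wformat) will flag the mismatched versions at compile time:

#include <stdio.h>

int main(void)
{
    int n = 1;
    float x = 1.2f;
    double y = 12.34;

    printf("%d\n", n);   /* int    -> %d */
    printf("%f\n", x);   /* float  -> promoted to double, so %f */
    printf("%f\n", y);   /* double -> %f */
    return 0;
}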

The problem is what happens inside printf. Conceptually, something like this runs (a simplified sketch, not the real implementation):

if (strcmp(spec, "%f") == 0) {
    double d = va_arg(ap, double); /* reads a double-sized value from the argument area */
    /* but an int was passed: wrong size, wrong bit layout, so the output is garbage */
}

The way printf and variable arguments work is that the format specifiers in the string (e.g. "%f %f") tell printf the type, and thus the size, of each argument. If you specify the wrong type, printf reads the wrong number of bytes and misinterprets the bits it does read.

Look at stdarg.h for the macros (va_start, va_arg, va_end) used to handle variable arguments.
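As an illustration, here is a minimal toy variadic printer built on those macros (print_values is a hypothetical helper, not part of any library), showing that the format string is the only type information the callee has:

#include <stdarg.h>
#include <stdio.h>

/* print_values("ddf", 1, 2, 3.5) prints "1 2 3.500000" */
static void print_values(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt; fmt++) {
        if (*fmt == 'd')
            printf("%d ", va_arg(ap, int));    /* consumes an int */
        else if (*fmt == 'f')
            printf("%f ", va_arg(ap, double)); /* consumes a double */
    }
    va_end(ap);
    putchar('\n');
}

int main(void)
{
    print_values("ddf", 1, 2, 3.5);  /* types match the format: prints 1 2 3.500000 */
    /* print_values("f", 42); would read the int 42's bits as a double: garbage */
    return 0;
}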

For "normal" (non variadac functions with all the types specified) the compiler converts integer valued types to floating point types where needed.

That does not happen with variadic arguments, which are always passed "as is" (apart from the default promotions).
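For example (a sketch using the standard library's sqrt, whose prototype is double sqrt(double)):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* The prototype tells the compiler to convert the int 4 to 4.0 */
    printf("%f\n", sqrt(4));   /* prints 2.000000 */

    /* printf's extra arguments have no typed parameters, so no such
       conversion happens; the following would be undefined behaviour: */
    /* printf("%f\n", 4); */
    return 0;
}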
