int x=25,i;
float *p=(float *)&x;
printf("%f\n",*p);
I understand that the bit representations of floating-point numbers and ints are different, but no matter what value I store, the answer is always 0.000000. Shouldn't it be some other value depending on the floating-point representation?
Your code has undefined behavior -- but it will most likely behave as you expect, as long as the size and alignment of types int and float are compatible.
By using the "%f" format to print *p, you're losing a lot of information.
Try this:
#include <stdio.h>

int main(void) {
    int x = 25;
    float *p = (float*)&x;
    printf("%g\n", *p);
    return 0;
}
On my system (and probably on yours), it prints:
3.50325e-44
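As a quick illustration (my addition, not part of the original answer), here's a minimal sketch comparing "%f" and "%g" on a value of that magnitude; "%f" always prints exactly six digits after the decimal point, so anything this small collapses to 0.000000:
#include <stdio.h>

int main(void) {
    float tiny = 3.50325e-44f;   /* roughly the value 25 yields when reinterpreted as a float */
    printf("%f\n", tiny);        /* prints 0.000000: only six digits after the decimal point */
    printf("%g\n", tiny);        /* prints 3.50325e-44 (or very close to it) */
    return 0;
}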
The int value 25 has zeros in most of its high-order bits. Those bits are probably in the same place as the exponent field of type float -- resulting in a very small number.
Look up IEEE floating-point representation for more information. Byte order is going to be an issue. (And don't do this kind of thing in real code unless you have a very good reason.)
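If you want to see the byte-order issue concretely, here's a small sketch (my addition) that dumps the bytes of an int in memory order; on a little-endian machine the low-order byte containing 25 comes first, on a big-endian machine it comes last:
#include <stdio.h>

int main(void) {
    int x = 25;
    const unsigned char *bytes = (const unsigned char *)&x;   /* examining bytes via unsigned char * is well defined */
    for (size_t i = 0; i < sizeof x; i++) {
        printf("byte %zu: 0x%02x\n", i, bytes[i]);
    }
    return 0;
}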
As rici suggests in a comment, a better way to learn about floating-point representation is to start with a floating-point value, convert it to an unsigned integer of the same size, and display the integer value in hexadecimal. For example:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

void show(float f) {
    unsigned int rep;
    memcpy(&rep, &f, sizeof rep);   /* copy the bytes of f into an integer of the same size */
    printf("%g --> 0x%08x\n", f, rep);
}

int main(void) {
    if (sizeof (float) != sizeof (unsigned int)) {
        fprintf(stderr, "Size mismatch\n");
        exit(EXIT_FAILURE);
    }
    show(0.0);
    show(1.0);
    show(1.0/3.0);
    show(-12.34e5);
    return 0;
}
For the purposes of this discussion, we're going to assume both int and float are 32 bits wide. We're also going to assume IEEE-754 floats.
Floating point values are represented as sign * β^exp * significand. For 32-bit binary floats, β is 2, the exponent exp ranges from -126 to 127, and the significand is a normalized binary fraction, such that there is a single leading non-zero bit before the radix point. For example, the binary integer representation of 25 is
11001 (binary)
while the binary floating point representation of 25.0 would be:
1.1001 (binary) * 2^4   // normalized
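You can ask printf to show you that normalized form directly with the "%a" conversion (C99 and later), which prints the value as a hexadecimal significand times a power of two; this sketch is my addition:
#include <stdio.h>

int main(void) {
    /* 0x1.9 is 1.5625, i.e. 1.1001 in binary, so this is the normalized form of 25.0 */
    printf("%a\n", 25.0);   /* typically prints 0x1.9p+4, i.e. 1.1001 (binary) * 2^4 */
    return 0;
}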
The IEEE-754 encoding for a 32-bit float is
s eeeeeeee fffffffffffffffffffffff
where s denotes the sign bit, e denotes the exponent bits, and f denotes the significand (fraction) bits. The exponent is encoded using "excess 127" notation, meaning an exponent value of 127 (01111111 in binary) represents 0, while 1 (00000001 in binary) represents -126 and 254 (11111110 in binary) represents 127. The leading bit of the significand is not explicitly stored, so 25.0 would be encoded as
0 10000011 10010000000000000000000 // exponent 131-127 = 4
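To confirm that encoding, here's a short sketch (my addition) that pulls the three fields out of the stored bit pattern of 25.0f; it should report sign 0, biased exponent 131, and fraction bits 1001 followed by nineteen zeros (0x480000):
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 25.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* reuse the bit pattern without undefined behavior */

    unsigned sign     = bits >> 31;            /* 1 bit */
    unsigned exponent = (bits >> 23) & 0xFF;   /* 8 bits, excess-127 encoded */
    unsigned fraction = bits & 0x7FFFFF;       /* 23 bits, hidden leading 1 not stored */

    printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06x\n",
           sign, exponent, (int)exponent - 127, fraction);
    return 0;
}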
However, what happens when you map the bit pattern for the 32-bit integer value 25 onto a 32-bit floating point format? We wind up with the following:
0 00000000 00000000000000000011001
It turns out that in IEEE-754 floats, the exponent value 00000000 (binary) is reserved for representing 0.0 and subnormal (or denormal) numbers. A subnormal number is a number close to 0 that can't be represented as 1.??? * 2^exp, because the exponent would have to be smaller than what we can encode in 8 bits. Such numbers are interpreted as 0.??? * 2^-126, with as many leading 0s as necessary.
In this case, it adds up to 0.00000000000000000011001 (binary) * 2^-126, which gives us 3.50325 * 10^-44.
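If you'd like to check that arithmetic in code, here's a sketch (my addition, assuming 32-bit int and float as above): the fraction field holds 25, and a subnormal's value is its fraction times 2^-149 (that's 2^-126 scaled down by the 23 fraction bits), so the reinterpreted value should compare equal to ldexpf(25.0f, -149):
#include <stdio.h>
#include <math.h>
#include <string.h>

int main(void) {
    int x = 25;
    float punned;
    memcpy(&punned, &x, sizeof punned);       /* well-defined way to reuse the bit pattern */

    float expected = ldexpf(25.0f, -149);     /* 25 * 2^-149 == 0.00000000000000000011001 (binary) * 2^-126 */
    printf("punned   = %g\n", punned);
    printf("expected = %g\n", expected);
    printf("equal: %s\n", punned == expected ? "yes" : "no");
    return 0;
}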
You'll have to map large integer values (in excess of 2^24) to see anything other than 0 out to a bunch of decimal places. And, like Keith says, this is all undefined behavior anyway.
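For instance (my example, not from the original answer), the integer 0x40000000 -- that's 2^30 -- has its single set bit land in the exponent field, so reinterpreted as a float it is exactly 2.0 and "%f" finally prints something non-zero:
#include <stdio.h>
#include <string.h>

int main(void) {
    int x = 0x40000000;           /* 2^30: sign 0, biased exponent 128 (unbiased 1), fraction 0 */
    float f;
    memcpy(&f, &x, sizeof f);     /* memcpy avoids the undefined behavior of the pointer cast */
    printf("%f\n", f);            /* prints 2.000000 */
    return 0;
}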