
Why is this program's output unexpected?

The following code, when run from Eclipse on Ubuntu and compiled with g++, produces unexpected results.

code:

#include <iostream>
int main()
{
    unsigned int a = 5555;
    // Print each of the four bytes of a, widened to unsigned int
    std::cout << (unsigned int)(((char*)&a)[0]) << "\n";
    std::cout << (unsigned int)(((char*)&a)[1]) << "\n";
    std::cout << (unsigned int)(((char*)&a)[2]) << "\n";
    std::cout << (unsigned int)(((char*)&a)[3]) << "\n";
    return 0;
}

I am trying to treat the variable a as an array of integers, each one byte in size. When I execute the program, this is the output I get:

output:

4294967219
21
0
0

question:

Why is the first value displayed so large? Here int is 32 bits (4 bytes) in size, so each of the output values should be no greater than 255, right? And why are the last two values zero? In other words, why am I getting the wrong result?

I also got the same result when I tested it in Code::Blocks with the same compiler.

This is because char is a signed integer type on your platform. (Whether plain char is signed or unsigned is implementation-defined; with g++ on x86 it is signed.)

Decimal 5555 is hexadecimal 0x15b3.

The 0xb3 byte, when sign-extended to a signed int, becomes 0xffffffb3.

0xffffffb3 interpreted as an unsigned int is 4294967219 in decimal.
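A minimal sketch of that conversion chain, assuming char is signed (as it is with g++ on x86 Ubuntu):

#include <iostream>
int main()
{
    char c = static_cast<char>(0xb3); // low byte of 5555; holds -77 when char is signed
    int widened = c;                  // sign-extended: bit pattern 0xffffffb3, value -77
    unsigned int u = widened;         // same bits read as unsigned: 4294967219
    std::cout << widened << "\n";     // -77
    std::cout << u << "\n";           // 4294967219
    return 0;
}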

This is because of sign extension. Let's look at your unsigned int a in memory (x86 is little-endian, so the least significant byte comes first; and since 5555 fits in two bytes, the upper two bytes are zero):

b3 15 00 00

When you cast the first byte from a signed char to an unsigned int, the widening from char to int happens before the conversion from signed to unsigned. The sign bit is therefore extended, and the result, 0xffffffb3, is what you see on your first line.

Try casting to an unsigned char * instead of a char *.
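For instance, a minimal sketch of the fixed program along those lines:

#include <iostream>
int main()
{
    unsigned int a = 5555;
    unsigned char *p = (unsigned char *)&a; // unsigned char has no sign bit to extend
    for (int i = 0; i < 4; ++i)
        std::cout << (unsigned int)p[i] << "\n";
    return 0;
}

On a little-endian machine this prints 179 (0xb3), 21 (0x15), 0, 0, with each byte now in the range 0 to 255.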
