In the code below, I am trying to store a 2-byte int in 2 chars. Then I try to display on the screen the number I stored. Those parts work fine.
If I try to see the binary form of the number I stored, on the other hand, I don't get what I expect: 256
gives me 00000000 1
while I expected 10000000 0
unsigned int num = 256;
unsigned int pos = 0;
// we store the number in 2 bytes
unsigned char a[2] = {num << pos, ((num << pos) & 0xFF00) >> 8};
// we check that the number we stored is num
cout << (((unsigned int)a[0] + ((unsigned int)a[1] << 8)) >> pos) << endl;
// now we display the binary version of the number
for(int i = 0; i < 2; i++)
{
    for(int j = 0; j < 8; j++)
        cout << ((a[i] >> j) & 1);
    cout << " ";
}
Can someone please explain what I am doing wrong?
Change:
cout << ((a[i] >> j)&1);
To:
cout << ((a[i] >> (8-j-1))&1);
Or change:
for(int j = 0; j < 8; j++)
To:
for(int j = 8-1; j >= 0; j--)
unsigned char a[2] = {num << pos, ((num << pos) & 0xFF00) >> 8};
means that you store the low bits in a[0] and high bits in a[1].
256 = 1 00000000
a[0] = 00000000
a[1] = 00000001
And for(int j = 0; j < 8; j++) (least significant bit first) should be for(int j = 7; j >= 0; j--) (most significant bit first).
Numbers are stored in memory with the least significant byte first (on little-endian machines, which is almost certainly what you are using).
Think of it this way: the number 456 has 6 in the 10^0 place, 5 in 10^1 place, etc. It makes some sense to store it as "654" (where the i'th character of the string corresponds to the 10^i'th digit).
The computer stores numbers the same way: the first bit of a number corresponds to the 2^0 place, and the i'th bit to the 2^i place.
This rule is actually half-broken by big-endian systems, which store the bytes of a number in memory from most significant to least significant (the human-readable order).