I'm trying to understand the following piece of code:
int omgezetteTijd = ((0xFF & rekenOmNaarTijdArray[0]) << 24)
                  | ((0xFF & rekenOmNaarTijdArray[1]) << 16)
                  | ((0xFF & rekenOmNaarTijdArray[2]) << 8)
                  |  (0xFF & rekenOmNaarTijdArray[3]);
What I do not understand is why you AND it with 0xFF. You're ANDing an 8-bit value with eight set bits (11111111), so this should give the same result.
But when I do not AND it with 0xFF, I get negative values. Can't figure out why this is happening?
When you OR a byte with an int, the byte is promoted to an int. By default this is done with sign extension. In other words:
// sign bit
// v
byte b = -1; // 11111111 = -1
int i = (int) b; // 11111111111111111111111111111111 = -1
// \______________________/
// sign extension
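You can verify the sign extension directly (a small runnable check of my own, not from the original answer, using Integer.toBinaryString to show the bit pattern):

```java
public class SignExtension {
    public static void main(String[] args) {
        byte b = -1;   // bit pattern 11111111
        int i = b;     // widening conversion sign-extends the top bit

        // Prints 32 one-bits: 11111111111111111111111111111111
        System.out.println(Integer.toBinaryString(i));
        System.out.println(i); // -1
    }
}
```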
By doing & 0xFF you prevent this, i.e.:
// sign bit
// v
byte b = -1; // 11111111 = -1
int i = (int) (0xFF & b); // 00000000000000000000000011111111 = 255
// \______________________/
// no sign extension
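The same check with the mask applied (again a runnable sketch of mine): the AND happens after promotion, so it clears the 24 extended sign bits and leaves the unsigned value 0..255.

```java
public class MaskDemo {
    public static void main(String[] args) {
        byte b = -1;          // bit pattern 11111111
        int i = 0xFF & b;     // b is promoted to 0xFFFFFFFF, then masked to 0x000000FF

        System.out.println(i); // 255
    }
}
```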
0xFF as a byte represents the number -1. When it's converted to an int, it is still -1, but its bit representation is 0xFFFFFFFF because of sign extension. & 0xFF avoids this, treating the byte as unsigned when converting to an int.
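Putting it together with the question's expression: here is a runnable sketch (the sample byte values are my own) showing why the mask matters when reassembling the four bytes. Without it, any negative byte sign-extends to 0xFFFF.... and its OR wipes out the higher bytes.

```java
public class ByteToInt {
    public static void main(String[] args) {
        // Example input; the name mirrors the question's array
        byte[] rekenOmNaarTijdArray = { 0x00, 0x00, 0x01, (byte) 0xFF };

        // With masking: each byte contributes exactly its 8 unsigned bits
        int withMask = ((0xFF & rekenOmNaarTijdArray[0]) << 24)
                     | ((0xFF & rekenOmNaarTijdArray[1]) << 16)
                     | ((0xFF & rekenOmNaarTijdArray[2]) << 8)
                     |  (0xFF & rekenOmNaarTijdArray[3]);

        // Without masking: the last byte (0xFF == -1) promotes to
        // 0xFFFFFFFF, and ORing that in drowns out everything else
        int withoutMask = (rekenOmNaarTijdArray[0] << 24)
                        | (rekenOmNaarTijdArray[1] << 16)
                        | (rekenOmNaarTijdArray[2] << 8)
                        |  rekenOmNaarTijdArray[3];

        System.out.println(withMask);    // 511
        System.out.println(withoutMask); // -1
    }
}
```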