
Mismatch between C and Java bitwise operation on hex values

I have the following line in C code that claims to convert a signed int8 into an unsigned int16.

float x = (float) (((int16_t) ((temp[0] <<8) & 0xff00) | (temp[1] & 0x00ff)));

I converted this for Java as

float x = (((temp[0] << 8) & 0xff00) | (temp[1] & 0x00ff));

For the same input array,

temp[] = {0xFC, 0x10}

x = -1008  // in C
x = 64528  // in Java

I searched SO and Google for various posts on this but could not identify what is missing.

I tried the other data types (short, int, float, etc.) but to no avail.

How can I get the same value of -1008 in Java? Please help.

Thanks in advance

In C you have one additional cast: (int16_t). If you do the same in Java with (short), you get the same result.

The issue is that in both C and Java the intermediate results are promoted to at least int (or a wider type that can hold the value). Your expression produces the int value 0xFC10 (64528), whose sign bit as an int is 0. Casting to int16_t or short forces the compiler to discard the upper bits, so bit 15 becomes the sign bit again and the value is reinterpreted as -1008.
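A minimal sketch showing both versions side by side (the class and variable names are my own):

```java
public class SignExtensionDemo {
    public static void main(String[] args) {
        int[] temp = {0xFC, 0x10};

        // Without a cast: the expression stays an int, the combined
        // 16-bit pattern 0xFC10 is simply the positive value 64528.
        float noCast = (((temp[0] << 8) & 0xff00) | (temp[1] & 0x00ff));

        // With the (short) cast: the int is narrowed to 16 bits, bit 15
        // becomes the sign bit, and widening back to float sign-extends,
        // matching the C code's (int16_t) cast.
        float withCast = (short) (((temp[0] << 8) & 0xff00) | (temp[1] & 0x00ff));

        System.out.println(noCast);   // 64528.0
        System.out.println(withCast); // -1008.0
    }
}
```

The key point is the narrowing conversion: Java's (short) cast, like C's (int16_t), keeps only the low 16 bits and reinterprets them as a signed value.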
