
c++ parse two bytes to int

I have two bytes read from a sensor over I2C; they are stored as unsigned char. I do know which is the most significant byte and which is the least significant byte.

unsigned char yawMSB;
unsigned char yawLSB;

How would I go about converting these two bytes of data into a single int?

I've had this implemented properly in C# using

BitConverter.ToInt16(new byte[] { yawLSB, yawMSB }, 0)

In a 16-bit integer, the top 8 bits are the most significant byte, and the low 8 bits are the least significant byte.

To make the most significant byte occupy the top bits, you need to shift its value up (by 8 bits), which is done with the left-shift bitwise operator << .

Then, to fill in the least significant byte, you just add it into the low 8 bits using the bitwise OR operator |.

Put together, it will be something like yawMSB << 8 | yawLSB.
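
Since the question is tagged C++, here is a minimal sketch of that expression, reusing the yawMSB/yawLSB names from the question (the byte values are made up for illustration):

    #include <cstdio>

    int main() {
        unsigned char yawMSB = 0x12;  // made-up example bytes
        unsigned char yawLSB = 0x34;

        // Both operands are promoted to int before the shift, so the
        // expression is evaluated at int width and no bits are lost here.
        int yaw = (yawMSB << 8) | yawLSB;

        std::printf("yaw = 0x%04X (%d)\n", yaw, yaw);  // prints: yaw = 0x1234 (4660)
        return 0;
    }

If the sensor actually reports a signed 16-bit value, as the C# BitConverter.ToInt16 call suggests, you can cast the combined value through std::int16_t (from <cstdint>) before storing it in an int.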

I do know which is the most significant byte, and which is the least significant byte.

  • MSB means Most Significant Byte.
  • LSB means Least Significant Byte.

How would I go about converting these two bytes of data into a single float?

You can then build a float whose value is:

const float yaw = (yawMSB << 8) + yawLSB; // parentheses needed: + binds tighter than <<

Actually, the value (yawMSB << 8) + yawLSB is probably on a scale defined by your implementation. If that is the case, and if it is a linear scale from 0 to MAX_YAW, you should define your value as:

const float yaw = float((yawMSB << 8) + yawLSB) / MAX_YAW; // gives yaw in [0.f, 1.f]
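
Here is a runnable sketch combining the two lines above. The function name normalisedYaw and the MAX_YAW value of 65535 are assumptions for illustration; substitute the full-scale value from your sensor's datasheet:

    #include <cstdio>

    // Placeholder assumption: the sensor's full-scale yaw value.
    const float MAX_YAW = 65535.0f;

    float normalisedYaw(unsigned char yawMSB, unsigned char yawLSB) {
        // Parentheses are required: + binds tighter than <<.
        const int raw = (yawMSB << 8) + yawLSB;
        return float(raw) / MAX_YAW;  // yaw in [0.f, 1.f]
    }

    int main() {
        // 0x8000 is roughly mid-scale, so this prints approximately 0.5.
        std::printf("%f\n", normalisedYaw(0x80, 0x00));
        return 0;
    }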
