
What is the correct way to convert 2 bytes to a signed 16-bit integer?

In this answer, zwol made this claim:

The correct way to convert two bytes of data from an external source into a 16-bit signed integer is with helper functions like this:

#include <stdint.h>

int16_t be16_to_cpu_signed(const uint8_t data[static 2]) {
    uint32_t val = (((uint32_t)data[0]) << 8) | 
                   (((uint32_t)data[1]) << 0);
    return ((int32_t) val) - 0x10000u;
}

int16_t le16_to_cpu_signed(const uint8_t data[static 2]) {
    uint32_t val = (((uint32_t)data[0]) << 0) | 
                   (((uint32_t)data[1]) << 8);
    return ((int32_t) val) - 0x10000u;
}

Which of the above functions is appropriate depends on whether the array contains a little-endian or a big-endian representation. Endianness is not the issue in question here; I am wondering why zwol subtracts 0x10000u from the uint32_t value converted to int32_t.

Why is this the correct way?

How does it avoid the implementation-defined behavior when converting to the return type?

Since you can assume 2's complement representation, how would this simpler cast fail: return (uint16_t)val;

What is wrong with this naive solution:

int16_t le16_to_cpu_signed(const uint8_t data[static 2]) {
    return (uint16_t)data[0] | ((uint16_t)data[1] << 8);
}

If int is 16-bit, then your version relies on implementation-defined behaviour if the value of the expression in the return statement is out of range for int16_t.

However, the first version has a similar problem; for example, if int32_t is a typedef for int and the input bytes are both 0xFF, then the result of the subtraction in the return statement is UINT_MAX, which causes implementation-defined behaviour when converted to int16_t.
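
To make both failure modes concrete, here is a minimal demonstration (my own sketch, assuming a typical platform with 32-bit int and unsigned):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Naive version: for little-endian bytes 0x00, 0x80 the expression
       evaluates to 32768, which is out of range for int16_t. */
    const uint8_t le[2] = { 0x00, 0x80 };
    unsigned naive = (uint16_t)le[0] | ((uint16_t)le[1] << 8);
    printf("naive expression value: %u (INT16_MAX is 32767)\n", naive);

    /* zwol's version: for bytes 0xFF, 0xFF the subtraction happens in
       unsigned arithmetic and wraps to UINT_MAX before the conversion
       to int16_t. */
    uint32_t val = 0xFFFFu;
    printf("subtraction result: %u\n", (int32_t)val - 0x10000u);
    return 0;
}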

IMHO the answer you link to has several major issues.

This should be pedantically correct and also work on platforms that use sign-magnitude or 1's-complement representations instead of the usual 2's complement. The input bytes are assumed to be in 2's complement.

int le16_to_cpu_signed(const uint8_t data[static 2]) {
    unsigned value = data[0] | ((unsigned)data[1] << 8);
    if (value & 0x8000)
        /* mask to 16 bits so the inverted value fits in int */
        return -(int)(~value & 0xFFFFu) - 1;
    else
        return value;
}

Because of the branch, it will be more expensive than other options.

What this accomplishes is that it avoids any assumption about how the int representation relates to the unsigned representation on the platform. The cast to int is required to preserve the arithmetic value for any number that fits in the target type. Because the inversion (masked to 16 bits) clears the top bit of the 16-bit number, the value fits. Then the unary - and the subtraction of 1 apply the usual rule for 2's-complement negation. Depending on the platform, INT16_MIN could still overflow if it doesn't fit in the int type on the target, in which case long should be used.

The difference from the original version in the question comes at return time. While the original always subtracted 0x10000 and relied on 2's complement to wrap the result into int16_t range, this version has an explicit if that avoids the out-of-range conversion to the signed type (which is implementation-defined, as noted below).

Now in practice, almost all platforms in use today use a 2's-complement representation. In fact, if a platform has a standard-compliant stdint.h that defines int32_t, it must use 2's complement for it. Where this approach sometimes comes in handy is with scripting languages that don't have integer data types at all: you can adapt the operations shown above for floats and they will give the correct result.
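
A quick sanity check of the function above (my own test values; the last case assumes INT16_MIN fits in the target's int, as noted):

#include <assert.h>
#include <stdint.h>

int le16_to_cpu_signed(const uint8_t data[static 2]);  /* defined above */

int main(void) {
    assert(le16_to_cpu_signed((const uint8_t[2]){ 0xFF, 0x7F }) == 32767);
    assert(le16_to_cpu_signed((const uint8_t[2]){ 0xFF, 0xFF }) == -1);
    assert(le16_to_cpu_signed((const uint8_t[2]){ 0x00, 0x80 }) == -32768);
    return 0;
}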

The arithmetic operators shift and bitwise OR in the expression (uint16_t)data[0] | ((uint16_t)data[1] << 8) don't work on types smaller than int, so those uint16_t values get promoted to int (or to unsigned if sizeof(uint16_t) == sizeof(int)). Still, that should yield the correct answer, since only the lower 2 bytes contain the value.
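
The promotion itself can be observed with C11's _Generic; a small sketch, assuming a platform where int is wider than 16 bits:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t lo = 1, hi = 2;
    /* Both operands are promoted before << and | are applied, so the
       expression has type int on common platforms, not uint16_t. */
    puts(_Generic(lo | (hi << 8),
                  int: "int",
                  unsigned int: "unsigned int",
                  default: "something else"));
    return 0;
}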

Another pedantically correct version for big-endian to little-endian conversion (assuming a little-endian CPU) is:

#include <string.h>
#include <stdint.h>

int16_t be16_to_cpu_signed(const uint8_t data[2]) {
    int16_t r;
    memcpy(&r, data, sizeof r);
    return __builtin_bswap16(r);
}

memcpy is used to copy the representation of the int16_t, which is the standard-compliant way to do so. Note that __builtin_bswap16 is a gcc/clang extension, not standard C. This version also compiles to a single movbe instruction, see the assembly.
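
For example, on a little-endian, two's-complement host (my own test values):

#include <stdint.h>
#include <stdio.h>

int16_t be16_to_cpu_signed(const uint8_t data[2]);  /* defined above */

int main(void) {
    const uint8_t raw[2] = { 0x80, 0x00 };   /* big-endian INT16_MIN */
    printf("%d\n", be16_to_cpu_signed(raw)); /* prints -32768 */
    return 0;
}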

Another method uses a union:

#include <stdint.h>

union B2I16
{
   int16_t i;
   uint8_t b[2];
};

In the program:

...
union B2I16 conv;

conv.b[0] = first_byte;
conv.b[1] = second_byte;
int16_t result = conv.i;

first_byte and second_byte can be swapped according to the little- or big-endian model. This method is not better, but it is one of the alternatives.
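
Put together as a complete program (a sketch assuming a little-endian host, where b[0] is the low byte):

#include <stdint.h>
#include <stdio.h>

union B2I16
{
   int16_t i;
   uint8_t b[2];
};

int main(void) {
    union B2I16 conv;
    conv.b[0] = 0x00;        /* low byte on a little-endian host */
    conv.b[1] = 0x80;        /* high byte */
    printf("%d\n", conv.i);  /* prints -32768 on such a host */
    return 0;
}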

Here is another version that relies only on portable and well-defined behaviours (the #include <endian.h> header is not standard, but the code is):

#include <endian.h>
#include <stdint.h>
#include <string.h>

static inline void swap(uint8_t* a, uint8_t* b) {
    uint8_t t = *a;
    *a = *b;
    *b = t;
}
static inline void reverse(uint8_t* data, int data_len) {
    for(int i = 0, j = data_len / 2; i < j; ++i)
        swap(data + i, data + data_len - 1 - i);
}

int16_t be16_to_cpu_signed(const uint8_t data[2]) {
    int16_t r;
#if __BYTE_ORDER == __LITTLE_ENDIAN
    uint8_t data2[sizeof r];
    memcpy(data2, data, sizeof data2);
    reverse(data2, sizeof data2);
    memcpy(&r, data2, sizeof r);
#else
    memcpy(&r, data, sizeof r);
#endif
    return r;
}

The little-endian version compiles to a single movbe instruction with clang; the gcc version is less optimal, see the assembly.

I want to thank all contributors for their answers. Here is what the collective work boils down to:

  1. As per the C Standard 7.20.1.1 Exact-width integer types: the types uint8_t , int16_t and uint16_t have no padding bits, and int16_t must use a two's-complement representation, so the actual bits of the representation are unambiguously those of the 2 bytes in the array, in the order specified by the function names.
  2. computing the unsigned 16-bit value with (unsigned)data[0] | ((unsigned)data[1] << 8) (for the little-endian version) compiles to a single instruction and yields an unsigned 16-bit value.
  3. As per the C Standard 6.3.1.3 Signed and unsigned integers: converting a value of type uint16_t to the signed type int16_t has implementation-defined behavior if the value is not in the range of the destination type. No special provision is made for types whose representation is precisely defined.
  4. to avoid this implementation-defined behavior, one can test whether the unsigned value is larger than INT16_MAX and compute the corresponding signed value by subtracting 0x10000 (see the sketch after this list). Doing this for all values as suggested by zwol may produce values outside the range of int16_t, with the same implementation-defined behavior.
  5. testing for the 0x8000 bit explicitly causes compilers to produce inefficient code.
  6. a more efficient conversion without implementation-defined behavior uses type punning via a union, but the debate regarding the definedness of this approach is still open, even at the C Standard's Committee level.
  7. type punning can be performed portably and with defined behavior using memcpy.
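
As a concrete illustration of point 4, here is one possible sketch (my own code, not taken from any of the answers); the test keeps the result of the subtraction inside the range of int16_t, so the final conversion is fully defined:

#include <stdint.h>

int16_t le16_to_cpu_signed(const uint8_t data[2]) {
    unsigned u = (unsigned)data[0] | ((unsigned)data[1] << 8);
    if (u > INT16_MAX)
        /* u is in [0x8000, 0xFFFF], so u - 0x10000 is in [-32768, -1] */
        return (int16_t)((long)u - 0x10000L);
    return (int16_t)u;
}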

Combining points 2 and 7, here is a portable and fully defined solution that compiles efficiently to a single instruction with both gcc and clang:

#include <stdint.h>
#include <string.h>

int16_t be16_to_cpu_signed(const uint8_t data[2]) {
    int16_t r;
    uint16_t u = (unsigned)data[1] | ((unsigned)data[0] << 8);
    memcpy(&r, &u, sizeof r);
    return r;
}

int16_t le16_to_cpu_signed(const uint8_t data[2]) {
    int16_t r;
    uint16_t u = (unsigned)data[0] | ((unsigned)data[1] << 8);
    memcpy(&r, &u, sizeof r);
    return r;
}

64-bit assembly:

be16_to_cpu_signed(unsigned char const*):
        movbe   ax, WORD PTR [rdi]
        ret
le16_to_cpu_signed(unsigned char const*):
        movzx   eax, WORD PTR [rdi]
        ret

Why not just use your "naive solution," but cast each element to int16_t instead of uint16_t?

int16_t le16_to_cpu_signed(const uint8_t data[static 2]) {
    return (int16_t)data[0] | ((int16_t)data[1] << 8);
}

Then you would not have to deal with casting unsigned ints to signed ints (and possibly being out of the signed int range).
