
Can an int16_t to int conversion result in implementation-defined behavior?

In section 7.18.1.1 paragraph 1 of the C99 standard:

The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation.

According to the C99 standard, exact-width signed integer types are required to have a two's complement representation. This means, for example, that int8_t has a minimum value of -128, as opposed to the ones' complement minimum value of -127.

Section 6.2.6.2 paragraph 2 allows the implementation to decide whether to interpret the sign bit as sign and magnitude, two's complement, or ones' complement:

If the sign bit is one, the value shall be modified in one of the following ways:
— the corresponding value with sign bit 0 is negated (sign and magnitude);
— the sign bit has the value -(2^N) (two's complement);
— the sign bit has the value -(2^N - 1) (ones' complement).

The distinction between these methods matters because the minimum value of a two's complement integer (-128 for 8 bits) can lie outside the range of values representable in ones' complement (-127 to 127).
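To make the difference concrete, here is a minimal sketch (my own illustration, not part of the question: the 8-bit width and the decode helper are assumptions) that decodes the bit pattern 0x80, i.e. sign bit set and all value bits zero, under each of the three rules quoted above:

#include <stdio.h>

/* Decode an 8-bit pattern (bit 7 = sign bit, bits 0-6 = value bits)
   under each representation listed in 6.2.6.2p2.  Illustration only;
   a real implementation fixes one rule, it does not choose per value. */
static int decode(unsigned bits, char scheme)
{
    int value = (int)(bits & 0x7F);      /* the 7 value bits */
    int sign  = (int)((bits >> 7) & 1);  /* the sign bit     */

    if (!sign)
        return value;
    if (scheme == '2') return value - 128;  /* two's complement: sign bit is -(2^7)      */
    if (scheme == '1') return value - 127;  /* ones' complement: sign bit is -(2^7 - 1)  */
    return -value;                          /* sign and magnitude: negate the value bits */
}

int main(void)
{
    printf("two's complement:   %d\n", decode(0x80, '2'));  /* -128           */
    printf("ones' complement:   %d\n", decode(0x80, '1'));  /* -127           */
    printf("sign and magnitude: %d\n", decode(0x80, 's'));  /* 0 (minus zero) */
    return 0;
}

Only the two's complement rule can produce -128 from eight bits, which is why a ones' complement or sign-and-magnitude type cannot hold every value of the exact-width type of the same width.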

Suppose an implementation uses ones' complement representation for int, while int16_t has the two's complement representation guaranteed by the C99 standard.

#include <stdint.h>
int16_t foo = -32768;
int bar = foo;  /* foo's value converted to int */

In this case, would the conversion from int16_t to int cause implementation-defined behavior, since the value held by foo is outside the range of values representable by bar?

Yes.

Specifically, the conversion would yield an implementation-defined result. (For any value other than -32768, the result and the behavior would be well defined.) Alternatively, the conversion could raise an implementation-defined signal, though I don't know of any implementations that do that.

Reference for the conversion rules: N1570 6.3.1.3p3:

Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
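For a feel of what an implementation-defined result looks like in practice, the far more common way to hit 6.3.1.3p3 is narrowing rather than widening. The following sketch is my own example, not part of the original answer; GCC, for instance, documents the result of such a conversion as reduction modulo 2^N, but another implementation is free to produce a different result or raise a signal.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    long wide = 40000;               /* not representable in int16_t      */
    int16_t narrow = (int16_t)wide;  /* 6.3.1.3p3: implementation-defined */

    /* On GCC (documented behavior) the value is reduced modulo 2^16,
       giving 40000 - 65536 = -25536.  Nothing in the standard requires
       that particular result.                                           */
    printf("%d\n", (int)narrow);     /* typically prints -25536 */
    return 0;
}

The widening case asked about in the question falls under exactly the same paragraph; it is just far rarer, because it requires an int that cannot hold every int16_t value.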

This can only happen if:

  • int is 16 bits wide (more precisely, has 15 value bits, 1 sign bit, and 0 or more padding bits)
  • int uses ones' complement or sign-and-magnitude representation
  • The implementation also supports two's complement (otherwise it simply won't define int16_t).

I'd be surprised to see an implementation that meets these criteria. It would have to support both two's complement and either ones' complement or sign-and-magnitude, and it would have to choose one of the latter for type int. (Perhaps a non-two's-complement implementation might support two's complement in software, just for the sake of being able to define int16_t.)

If you're concerned about this possibility, you might consider adding this to one of your header files:

#include <limits.h>
#include <stdint.h>

#if !defined(INT16_MIN)
#error "int16_t is not defined"
#elif INT_MIN > INT16_MIN
#error "Sorry, I just can't cope with this weird implementation"
#endif

The #error directives are not likely to trigger on any sane real-world implementation.
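If you can also rely on C11 (the N1570 draft cited above is a C11 draft), the same check can be expressed as a _Static_assert. A sketch, assuming it lives in the same header:

#include <limits.h>
#include <stdint.h>

/* Compile-time check: fails on any implementation where int cannot
   represent every int16_t value (and fails less gracefully, with an
   undeclared-identifier error, where INT16_MIN is not defined at all). */
_Static_assert(INT_MIN <= INT16_MIN,
               "int cannot represent every int16_t value");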
