Conversion from int16_t to uint32_t
int16_t s;
uint32_t ui = s;
Is the result of converting an int16_t value to a uint32_t value compiler-dependent? If not, what's the rule?
The results are well-defined: non-negative values stay the same, and negative values are reduced modulo 2^32 (that is, 2^32 is added to the value to bring it into range). But the situations where exact-sized types like int16_t and uint32_t are needed are quite rare.
There's really no need for anything other than int and unsigned long here: those types have at least as many bits as int16_t and uint32_t and, unlike int16_t and uint32_t, they're required to exist on every conforming implementation. If you really really want the sexy new sized types, at least go for portability with int_least16_t and uint_least32_t.