
Why is integer overflow undefined behavior only for signed integers, and not for unsigned integers?

The purpose of making signed integer overflow undefined behavior is to permit compiler optimizations. But isn't that an equally valid argument for making unsigned integer overflow undefined behavior as well?

The purpose for keeping signed integer overflow undefined may be compiler optimization.[1] But the original reason was that the bit representation of signed integers was not defined by the standard. Different implementations used different signed integer representations, and their overflow characteristics differed accordingly. This was only permissible because the standard did not define what those overflow characteristics were.

By contrast, unsigned integer bit representations were always well-defined (otherwise, you couldn't effectively do a lot of bitwise operations), and therefore their overflow behavior could also be well-defined.

Binary arithmetic for unsigned integers of a specific width, under that value representation, works modulo the maximum unsigned value + 1 (that is, modulo 2^N for an N-bit type). Therefore, the standard has a way to say what the result of any math operation will be: it is the mathematical result reduced modulo the maximum unsigned value + 1.

That is, if you have a 16-bit unsigned integer holding 65535 and you add 1 to it, the mathematical result is 65536. However, the result you get in a 16-bit number is 0. This is how the standard defines it, because this is how binary arithmetic works at a given bit width. The representation defines the behavior.

By contrast, different signed integer representations have different overflow characteristics. If the standard defined a specific meaning for 32767 + 1 in a 16-bit signed integer, and a particular signed representation did not naturally produce that answer, then the compiler would have to change how it adds those values in order to produce it. Under sign/magnitude, this addition yields -0. If the standard mandated that result, then every two's complement implementation would be unable to just add the numbers. It would have to check for overflow and fudge the result.

On every math operation.

[1] There are other reasons, such as the fact that most code wouldn't tolerate overflow any better if it were well-defined. That is, most code where signed integers overflow would be just as broken if the overflow were well-defined.
