
Why doesn't CHAR_MAX + 1 cause overflow?

There is an exercise in The C Programming Language by K&R to find the maximum/minimum values of various types, such as char and int.

In the code below:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("Size of Char Max %d\n", CHAR_MAX);
    printf("Max Char+1 = %d\n", CHAR_MAX + 1); // my test code
    return 0;
}

since CHAR_MAX is 127, which is the maximum value of type char, why doesn't adding 1 to it cause an overflow?

First of all, whether the range of char is -128 to 127 or 0 to 255 or something else entirely is implementation-defined. The standard does not say whether char is signed or unsigned, nor does it require it to have eight binary bits, although the latter can be safely assumed in practice today. See 6.2.5 in ISO/IEC 9899:1999:

(...) The implementation shall define char to have the same range, representation, and behavior as either signed char or unsigned char. (...)

The only guarantee is that CHAR_MIN is no greater than 0 and CHAR_MAX is no less than 127 (Annex E).
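If you want to see which choice your implementation made, the limits.h macros are required to be usable in #if directives, so a minimal check (a sketch) could look like this:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* CHAR_MIN is 0 exactly when plain char is unsigned (C99 5.2.4.2.1). */
#if CHAR_MIN < 0
    printf("char is signed here:   %d to %d\n", CHAR_MIN, CHAR_MAX);
#else
    printf("char is unsigned here: %d to %d\n", CHAR_MIN, CHAR_MAX);
#endif
    return 0;
}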

As for the addition, you don't get an overflow because both CHAR_MAX and 1 (and therefore CHAR_MAX + 1) are of type int, not char, so no overflow can be expected.
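You can see this on your own machine: sizeof reports the size of the type of its operand, and CHAR_MAX expands to an int-valued constant expression. A small sketch (the value 4 is typical, not guaranteed):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("sizeof (CHAR_MAX) = %zu\n", sizeof (CHAR_MAX)); /* typically 4 */
    printf("sizeof (char)     = %zu\n", sizeof (char));     /* always 1 */
    return 0;
}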

Moreover, even the addition in

char c = '\x7f';

c + c;

(i.e., char + char) has type int and does not overflow even if char is signed, because the usual arithmetic conversions are applied to the operands of +. In a nutshell, these state that in arithmetic operations all operands are converted up to the largest type involved, and that the integer promotions are applied to integral types. The integer promotions state that, where they apply, integral types smaller than int are converted up to at least int, so again we have int + int and no overflow.
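This is easy to verify with C11's _Generic, which selects a branch based on the static type of an expression (a sketch; it needs a C11 compiler and assumes CHAR_MAX is 127):

#include <stdio.h>

int main(void)
{
    char c = '\x7f';

    /* The integer promotions make c + c an int expression, not char. */
    printf("type of c + c: %s\n",
           _Generic(c + c, int: "int", char: "char", default: "other"));
    printf("c + c = %d\n", c + c); /* 254, well within int's range */
    return 0;
}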

Addendum/side note about the "eight binary bits" remark in the first paragraph: As a matter of historical interest, the problem extends beyond the number of bits a datatype has even on binary computers. It is of little practical relevance today, but C was written at a time when not all common computers used a two's complement representation for integers. It is for this reason that the implementation limits given for signed types in Annex E are symmetric (for example, INT_MAX is given as +32767 and INT_MIN as -32767 rather than -32768). On machines that used one's complement or signed magnitude representation, the asymmetry we're used to simply didn't exist, and there were two zero values (0 and -0).

You are printing the value of CHAR_MAX with the %d conversion specifier, so it is printed as an int.

On your platform the range of char is -128 to 127, while int is four bytes wide, so CHAR_MAX + 1 easily fits in an int and there is no overflow.

You can try this:

char a = 256;

When you assign a value like this, you will get a warning (this one is from GCC):

warning: overflow in implicit constant conversion [-Woverflow]
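For example, putting that declaration in a file of its own (demo.c is a hypothetical name) and compiling it reproduces the diagnostic; the exact wording varies between GCC versions:

/* demo.c -- 256 is out of range for an 8-bit char */
char a = 256;

$ gcc -c demo.c

emits the -Woverflow warning quoted above.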

CHAR_MAX in <limits.h> is defined as a macro constant, not as a value of type char as you seem to be thinking.

In printf("Max Char+1 = %d\n", CHAR_MAX+1);, the preprocessor substitutes 127 for CHAR_MAX. Since 127 + 1 is 128 and the format specifier is %d (int), there is no overflow.

Your thoughts would be correct for this:

printf("Max Char+1 = %d\n",(char)(CHAR_MAX+1));

Let's review this line of code:

  • The int value of CHAR_MAX+1 is equal to 0x00000080 (128)
  • When converted to char, it is truncated to the char value 0x80 (-128)
  • When passed to printf, it is sign-extended to the int value 0xFFFFFF80 (-128)

Since you do not cast it to char in your code, the int value 0x00000080 is passed to printf "as is".


The answer above is under the assumption that on your platform CHAR_BIT is 8, sizeof(int) is 4, char is signed, and negative values use two's complement representation.
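Putting the two variants side by side makes the difference visible (the commented outputs assume the platform described above):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The addition happens in int, so the mathematical result survives. */
    printf("Max Char+1 = %d\n", CHAR_MAX + 1);         /* prints 128 */

    /* The cast truncates the result to char (-128 on this assumed
       platform) before it is promoted back to int for printf. */
    printf("Max Char+1 = %d\n", (char)(CHAR_MAX + 1)); /* prints -128 */
    return 0;
}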
