Practical difference between int and char

I have to analyse the output of these code fragments:

#include <stdio.h>

int main(void) {
    int x, y;
    x = 200; y = 100;
    x = x+y; y = x-y; x = x-y;
    printf("%d %d\n", x, y);
}

#include <stdio.h>

int main(void) {
    char x, y;
    x = 200; y = 100;
    x = x+y; y = x-y; x = x-y;
    printf("%d %d\n", x, y);
}

So I know now that int stands for integer and char for character; I've read about the differences, and I know that with %d in the printf the value is printed as a decimal number, while %c prints it as a character.
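For instance, printing the same value with both specifiers (a minimal sketch; the value 65 is just an arbitrary example):

#include <stdio.h>

int main(void) {
    int n = 65;
    printf("%d\n", n);  /* prints the value as a number: 65 */
    printf("%c\n", n);  /* prints the same value as a character: A */
    return 0;
}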

The ASCII character code for 'A' is 65, for example. But why does the second program print 100 -56 instead of 100 200?

C has a variety of integer types: char (at least 8 bits), short (at least 16 bits), int (at least 16 bits), and long (at least 32 bits), plus unsigned varieties of each. If you assign an out-of-range value to a signed type, the result is implementation-defined, and if signed arithmetic overflows, the behavior is undefined (you should never rely on it; the compiler may assume it never happens and not check at all). Unsigned types simply "wrap around" modulo 2^N. Also note that only the minimum sizes are guaranteed, not the exact sizes: there have been machines on which all of these types were 32 bits wide.
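You can check the actual sizes and ranges on your own machine with something like this (a small sketch; the exact numbers it prints depend on your platform):

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* sizeof gives the size in bytes; the ranges come from <limits.h> */
    printf("char : %zu byte(s), range %d..%d\n", sizeof(char), CHAR_MIN, CHAR_MAX);
    printf("short: %zu byte(s), range %d..%d\n", sizeof(short), SHRT_MIN, SHRT_MAX);
    printf("int  : %zu byte(s), range %d..%d\n", sizeof(int), INT_MIN, INT_MAX);
    printf("long : %zu byte(s), range %ld..%ld\n", sizeof(long), LONG_MIN, LONG_MAX);

    /* unsigned types wrap around modulo 2^N */
    unsigned char u = UCHAR_MAX;
    u = u + 1;  /* wraps back to 0 */
    printf("UCHAR_MAX + 1 wraps to %u\n", (unsigned)u);
    return 0;
}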

On the platform used in the question, char appears to be 1 byte (8 bits) in size and is a signed type with 1 sign bit and 7 value bits, using two's complement arithmetic. It can store values from -128 to 127. So this is what happens to x and y:

x = 200   => x takes value -56   (200 does not fit; 200 - 256 = -56)
y = 100   => y takes value 100
x = x+y   => x takes value 44    (-56 + 100 = 44; the arithmetic itself is done in int, and 44 fits in char)
y = x-y   => y takes value -56   (44 - 100 = -56)
x = x-y   => x takes value 100   (44 - (-56) = 100)
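To watch these intermediate values yourself, you can print after each step (a small sketch, assuming as above a signed 8-bit char):

#include <stdio.h>

int main(void) {
    char x, y;
    x = 200;    /* out of range: stored as 200 - 256 = -56 */
    y = 100;
    printf("start: x=%d y=%d\n", x, y);
    x = x + y;  /* -56 + 100 = 44 */
    printf("step1: x=%d y=%d\n", x, y);
    y = x - y;  /* 44 - 100 = -56 */
    printf("step2: x=%d y=%d\n", x, y);
    x = x - y;  /* 44 - (-56) = 100 */
    printf("final: x=%d y=%d\n", x, y);
    return 0;
}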
