I was working on an assignment on integer byte-level representation, and I wrote a little program:
e1.c:

```c
#include <stdio.h>

int main(void) {
    printf("%d\n", -2147483648 < 2147483647);
    return 0;
}
```
When I compiled a 32-bit version of the executable using the C89 standard, with the command `gcc e1.c -m32 -std=c89 -g -O0 -o e1`, it worked as I expected: it printed `0`, indicating that the compiler regarded the value 2147483648 as `unsigned int` and therefore converted the rest of the expression to `unsigned int`. But strangely, this relationship doesn't hold in the 64-bit version (built with `-m64`), which prints `1`.

Can anyone explain that?
> The type of an integer constant is the first of the corresponding list in which its value can be represented. Unsuffixed decimal: `int`, `long int`, `unsigned long int`; [...]
Thus, the type of the literal `2147483648` depends on the sizes of `int`, `long`, and `unsigned long`, respectively. Let's assume `int` is 32 bits, as it is on many platforms (and is likely the case on both of your platforms).

On a 32-bit platform, it's common for `long` to be 32 bits as well. Since 2147483648 does not fit in a 32-bit `int` or `long`, the type of `2147483648` would be `unsigned long`.
On a 64-bit platform, it's common for `long` to be 64 bits (though some platforms, such as 64-bit Windows with MSVC, still use a 32-bit `long`). There, 2147483648 fits in a `long`, so the type of `2147483648` would be `long`.

This leads to the discrepancy you see: in one case you're negating an `unsigned long`, and in the other you're negating a `long`.
On a 32-bit platform, `-2147483648` evaluates to `2147483648`: unary minus applied to the `unsigned long` value 2147483648 wraps modulo 2^32, and 2^31 is its own negation. Thus the resulting comparison is `2147483648 < 2147483647`, which evaluates to `0`.
On a 64-bit platform, `-2147483648` evaluates to `-2147483648` (using the `long` type). Thus the resulting comparison is `-2147483648 < 2147483647`, which evaluates to `1`.