
Difference between '(unsigned)1' and '(unsigned)~0'

What is the difference between (unsigned)~0 and (unsigned)1 ? Why does (unsigned)~0 print as -1 while (unsigned)1 prints as 1? Does it have something to do with the way unsigned numbers are stored in memory? Why does an unsigned number give a signed result? It didn't give any overflow error either. I am using the GCC compiler:

#include <stdio.h>

int main(void)
{
    unsigned int x = (unsigned)~0;
    unsigned int y = (unsigned)1;
    printf("%d\n", x); // prints -1
    printf("%d\n", y); // prints 1
    return 0;
}

Because %d is the format specifier for a signed int. Use %u instead, which prints 4294967295 on my machine.
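Here is the question's program with that fix applied (a minimal sketch, assuming a 32-bit unsigned int as on most desktop machines):

#include <stdio.h>

int main(void)
{
    unsigned int x = (unsigned)~0;
    unsigned int y = (unsigned)1;
    printf("%u\n", x); // prints 4294967295 with 32-bit unsigned int
    printf("%u\n", y); // prints 1
    return 0;
}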

As others mentioned, if you interpret the highest unsigned value as signed, you get -1; see the Wikipedia entry for two's complement.

Your system uses the two's complement representation of negative numbers. In this representation, a binary number composed of all ones represents the largest negative number, -1 .

Since inverting all the bits of zero gives you a number composed of all ones, you get -1 when you reinterpret that number as signed by printing it with %d , which expects a signed number, not an unsigned one.
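You can make the reinterpretation explicit without relying on a mismatched format specifier; a minimal sketch, assuming two's complement (int and unsigned int are guaranteed to have the same width):

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int u = ~0u;     // all bits set
    int s;
    memcpy(&s, &u, sizeof s); // reinterpret the same bits as signed
    printf("%u\n", u);        // 4294967295 with 32-bit int
    printf("%d\n", s);        // -1 on a two's complement machine
    return 0;
}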

First, in your use of printf you are telling it to print the number as signed ("%d") instead of unsigned ("%u").

Second, you are right in that it has "something to do with the way numbers are stored in memory". An int (signed or unsigned) is not a single bit on your computer, but a collection of k bits. The exact value of k depends on the specifics of your computer architecture, but most likely you have k=32.
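If you want to check k on your own machine, a quick sketch:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    // CHAR_BIT is the number of bits per byte, usually 8
    printf("k = %zu\n", sizeof(int) * CHAR_BIT); // typically prints k = 32
    return 0;
}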

For the sake of succinctness, let's assume your ints are 8 bits long, so k=8 (this is almost certainly not the case unless you are working on a very limited embedded system). In that case (int)0 is actually 00000000, and (int)~0 (which inverts all the bits) is 11111111.

Finally, in two's complement (which is the most common binary representation of signed numbers), 11111111 is actually -1. See http://en.wikipedia.org/wiki/Two%27s_complement for a description of two's complement.

If you change your printf to use "%u", it will print a positive integer equal to 2^k - 1, where k is the number of bits in an integer (so it will probably print 4294967295).
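You can reproduce the 8-bit case with the fixed-width types from <stdint.h> (a sketch; the conversion to int8_t is implementation-defined, but gives -1 on two's complement machines):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t u = (uint8_t)~0; // 11111111, i.e. 255 = 2^8 - 1
    int8_t  s = (int8_t)u;   // same bits read as signed: -1 on two's complement
    printf("%u\n", (unsigned)u); // prints 255
    printf("%d\n", (int)s);      // prints -1
    return 0;
}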

printf() only knows what type of variable you passed it from the format specifiers in your format string. So what's happening here is that you're printing x and y as signed integers, because you used %d in your format string. Try %u instead, and you'll get a result more in line with what you're probably expecting.
