
~ Unary Operator and Bitwise Tests Give Negative Results

I was studying bitwise operators, and they make sense until the unary ~ (one's complement) operator is mixed in. Can anyone explain to me how this works?

For example, these make sense however the rest of the computations aside from these do not:

1&~0 = 1   (~0 is 1 -> 1&1 = 1)
~0^~0 = 0  (~0 is 1 -> 1^1 = 0)
~1^0 = 1   (~1 is 0 -> 0^1 = 1)
~0&1 = 1   (~0 is 1 -> 1&1 = 1)
~0^~1 = 1  (~0 is 1, ~1 is 0 -> 1^0 = 1)
~1^~1 = 0  (~1 is 0 -> 0^0)

The rest of the results produced are negative(or a very large number if unsigned) or contradict the logic I am aware of. For example:

0&~1 = 0   (~1 = 0, therefore 0&0 should equal 0, but they equal 1)
~0&~1 = -2
~1|~0 = -1

etc. Anywhere you can point me to learn about this?

They actually do make sense when you expand them out a little more. A few things to be aware of though:

  1. Bitwise AND yields a 1 only when both bits involved are 1. Otherwise, it yields 0. 1 & 1 = 1, 0 & anything = 0.

  2. Bitwise OR yields a 1 when any of the bits in that position are a 1, and 0 only if all bits in that position are 0. 1 | 0 = 1, 1 | 1 = 1, 0 | 0 = 0.

  3. Signed numbers are generally represented as two's complement (though a processor does not have to do it that way). Remember, with two's complement: when the highest bit is 1, you invert the bits and add 1 to get the magnitude.

Assuming a 32-bit integer, you get these results:

 0 & ~1 = 0 & 0xFFFFFFFE = 0
~0 & ~1 = 0xFFFFFFFF & 0xFFFFFFFE = 0xFFFFFFFE (0x00000001 + 1) = -2
~1 | ~0 = 0xFFFFFFFE | 0xFFFFFFFF = 0xFFFFFFFF (0x00000000 + 1) = -1

~1 = 0 - No, it's not. It's equal to -2 . Let's take eight-bit two's complement as an example. The decimal number 1 has the representation 0000 0001 , so ~1 will be 1111 1110 , which is the two's complement representation of -2 .

0&~1 = 0 ( ~1 = 0, therefore 0&0 should equal 0, but they equal 1 )

~1 equals -2 . If you flip all the bits of a two's complement number, you multiply it by -1 and subtract 1 from the result. Regardless of that, 0 has all bits clear, so the result of & is going to be 0 no matter what the other operand is.

~0&~1 = -2

~0 has all bits set so ~0&~1 is just ~1 . Which is -2 .

~1|~0 = -1

~0 has all bits set, so the result of the | is ~0 (= -1 ) no matter what it is OR'd with.

For simplicity, expanding to just 8 bits:

1 = 0000 0001

You are assuming:

1 = 1111 1111 // which is wrong

1111 1111 would be the maximum possible unsigned value.

If you want every bit set to 1, you need to use:

(-1)

or

~0

Also be careful with the types you use: you may want a 64-bit operation, but the operand is only promoted to 32 bits. You need to cast (or use a suffix) in such cases.

For example:

uint64_t a = 1 << 63;

is broken, because 1 is a 32-bit int here, and shifting it by 63 (more than its width) is undefined behavior; in practice you often end up with 0 or garbage.

So to correct that,

uint64_t a = 1ULL << 63;

or

uint64_t a = (uint64_t)1 << 63;
