
Generate subsets of a given binary string, how does this code work?

I encountered this code snippet while looking for answers about subset generation.

start

    N            // the mask, an integer

    X = N
    while true
        print X
        if X == 0
            break
        X = (X - 1) & N
    end while

end

Could anyone explain why this code works? Thanks in advance.
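For reference, a direct Python translation of the pseudocode (a sketch, with the variable names kept from the snippet; Python integers are arbitrary precision, so there is no wrap-around to worry about):

```python
def subsets(n):
    """Yield every value x with x & n == x, from n down to 0."""
    x = n
    while True:
        yield x
        if x == 0:
            break
        x = (x - 1) & n

# All subsets of the mask 0b101, in decreasing order:
print(list(subsets(0b101)))  # → [5, 4, 1, 0]
```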


Thanks for the answers. My proof idea is the following:

If we think of N as a mask, the bits that matter are the positions of the 1s in N; call this the masked space. Since X is always a subset of N, the update X = (X-1) & N is simply counting down in the masked space (only the masked bits ever flip). Therefore there is no other subset of N strictly between X and (X-1) & N.

You can prove it works with the following argument:

  1. (X-1) is always smaller than X (ignoring the wrap-around at 0).
  2. (X-1) & N <= (X-1).
  3. For any k, the value k & N is a "subset" of N (it has no bits set outside N).
  4. No value strictly between k & N and k can be a "subset" of N, so nothing is skipped.
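A quick sanity check of points 3 and 4 (a sketch): enumerate with the trick and confirm we get exactly the 2^popcount(N) subsets of N, each satisfying x & N == x.

```python
def subsets(n):
    """Enumerate all subsets of the bit mask n via x = (x - 1) & n."""
    x = n
    while True:
        yield x
        if x == 0:
            break
        x = (x - 1) & n

n = 0b10110
seen = list(subsets(n))
assert all(x & n == x for x in seen)              # point 3: every value is a subset of n
assert len(set(seen)) == 2 ** bin(n).count("1")   # all 2^popcount(n) subsets, none missed or repeated
print(seen)  # → [22, 20, 18, 16, 6, 4, 2, 0]
```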

To understand what's happening, you first need to understand subtraction, which is actually the most complicated operation here.

Another way to think about subtracting 1 is this: start at the least significant bit and scan toward the most significant bit until you find the first 1; flip every bit up to and including that 1.

Combined with the & N, one step of the update removes the lowest set bit of X, and then puts back every bit of N that lies to the right of that bit.
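One step of the update on a concrete value (a sketch): subtracting 1 flips everything up to and including the lowest set bit, and the & with N restores only those flipped positions that are set in N.

```python
n = 0b10110
x = 0b10000                    # current subset of n
step = (x - 1) & n

print(format(x - 1, "05b"))    # 01111: subtract 1 flips through the lowest set bit
print(format(step, "05b"))     # 00110: & with n keeps only n's bits below it
```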

For example, starting from N = 10110:

10110 start here
10101 subtract 1
10100 and with N
10011 subtract 1
10010 and with N
10001 subtract 1
10000 and with N
01111 subtract 1
00110 and with N
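The trace above can be reproduced with a short loop (a sketch; it continues past the point where the hand-written trace stops, all the way down to 00000):

```python
n = 0b10110
x = n
trace = [format(x, "05b") + " start here"]
while x:
    trace.append(format(x - 1, "05b") + " subtract 1")
    x = (x - 1) & n
    trace.append(format(x, "05b") + " and with N")
print("\n".join(trace))
```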
