Internal working of greater than / less than

I was just wondering how the result of a greater than / less than comparison is computed and returned to high-level languages.

I'm looking for the hardware gate model here.

Let's use a uniform example to explain, say 5 > 3.

It is usually implemented via subtraction with carry-detection.

From a gating perspective, subtracting binary numbers is performed by passing matched pairs of bits from each operand through a subtractor:

            +-----+
carry_in -->|     |
            |     |--> a_minus_b
       a -->| SUB |
            |     |--> carry_out
       b -->|     |
            +-----+

a_minus_b = carry_in ⊕ a ⊕ b
carry_out = (carry_in ∧ b) ∨ (¬a ∧ (carry_in ∨ b))
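
To make these equations concrete, here is a minimal C sketch (sub1 is a name of my own choosing, not any standard API) that transcribes one subtractor stage directly and prints its truth table:

#include <stdio.h>

/* One subtractor stage, transcribing the two equations above.
   Inputs and outputs are single bits stored in ints. */
static void sub1(int a, int b, int carry_in, int *a_minus_b, int *carry_out) {
    *a_minus_b = carry_in ^ a ^ b;                       /* carry_in XOR a XOR b */
    *carry_out = (carry_in & b) | (!a & (carry_in | b)); /* borrow out of this stage */
}

int main(void) {
    printf("cin a b | diff cout\n");
    for (int cin = 0; cin <= 1; cin++)
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++) {
                int diff, cout;
                sub1(a, b, cin, &diff, &cout);
                printf("  %d  %d %d |   %d    %d\n", cin, a, b, diff, cout);
            }
    return 0;
}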

Bit 0 from arguments a and b is passed through the first subtractor, with a carry_in of 0. Bit 1 from each argument is passed through the second subtractor, with carry_in set to the carry_out of the bit-0 stage. This continues along the chain until the carry_out of the most significant stage sets the CPU's carry flag, which holds 1 if a < b (treating the operands as unsigned), otherwise 0.

Additionally, all of the a_minus_b bits are ORed together and the result is negated, with that value going into the CPU's zero flag, which holds 1 exactly when a = b.
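
Chaining the stages gives the whole comparison. The sketch below is illustrative only: compare, the 8-bit WIDTH, and the flag variables are my own assumptions, not any real CPU's datapath. It repeats sub1 so it compiles on its own, ripples the borrow through every bit position, derives both flags, and runs the 5 vs 3 example:

#include <stdio.h>

#define WIDTH 8  /* number of stages; 8 bits is an arbitrary illustrative width */

/* Same one-bit stage as in the previous sketch. */
static void sub1(int a, int b, int carry_in, int *a_minus_b, int *carry_out) {
    *a_minus_b = carry_in ^ a ^ b;
    *carry_out = (carry_in & b) | (!a & (carry_in | b));
}

/* Ripple the borrow from bit 0 upward and derive the two flags. */
static void compare(unsigned a, unsigned b, int *carry_flag, int *zero_flag) {
    int carry = 0;    /* carry_in of the bit-0 stage is 0 */
    int any_diff = 0; /* OR of every a_minus_b bit */
    for (int i = 0; i < WIDTH; i++) {
        int diff;
        sub1((a >> i) & 1, (b >> i) & 1, carry, &diff, &carry);
        any_diff |= diff;
    }
    *carry_flag = carry;     /* 1 iff a < b, operands taken as unsigned */
    *zero_flag = !any_diff;  /* 1 iff a == b */
}

int main(void) {
    int cf, zf;
    compare(5, 3, &cf, &zf);
    printf("5 vs 3: carry=%d zero=%d\n", cf, zf);  /* prints carry=0 zero=0 */
    return 0;
}

Running it prints carry=0 and zero=0: 5 is neither below nor equal to 3, so 5 > 3.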

These flags can be tested by conditional machine instructions, which compilers generate when you write if (a < b) { ... }. On x86, for example, the compiler emits a cmp followed by jb (unsigned below, which tests the carry flag) or jl (signed less).

Tracing 5 > 3 through the chain by hand is left as an exercise for the reader; the demo in the sketch above will confirm the answer.
