
How does an 8-bit processor interpret the 2 bytes of a 16-bit number as a single piece of information?

Assume the 16-bit number is 256.

So,

byte 1 = some binary number

byte 2 = some binary number

But byte 1 also represents an 8-bit number on its own (which could be an independent decimal value), and so does byte 2.

So how does the processor know that bytes 1 and 2 represent a single number, 256, and not two separate numbers?

The processor would need another, longer integer type for that. You could implement a software equivalent, but to the processor itself those two bytes would still be individual values.
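As a rough illustration of that software equivalent (a minimal C sketch with made-up variable names, not any particular platform's code), the pairing of the two bytes exists only because the program combines them explicitly:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Two individual bytes; on their own, each is just an
       independent 8-bit value to the processor. */
    uint8_t high = 0x01;  /* upper byte of 256 (0x0100) */
    uint8_t low  = 0x00;  /* lower byte of 256 */

    /* The "16-bit number" appears only when we combine them:
       shift the high byte up 8 bits and OR in the low byte. */
    uint16_t value = ((uint16_t)high << 8) | low;

    printf("%u\n", (unsigned)value);  /* prints 256 */
    return 0;
}
```

On a real 8-bit CPU, which has no 16-bit registers, a compiler would lower this into multi-byte operations that keep the two bytes in adjacent memory locations or a register pair; the grouping is a convention of the program, not something the hardware knows about.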

The processor could also have a special integer representation and machine instructions that handle these numbers. For example, most modern machines use two's-complement integers to represent negative numbers. In two's complement, the most significant bit distinguishes negative numbers, so a two's-complement 8-bit integer has a range of -128 (1000 0000) to 127 (0111 1111).
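To see how the same bit pattern reads differently depending on the interpretation applied to it, here is a small C sketch (using the type, rather than a machine instruction, to pick the interpretation; behaves as shown on typical two's-complement hardware):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t bits = 0x80;  /* bit pattern 1000 0000 */

    /* Same byte, two interpretations: */
    printf("%u\n", (unsigned)bits);   /* unsigned:          128 */
    printf("%d\n", (int8_t)bits);     /* two's complement: -128 */
    return 0;
}
```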

You could just as easily make the most significant bit mean something else. For example, when the MSB is 0 we have integers from 0 (0000 0000) to 127 (0111 1111); when the MSB is 1 we have integers from 256 (1000 0000) to 256 + 127 (1111 1111). Whether this is efficient or good architecture is another story.
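A minimal decoder for that made-up encoding (purely illustrative, not a real format) might look like this in C:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical encoding from above: MSB 0 -> value is the low
   7 bits (0..127); MSB 1 -> value is 256 plus the low 7 bits
   (256..383). */
static unsigned decode(uint8_t b) {
    if (b & 0x80)
        return 256u + (b & 0x7F);
    return b;
}

int main(void) {
    printf("%u\n", decode(0x00));  /* 0   (0000 0000) */
    printf("%u\n", decode(0x7F));  /* 127 (0111 1111) */
    printf("%u\n", decode(0x80));  /* 256 (1000 0000) */
    printf("%u\n", decode(0xFF));  /* 383 (1111 1111) */
    return 0;
}
```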
