
Converting a Binary number to 4 BCD digits - how does division work?

I am studying assembly for the Motorola 68000 CPU, using the book
68000 Assembly Language Programming, Second Edition (Leventhal, Hawkins, Kane, Cramer)
and the EASy68k simulator.

I have a few questions about the conversion of a binary number to a BCD (binary coded decimal).
The original problem in the book says: "Convert the contents of the variable NUMBER at memory location 6000 to four BCD digits in the variable STRING at location 6002 (most significant digit in 6002). The 16-bit number in NUMBER is unsigned and less than 10,000."

Example:

 input:  NUMBER - (6000) = 1C52
 output: STRING - (6002) = 07
                  (6003) = 02
                  (6004) = 05
                  (6005) = 00

Because 1C52 (hex) = 7250 (decimal) (The MC68k is a big-endian CPU)

Since the MC68k is a nice CISC CPU with a rich arsenal of instructions, it wasn't hard to code a solution:

DATA:       EQU $6000
PROGRAM:    EQU $4000

        ORG DATA

NUMBER:     DS.W 1
STRING:     DS.L 1

        ORG PROGRAM

MAIN:       
        CLR.L D0                ; Clear D0               
        MOVE.W NUMBER, D0       ; Store our number (2 bytes) to D0      
        MOVEA.L #STRING+4, A0   ; We'll go backwards -> so we store the address of the last byte + 1 of the variable STRING to A0 (+1 because we use pre-decrement addressing)
        MOVEQ #1, D2            ; A counter which will cause (DBRA) two iterations of the LOOP part of the program        
        MOVE.L #$FFFF,D3        ; D3 is a mask used to clear the 2 most significant bytes of D0 in each iteration of LOOP       

LOOP:   DIVU.W #10, D0          ; Divide D0 by 10: the quotient is saved in the lower word (16 bits) of D0, and the remainder (our BCD digit) in the upper word of D0
        MOVE.L D0, D1           ; Make a copy of D0         
        SWAP D1                 ; Swap the lower 16 bits of D1 with the upper 16 bits of D1            
        MOVE.B D1,-(A0)         ; Now the low byte of D1 contains the remainder (our BCD digit), which we save to address -(A0)          
        AND.L D3, D0            ; Use the mask to clear the second half (16 bits) of D0 so that the next DIVU instruction doesn't by mistake take the remainder as a part of the number which needs to be divided      
        DBRA D2, LOOP           ; Decrement our counter D2 by 1 and go back to LOOP if D2 is not equal to -1

        DIVU #10, D0            ; After this last division by 10, the most significant BCD digit (the quotient) is in the lower 16 bits of D0, and the second most significant digit (the remainder) is in the upper 16 bits of D0
        MOVE.B D0, -2(A0)       ; Save the most significant BCD digit       
        SWAP D0                 ; swap lower and higher 16 bits of D0         
        MOVE.B D0, -(A0)        ; Save second most significant BCD digit

        MOVE.B #9, D0           ; EASy68k task 9: halt the simulator
        TRAP #15

        END MAIN

DIVU = DIVision Unsigned
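For reference, the repeated-division loop above can be sketched in C (my own sketch, not from the book): divide by 10, and each remainder is the next BCD digit, filled in from the least significant end.

```c
#include <stdint.h>

/* Convert an unsigned binary value (< 10000, as in the book exercise)
 * into four BCD digits, most significant digit in digits[0]. */
void bin_to_bcd(uint16_t value, uint8_t digits[4])
{
    for (int i = 3; i >= 0; i--) {  /* fill from the least significant end */
        digits[i] = value % 10;     /* remainder = current BCD digit */
        value /= 10;                /* quotient feeds the next iteration */
    }
}
```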

I am happy with that solution, but I would like to learn in more detail how the MC68k performs this division (calculating the quotient and the remainder). Let me explain. If we want to do the opposite, i.e. convert a BCD number into a binary number, we can use the following algorithm. Take the sequence '7', '2', '5', '0' of BCD digits, where '7' is the most significant digit and '0' the least significant. To build a number out of those digits, we can proceed like this (pseudo-code):

number = 0;
number = number * 10 + 7   = 0 * 10 + 7 = 0 + 7 = 7 
number = number * 10 + 2   = 7 * 10 + 2 = 70 + 2 = 72 
number = number * 10 + 5   = 72 * 10 + 5 = 720 + 5 = 725  
number = number * 10 + 0   = 725 * 10 + 0 = 7250 + 0 = 7250  
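The multiply-accumulate pseudo-code above translates directly into C. This is my own sketch (the function name is mine): start at zero and fold in one digit per step, most significant first.

```c
#include <stdint.h>

/* Combine n BCD digits (most significant first) into a binary number. */
uint16_t bcd_to_bin(const uint8_t digits[], int n)
{
    uint16_t number = 0;
    for (int i = 0; i < n; i++)
        number = number * 10 + digits[i];  /* shift left one decimal place, add digit */
    return number;
}
```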

But of course, we need to adjust the multiplication for numbers written in base 2. The MC68k offers, more or less, two approaches:

  1. A multiplication mnemonic like "MULU #10, D1" which will simply yield a number multiplied by 10
  2. Or a set consisting of simple instructions:

     ADD.W D1, D1     ; D1 = D1 + D1 = 2x
     MOVE.W D1, D3    ; copy 2x into D3
     LSL.W #2, D3     ; D3 = 8x = (2x) * 4
     ADD.W D3, D1     ; D1 = 10x = 2x + 8x

which yields the same result (original number x -> 10x). The ADD instruction works like this:

ADD D1, D2  = pseudo-code =  D2 = D2 + D1

And LSL is the logical shift left instruction. Logically shifting a number left by 1 bit multiplies it by 2, and shifting it left by 2 bits multiplies it by 4.
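That instruction sequence computes 10x as 2x + 8x. Here is the same trick as a C sketch (the function name `times_ten` is my own):

```c
#include <stdint.h>

/* Multiply by 10 using only an add and a shift: 10x = 2x + 8x. */
uint16_t times_ten(uint16_t x)
{
    uint16_t two_x = x + x;         /* ADD.W D1,D1   -> 2x */
    uint16_t eight_x = two_x << 2;  /* LSL.W #2,D3   -> 8x */
    return two_x + eight_x;         /* ADD.W D3,D1   -> 10x */
}
```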

So, for BCD to Binary conversion, I can use a multiplication instruction like MULU in my algorithm, while for Binary to BCD, I can use a division instruction like DIVU in my algorithm.

And also, for BCD to Binary, I can use ADD and Logical shift instructions to simulate the multiplication, but what would be the analogous way for Binary to BCD? How can I simulate division and calculate the quotient/remainder by using simpler instructions than DIV (like subtraction, addition, logical shifts, ...) ?
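One classic technique for this (not from the book; it is the standard shift-and-subtract, or "restoring", division algorithm) builds the quotient one bit at a time using only shifts, comparisons, and subtraction, exactly like long division in base 2. A C sketch, with hypothetical names:

```c
#include <stdint.h>

/* Shift-and-subtract division: returns dividend / divisor and writes the
 * remainder through *rem. divisor must be nonzero. */
uint16_t divu_by_shifts(uint16_t dividend, uint16_t divisor, uint16_t *rem)
{
    uint16_t quotient = 0;
    uint32_t remainder = 0;         /* 32 bits so the shift below cannot overflow */
    for (int i = 15; i >= 0; i--) {
        /* Bring down the next bit of the dividend, high bit first. */
        remainder = (remainder << 1) | ((dividend >> i) & 1);
        if (remainder >= divisor) { /* does the divisor "fit"? */
            remainder -= divisor;            /* yes: subtract it ... */
            quotient |= (uint16_t)1 << i;    /* ... and set this quotient bit */
        }
    }
    *rem = (uint16_t)remainder;
    return quotient;
}
```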

I also found an interesting algorithm for Binary to BCD conversion here:
http://www.eng.utah.edu/~nmcdonal/Tutorials/BCDTutorial/BCDConversion.html

But I can't figure out why this works. Why do we need to add 3 ( = 11 binary) to every column (= 4 bits) which contains a number larger than or equal to 5?

I thought about coding a solution which uses this algorithm but:
- after 3 shifts, I would have to check if the ones column contains a number larger than 4 after every shift
- after 7 shifts, I would have to check if the ones and tens columns contain a number larger than 4 after every shift
- after 11 shifts, I would have to check if the ones, tens and hundreds columns contain a number larger than 4 after every shift
- after 15 shifts, I would have to check if the ones, tens, hundreds and thousands columns contain a number larger than 4 after every shift

which seems like the CPU would have a lot more to do...

On the 'add three' thing: once a column holds a value of 5 or more, the next shift will bring that column's value to 10 or more.

Now consider this: each bit doubles its weight when left-shifted. But when a bit crosses from, say, the ones column into the tens column, it 'loses' part of its weight: in pure binary that position would be worth 16, but as the low bit of the tens column it is only worth 10. So its weight isn't 16 anymore, but 10 - a loss of 6.

How do we compensate for this? Simple: we add three (3), which is half of six (6). On the next shift we lose six (6) in weight as explained above, but at the same time we regain it by left-shifting (left shift = multiplication by two) the three we added previously. The weight is balanced again.
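This whole procedure is known as the "double dabble" algorithm. Here is a C sketch of it (my own illustration, not code from the linked tutorial): before each shift, add 3 to every 4-bit BCD column holding 5 or more, then shift the next binary bit in.

```c
#include <stdint.h>

/* Shift-and-add-3 ("double dabble") binary-to-BCD conversion.
 * Returns four BCD digits packed one per nibble (e.g. 7250 -> 0x7250).
 * Assumes value < 10000 so four columns are enough. */
uint16_t double_dabble(uint16_t value)
{
    uint32_t bcd = 0;
    for (int i = 15; i >= 0; i--) {
        /* Adjust every column that would exceed 9 after the shift. */
        for (int col = 0; col < 4; col++) {
            if (((bcd >> (4 * col)) & 0xF) >= 5)
                bcd += (uint32_t)3 << (4 * col);
        }
        /* Shift the next binary bit (high bit first) into the BCD register. */
        bcd = (bcd << 1) | ((value >> i) & 1);
    }
    return (uint16_t)bcd;
}
```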

Better explainer here

Hope it helps. By the way, I'm studying M68k in university too, and your code has been a nice reading, thanks.

