
Exponentiation in Data Structures and Algorithm Analysis in C

When addressing exponentiation in Chapter 2, the author writes:

"The number of multiplications required is clearly at most 2 log n (the base is 2), because at most two multiplications (if n is odd) are required to halve the problem. Again, a recurrence formula can be written and solved."

The code is as follows:

int pow( int x, unsigned int n )
{
    if( n == 0 )
        return 1;
    if( n == 1 )
        return x;
    if( n % 2 == 0 )    /* even( n ) in the book's listing */
        return pow( x * x, n / 2 );
    else
        return pow( x * x, n / 2 ) * x;
}

Q: According to the author,

2^16 needs at most 8 multiplications
2^15 needs at most 7
2^14 needs at most 7
2^13 needs at most 7
2^12 needs at most 7

In fact, when I run the code I count:

2^16 takes 4
2^15 takes 6
2^14 takes 5
2^13 takes 5
2^12 takes 4

So, is something wrong somewhere?

Finding x^n will take at most 2 log n multiplications, since n can be odd at every recursive call. For example:

pow(2, 15) --> pow(2 * 2, 7) * 2
           --> pow(4 * 4, 3) * 4 * 2
           --> pow(16 * 16, 1) * 16 * 4 * 2

This is six multiplications (at most two multiplications per function call); 2 * log2(15) ≈ 7.8, so the upper bound is satisfied. The best case is n a power of 2, which takes only log2 n multiplications.

To calculate the complexity, consider that this algorithm reduces n by half k times, until n is between 1 and 2; that is, we have:

1 ≤ n / 2^k < 2

So:

2^k ≤ n < 2^(k+1)
⇒ k ≤ log n < k + 1
⇒ (log n) - 1 < k ≤ log n

Thus, the algorithm takes log n steps, and since the worst case is two multiplications per step, at most 2 log n multiplications are required.

There's no contradiction or mistake -- the book gives an upper bound, and you're looking at the exact number of multiplications.

The exact number of multiplications (for n>0) is floor(log_2(n)) + bitcount(n) - 1. That's just by inspecting the code -- the even cases (which perform one multiplication) correspond to 0 bits in the input, the odd cases (which perform an extra multiplication) correspond to 1 bits in the input, and the code stops when it reaches the highest bit.

The book says that 2*log_2(n) is an upper bound for the number of multiplications. That's consistent with the exact formula: floor(log_2(n)) <= log_2(n) and bitcount(n) - 1 <= log_2(n). So floor(log_2(n)) + bitcount(n) - 1 <= 2*log_2(n).

From the exact formula, you can see that the lower the bitcount of n, the looser the upper bound is. The bound is loosest when n is a power of 2: then exactly log_2(n) multiplications are performed, and the upper bound is off by a factor of 2. It is tightest when n is one less than a power of 2: then the upper bound is off by less than 2. That matches your empirical table of results.
