Calculating time complexity of a recursive algorithm
I'm trying to calculate the time complexity of a recursive algorithm and I think I've almost got it. Here's the pseudocode I've been looking at:
long pow( long x, int n ) {
    if (n == 0)
        return 1;
    if (n == 1)
        return x;
    if (isEven(n))
        return pow(x, n / 2) * pow(x, n / 2);
    else
        return x * pow(x * x, n / 2);
}
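For reference, here is a runnable Java version of that pseudocode. The `isEven` helper is a trivial parity check here, which is an assumption; the question only states that it runs in constant time.

```java
// Runnable version of the pseudocode from the question.
public class Pow {
    // Assumed implementation: the question only says isEven is O(1).
    static boolean isEven(int n) {
        return n % 2 == 0;
    }

    static long pow(long x, int n) {
        if (n == 0)
            return 1;
        if (n == 1)
            return x;
        if (isEven(n))
            return pow(x, n / 2) * pow(x, n / 2);  // two recursive calls
        else
            return x * pow(x * x, n / 2);          // one recursive call
    }

    public static void main(String[] args) {
        System.out.println(pow(2, 10));  // 1024
        System.out.println(pow(3, 5));   // 243
    }
}
```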
isEven merely determines whether the integer passed to it is even, and for the purposes of this example, operates in constant time.
So, if n = 0 or n = 1, it's a constant-time operation, like this: f(n) = C0. However, when n > 1, it should operate like so: f(n) = f(n-1) + f(n-1) + C1 when n is even and f(n) = f(n-1) + 1 when n is odd, correct? Or should it be: f(n) = f(n/2) + f(n/2) + C1 when n is even and f(n) = f(n/2) + 1 when n is odd?
I've been looking at a lot of examples. Here is one I've found very helpful. My problem stems from there being two recursive calls when n is even. I'm not fully sure what to do here. If anyone could point me in the right direction I'd really appreciate it.
Take a look at the Master Theorem. You can treat this as a "divide and conquer" algorithm.
The end result is that with the two recursive calls in place, you end up with a worst-case O(n) runtime. E.g. pow(x, 4) calls pow(x, 2) twice, and pow(x, 1) four times; in general a power of two will result in 2*n - 1 calls.
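That count is easy to check empirically. Here is a sketch that wraps the function with a call counter (the static counter and class name are just for illustration):

```java
// Instrumented copy of the question's pow, counting total calls
// to verify the "2*n - 1 calls when n is a power of two" claim.
public class PowCount {
    static int calls = 0;

    static long pow(long x, int n) {
        calls++;
        if (n == 0)
            return 1;
        if (n == 1)
            return x;
        if (n % 2 == 0)
            return pow(x, n / 2) * pow(x, n / 2);
        else
            return x * pow(x * x, n / 2);
    }

    public static void main(String[] args) {
        for (int n : new int[] {4, 8, 16}) {
            calls = 0;
            pow(2, n);
            // Prints 7, 15, 31 respectively: 2*n - 1 calls each time.
            System.out.println("n=" + n + ": " + calls + " calls");
        }
    }
}
```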
Also note that by just calling pow(x, n/2) once and squaring the result in that branch, the algorithm becomes O(log n).
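A sketch of that one-call variant, storing the half-power in a local and squaring it:

```java
// Same result as the original, but the even branch makes a single
// recursive call, so the recurrence becomes f(n) = f(n/2) + c.
public class FastPow {
    static long pow(long x, int n) {
        if (n == 0)
            return 1;
        if (n == 1)
            return x;
        if (n % 2 == 0) {
            long half = pow(x, n / 2);  // one recursive call instead of two
            return half * half;         // square the result
        } else {
            return x * pow(x * x, n / 2);
        }
    }

    public static void main(String[] args) {
        System.out.println(pow(2, 10));  // 1024, with O(log n) calls
    }
}
```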
Let's define f(m) so that it gives the number of operations for a problem of size m. The "problem" is of course exponentiation (pow), i.e. x^n or pow(x, n). The long pow(long x, int n) function does not need to do more or less work if I change x. So, the size of the exponentiation problem does not depend on x. It does, however, depend on n. Let's say 2^4 is of size 4 and 3^120 is of size 120. (Makes sense if you see that 2^4 = 2*2*2*2 and 3^120 = 3*3*3*...*3.) The problem size is thus equal to n, the second parameter. We could, if you want, say the problem size is 2*log(n), but that would be silly.
Now we have that f(m) is the number of operations to calculate pow(x, m) for any x, because pow(x, m) is exactly the problem of size m. So, if we have pow(x, y), then the number of operations is, by definition, f(y). For example, pow(3, 3*m/2) has f(3*m/2) operations.
Finally, let's count the operations:

long pow( long x, int n ) {
    if (n == 0)                       // 1
        return 1;
    if (n == 1)                       // 1
        return x;
    if (isEven(n))                    // 1
        return pow(x, n / 2) *        // that is f(n/2), y = n/2
               pow(x, n / 2);         // also f(n/2)
    else
        return x * pow(x * x, n / 2); // 1 + 1 + f(n/2)
}
Taking it together: f(n) = 2*f(n/2) + c1 (n even) and f(n) = f(n/2) + c2 (n odd). If you are only interested in the worst-case scenario, then note that the odd case is less work. Thus f(n) is bounded above by the even case: f(n) <= 2*f(n/2) + c.
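Unrolling that bound (a quick sketch, assuming n is a power of two so every division is exact) shows where the O(n) comes from:

```latex
\begin{align*}
f(n) &\le 2f(n/2) + c \\
     &\le 4f(n/4) + 2c + c \\
     &\le 2^k f\!\left(n/2^k\right) + (2^k - 1)\,c
\end{align*}
```

With k = log2(n), so that n/2^k = 1, this gives f(n) <= n*f(1) + (n-1)*c = O(n), matching the 2*n - 1 calls observed in the first answer.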