
What is the time complexity (Big O) of this specific function?

What is the time complexity of this function (f1)?

As far as I can see, the first pass of the outer loop (i=0) runs the inner loop n/4 times, the second pass (i=3) runs it (n-3)/4 times, and so on, so the total is: n/4 + (n-3)/4 + (n-6)/4 + (n-9)/4 + ...

And I stop here; how do I continue?

int f1(int n){
  int s=0;
  for(int i=0; i<n; i+=3)
    for (int j=n; j>i; j-=4)
      s+=j+i;
  return s;
}

The important thing about Big O notation is that it eliminates 'constants'. The objective is to determine the trend as the input size grows, without concern for specific numbers.

Think of it as determining the curve on a graph where you don't know the number ranges of the x and y axes.

So in your code, even though you skip most of the values in the range of n on each iteration of each loop, this is done at a constant rate. So regardless of how many values you actually skip, the cost still scales relative to n^2.

It wouldn't matter if you calculated any of the following:

1/4 * n^2
0.0000001 * n^2
(1/4 * n)^2
(0.0000001 * n)^2
1000000 + n^2
n^2 + 10000000 * n

In Big O, these are all equivalent to O(n^2). The point is that once n gets big enough (whatever that may be), all the lower-order terms and constant factors become irrelevant in the 'big picture'.

(It's worth emphasising that this is why, on small inputs, you should be wary of relying too heavily on Big O. That's when constant overheads can still have a big impact.)

Key observation: the inner loop executes (n-i)/4 times in step i, hence i/4 times in step n-i.

Now sum all these quantities for i = 3k, 3(k-1), 3(k-2), ..., 9, 6, 3, 0, where 3k is the largest multiple of 3 below n (i.e., 3k <= n < 3(k+1)):

3k/4 + 3(k-1)/4 + ... + 6/4 + 3/4 + 0/4 = 3/4(k + (k-1) + ... + 2 + 1)
                                        = 3/4(k(k+1))/2
                                        = O(k^2)
                                        = O(n^2)

because k <= n/3 <= k+1 and therefore k^2 <= n^2/9 <= (k+1)^2 <= 4k^2.

In theory it's "O(n*n)", but...

What if the compiler felt like optimising it into this:

int f1(int n){
  int s=0;
  for(int i=0; i<n; i+=3)
    s += table[i];
  return s;
}

Or even this:

int f1(int n){
  if(n <= 0) return 0;
  return table[n];
}

Then it could also be "O(n)" or "O(1)".

Note that on the surface these kinds of optimisations seem impractical (due to worst-case memory costs); but with a sufficiently advanced compiler (e.g. using "whole program optimisation" to examine all callers and determine that n is always within a certain range) it's not inconceivable. In a similar way, it's not impossible for all of the callers to be using a constant (e.g. where a sufficiently advanced compiler can replace things like x = f1(123); with x = constant_calculated_at_compile_time).

In other words: in practice, the time complexity of the original function depends on how the function is used and on how good or bad the compiler is.
