
Difference between Big-O and Little-O Notation

What is the difference between Big-O notation O(n) and Little-O notation o(n)?

f ∈ O(g) says, essentially:

For at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) <= kg(x) holds for all x > a.

Note that O(g) is the set of all functions for which this condition holds.
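Written out with quantifiers, this is just a restatement of the prose above (nothing added):

```latex
% f is in O(g): SOME positive constant k works beyond some threshold a
f \in O(g) \iff \exists\, k > 0 \;\; \exists\, a \;\; \forall\, x > a : \; 0 \le f(x) \le k\, g(x)
```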

f ∈ o(g) says, essentially:

For every choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) < kg(x) holds for all x > a.

Once again, note that o(g) is a set.
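The corresponding quantifier form — note the ∀ on k, and that the threshold a may depend on k:

```latex
% f is in o(g): EVERY positive constant k works beyond some threshold a (a may depend on k)
f \in o(g) \iff \forall\, k > 0 \;\; \exists\, a \;\; \forall\, x > a : \; 0 \le f(x) < k\, g(x)
```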

In Big-O, it is only necessary that you find a particular multiplier k for which the inequality holds beyond some minimum x.

In Little-o, it must be that there is a minimum x after which the inequality holds no matter how small you make k, as long as it is not negative or zero.

These both describe upper bounds, although somewhat counter-intuitively, Little-o is the stronger statement. There is a much larger gap between the growth rates of f and g if f ∈ o(g) than if f ∈ O(g).

One illustration of the disparity is this: f ∈ O(f) is true, but f ∈ o(f) is false. Therefore, Big-O can be read as "f ∈ O(g) means that f's asymptotic growth is no faster than g's", whereas "f ∈ o(g) means that f's asymptotic growth is strictly slower than g's". It's like <= versus <.

More specifically, if the value of g(x) is a constant multiple of the value of f(x), then f ∈ O(g) is true. This is why you can drop constants when working with big-O notation.

However, for f ∈ o(g) to be true, then g must include a higher power of x in its formula, and so the relative separation between f(x) and g(x) must actually get larger as x gets larger.

To use purely math examples (rather than referring to algorithms):

The following are true for Big-O, but would not be true if you used little-o:

  • x² ∈ O(x²)
  • x² ∈ O(x² + x)
  • x² ∈ O(200 * x²)

The following are true for little-o:

  • x² ∈ o(x³)
  • x² ∈ o(x!)
  • ln(x) ∈ o(x)

Note that if f ∈ o(g), this implies f ∈ O(g). E.g. x² ∈ o(x³), so it is also true that x² ∈ O(x³) (again, think of O as <= and o as <).
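A quick numeric sanity check of the examples above (a sketch, not a proof; the helper `ratio` below is purely for illustration): under o(g) the ratio f(x)/g(x) keeps shrinking, while under O(g) it merely stays bounded.

```python
# Illustration only: compare f(x)/g(x) at growing x.
def ratio(f, g, xs):
    return [f(x) / g(x) for x in xs]

xs = [10, 100, 1_000, 10_000]

# x^2 is O(200 * x^2): the ratio is a flat 1/200 -- bounded, but it never shrinks,
# so x^2 is NOT o(200 * x^2).
print(ratio(lambda x: x**2, lambda x: 200 * x**2, xs))  # [0.005, 0.005, 0.005, 0.005]

# x^2 is o(x^3): the ratio 1/x tends to 0, so it eventually drops below ANY k > 0.
print(ratio(lambda x: x**2, lambda x: x**3, xs))        # [0.1, 0.01, 0.001, 0.0001]
```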

Big-O is to little-o as ≤ is to <. Big-O is an inclusive upper bound, while little-o is a strict upper bound.

For example, the function f(n) = 3n is:

  • in O(n²), o(n²), and O(n)
  • not in O(lg n), o(lg n), or o(n)

Analogously, the number 1 is:

  • ≤ 2, < 2, and ≤ 1
  • not ≤ 0, < 0, or < 1

Here's a table, showing the general idea:

[Image: Big-O table]

(Note: the table is a good guide but its limit definition should be in terms of the superior limit instead of the normal limit. For example, 3 + (n mod 2) oscillates between 3 and 4 forever. It's in O(1) despite not having a normal limit, because it still has a lim sup: 4.)
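For reference, here is a common way to write such limit definitions with the lim sup correction the note describes (this is an assumption about what the missing table shows, stated for g eventually positive):

```latex
f \in o(g)      \iff \limsup_{x \to \infty} \frac{f(x)}{g(x)} = 0
f \in O(g)      \iff \limsup_{x \to \infty} \frac{f(x)}{g(x)} < \infty
f \in \Theta(g) \iff 0 < \liminf_{x \to \infty} \frac{f(x)}{g(x)}
                     \;\text{and}\; \limsup_{x \to \infty} \frac{f(x)}{g(x)} < \infty
```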

I recommend memorizing how the Big-O notation converts to asymptotic comparisons. The comparisons are easier to remember, but less flexible because you can't say things like n^O(1) = P.

I find that when I can't conceptually grasp something, thinking about why one would use X is helpful to understand X. (Not to say you haven't tried that, I'm just setting the stage.)

Stuff you know: A common way to classify algorithms is by runtime, and by citing the big-O complexity of an algorithm you can get a pretty good estimate of which one is "better" -- whichever has the "smallest" function in the O. Even in the real world, O(N) is "better" than O(N²), barring silly things like super-massive constants and the like.

Let's say there's some algorithm that runs in O(N). Pretty good, huh? But let's say you (you brilliant person, you) come up with an algorithm that runs in O(N/loglogloglogN). YAY! It's faster. But you'd feel silly writing that over and over again when you're writing your thesis, so you write it once and you can say "In this paper, I have proven that algorithm X, previously computable in time O(N), is in fact computable in o(n)."

Thus, everyone knows that your algorithm is faster --- by how much is unclear, but they know it's faster. Theoretically. :)

In general

Asymptotic notation is something you can understand as: how do functions compare when zooming out? (A good way to test this is simply to use a tool like Desmos and play with your mouse wheel.) In particular:

  • f(n) ∈ o(n) means: at some point, the more you zoom out, the more f(n) will be dominated by n (it will progressively diverge from it).
  • g(n) ∈ Θ(n) means: at some point, zooming out will not change how g(n) compares to n (if we remove ticks from the axis you couldn't tell the zoom level).

Finally, h(n) ∈ O(n) means that the function h can be in either of these two categories. It can either look a lot like n, or it can become smaller and smaller relative to n as n increases. Basically, both f(n) and g(n) are also in O(n).
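A tiny numeric version of the "zoom out" test (the particular functions √n and 3n + 7 are just assumed examples for illustration):

```python
import math

# Sample at ever larger n: f(n)/n vanishes (f is in o(n)),
# while g(n)/n settles near 3 (g is in Theta(n)); both are in O(n).
for n in [10**2, 10**4, 10**6, 10**8]:
    f = math.sqrt(n)   # f(n) = sqrt(n)
    g = 3 * n + 7      # g(n) = 3n + 7
    print(n, f / n, g / n)
```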

I think this Venn diagram (from this course) could help:

[Image: Venn diagram of the asymptotic notations]

In computer science

In computer science, people will usually prove that a given algorithm admits both an upper bound O and a lower bound Ω. When both bounds meet, that means we have found an asymptotically optimal algorithm to solve that particular problem, Θ.

For example, if we prove that the complexity of an algorithm is both in O(n) and Ω(n), it implies that its complexity is in Θ(n). (That's the definition of Θ, and it more or less translates to "asymptotically equal".) Which also means that no algorithm can solve the given problem in o(n). Again, roughly saying "this problem can't be solved in (strictly) less than n steps".
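In symbols, that definition of Θ reads:

```latex
f \in \Theta(g) \iff f \in O(g) \ \text{and}\ f \in \Omega(g)
```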

Usually the o is used within lower bound proofs to show a contradiction. For example:

Suppose algorithm A can find the min value in an array of size n in o(n) steps. Since A ∈ o(n) it can't see all items from the input. In other words, there is at least one item x which A never saw. Algorithm A can't tell the difference between two similar input instances where only x's value changes. If x is the minimum in one of these instances and not in the other, then A will fail to find the minimum on (at least) one of these instances. In other words, finding the minimum in an array is in Ω(n) (no algorithm in o(n) can solve the problem).
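A toy version of that contradiction (the function sloppy_min below is hypothetical; it stands in for any procedure that, because it takes o(n) steps, cannot read every position of a large input):

```python
# An "algorithm" that never reads the last element -- standing in for one that
# cannot afford to inspect every item.
def sloppy_min(arr):
    return min(arr[:-1])

a = [5, 2, 7, 9]
b = [5, 2, 7, 1]   # identical to a except at the unread position
print(sloppy_min(a), sloppy_min(b))  # both print 2, but the true minimum of b is 1
```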

Details about lower/upper bound meanings

An upper bound of O(n) simply means that even in the worst case, the algorithm will terminate in at most n steps (ignoring all constant factors, both multiplicative and additive). A lower bound of Ω(n) is a statement about the problem itself; it says that we built some example(s) where the given problem couldn't be solved by any algorithm in fewer than n steps (ignoring multiplicative and additive constants). The number of steps is at most n and at least n, so this problem's complexity is "exactly n". Instead of saying "ignoring constant multiplicative/additive factors" every time, we just write Θ(n) for short.

The big-O notation has a companion called small-o notation. The big-O notation says that one function is asymptotically no more than another. To say that one function is asymptotically less than another, we use small-o notation. The difference between the big-O and small-o notations is analogous to the difference between <= (less than or equal to) and < (less than).
