
Proving an upper and lower bound for an algorithm

How can one prove an upper and a lower bound for an algorithm?

Up until now, I thought both the upper and the lower bound for an algorithm had to be shown by taking into account all inputs: proving that it can do no worse than f(n) [upper bound] and no better than g(n) [lower bound].

My lecturer said that for the upper bound one needs to prove it in general [taking into account all inputs], but for a lower bound an example is sufficient.

This really confused me. Can anyone clarify what he meant?

Your lecturer is right if he is speaking of the worst-case behavior.

From a single example, you can conclude that the worst-case running time is "at least that much", but not that it is "at most that much", since some other input could be worse.
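To make this concrete, here is a small sketch (not from the original answer; it uses insertion sort purely as an illustration) showing how running an algorithm on one specific input certifies a lower bound on its worst-case comparison count, while saying nothing about the upper bound:

```python
def insertion_sort_comparisons(a):
    """Sort a copy of the list and return the number of comparisons made."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1              # one comparison of a[j-1] vs a[j]
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return comparisons

n = 100
reversed_input = list(range(n, 0, -1))    # a single, deliberately bad input
c = insertion_sort_comparisons(reversed_input)
# This one run proves: "on inputs of size 100, the worst case needs at least
# c comparisons." It does NOT prove c is the maximum over all inputs.
print(c)                                  # n*(n-1)/2 = 4950 for reversed order
```

The reversed input forces every element to travel to the front, so the single example already witnesses quadratic behavior; an upper-bound claim, by contrast, would require an argument covering every input of size n.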

[Symmetrically, when speaking of the best-case behavior, a single case gives a guarantee on the upper bound.]

The lower and upper bounds are quite different things. The lower bound is usually established "universally", i.e., irrespective of any particular algorithm. For instance, you cannot sort a sequence in fewer than N·lg(N) comparisons in the worst case, as you need to distinguish among N! possible permutations, and that requires gathering lg(N!) bits of information.
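A quick numerical check (my addition, not part of the original answer) that lg(N!) really does grow like N·lg(N), which is why the information-theoretic bound is usually quoted as Ω(N·lg N):

```python
import math

# Each yes/no comparison yields at most one bit, so any comparison sort
# needs at least lg(N!) comparisons in the worst case to tell apart
# the N! possible input orderings.
for n in (10, 100, 1000):
    bits_needed = math.log2(math.factorial(n))   # lg(N!)
    print(n, round(bits_needed, 1), round(n * math.log2(n), 1))
```

By Stirling's approximation, lg(N!) = N·lg(N) - N·lg(e) + O(lg N), so the two columns track each other up to a lower-order term.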

Conversely, the upper bound is determined for a particular algorithm. For instance, HeapSort never exceeds 2N·lg(N) comparisons. When the upper bound meets the lower bound, the algorithm is said to be (asymptotically) optimal.
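As a hedged sketch (my own instrumented implementation, not the answerer's), here is a heapsort that counts its comparisons so the claim can be checked empirically against the 2N·lg(N) figure; exact constants vary slightly between heapsort variants, and this sift-down version provably stays within 2N·lg(N) + 2N:

```python
import math
import random

def heapsort_with_count(a):
    """Heapsort a copy of the list; return (sorted_list, comparison_count)."""
    a = list(a)
    count = 0

    def sift_down(root, end):
        nonlocal count
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end:
                count += 1                        # pick the larger child
                if a[child] < a[child + 1]:
                    child += 1
            count += 1                            # compare root with child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    n = len(a)
    for start in range(n // 2 - 1, -1, -1):       # build max-heap: O(N) comparisons
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):               # extract max N-1 times
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a, count

random.seed(0)
data = [random.random() for _ in range(1000)]
out, c = heapsort_with_count(data)
assert out == sorted(data)
# Observed count vs the 2*N*lg(N) bound discussed above:
print(c, round(2 * 1000 * math.log2(1000)))
```

Since each sift-down step costs at most two comparisons per heap level, the N-1 extractions contribute at most 2N·lg(N) comparisons and heap construction at most 2N more, which matches the bound quoted in the answer up to a linear term.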
