
What is the difference between the different asymptotic notations?

I am really confused about asymptotic notations. As far as I know, Big-O notation is for the worst case, omega is for the best case, and theta is for the average case. However, I have always seen Big O being used everywhere, even for the best case. For example, in the following link, see the table where the time complexities of different sorting algorithms are listed:

https://en.wikipedia.org/wiki/Best,_worst_and_average_case

Everywhere in the table, Big O notation is used, irrespective of whether it is the best case, the worst case, or the average case. Then what is the use of the other two notations, and where do we use them?

Big O is for an upper bound, not for the worst case! There is no notation specifically for the worst case or the best case. The examples you are talking about all use Big O because they are all upper-bounded by the given value. I suggest you take another look at the book from which you learned the basics, because this is immensely important to understand :)

EDIT: Answering your doubt: usually, we care about the at-most performance, i.e. when we say our algorithm performs in O(log n) in the best-case scenario, we know that its performance will not be worse than logarithmic time in that scenario. It is usually the upper bound that we seek to reduce, and hence we usually state Big O when comparing algorithms (not to say that we never mention the other two).
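If a concrete example helps, here is a minimal Python sketch (my own illustration, not from the answer above): it counts the comparisons insertion sort makes, showing that the best case is bounded by O(n) and the worst case by O(n^2); both statements use Big O because each is an upper bound on its respective case.

    def insertion_sort_comparisons(a):
        """Sort a copy of `a` and return how many element comparisons were made."""
        a = list(a)
        comparisons = 0
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0:
                comparisons += 1        # one comparison of key against a[j]
                if a[j] <= key:
                    break
                a[j + 1] = a[j]         # shift the larger element one slot right
                j -= 1
            a[j + 1] = key
        return comparisons

    n = 1000
    print(insertion_sort_comparisons(range(n)))         # already sorted: n-1 comparisons, O(n)
    print(insertion_sort_comparisons(range(n, 0, -1)))  # reversed: ~n^2/2 comparisons, O(n^2)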

As far as I know, Big-O notation is for the worst case, omega is for the best case, and theta is for the average case.

They aren't. Omicron (the letter in "Big O") is for an (asymptotic) upper bound, omega is for a lower bound, and theta is for a tight bound, which is both an upper and a lower bound. If the lower and upper bounds of an algorithm are different, then the complexity cannot be expressed with theta notation.

The concepts of upper, lower, and tight bounds are orthogonal to the concepts of best, average, and worst cases. You can analyze the upper bound of each case, and you can analyze different bounds of the worst case (and also any other combination of the above).

Asymptotic bounds are always in relation to the set of variables in the expression. For example, O(n) is in relation to n. The best, average, and worst cases emerge from everything other than n. For example, if n is the number of elements, then the different cases might emerge from the order of the elements, the number of unique elements, or the distribution of values.
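To illustrate (a Python sketch of my own, with a hypothetical linear_search_steps helper): for a linear search, n fixes the size of the input, while the position of the target decides which case you are in.

    def linear_search_steps(items, target):
        """Return how many elements are inspected before `target` is found (or all of them)."""
        steps = 0
        for x in items:
            steps += 1
            if x == target:
                break
        return steps

    data = list(range(100))                  # n = 100
    print(linear_search_steps(data, 0))      # best case: target is first, 1 step, O(1)
    print(linear_search_steps(data, 99))     # worst case: target is last, 100 steps, O(n)
    # Averaged over uniformly random targets, it is ~n/2 steps, which is also O(n).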

However, I have always seen Big O being used everywhere, even for the best case.

That's because the upper bound is almost always the most important and interesting one when describing an algorithm. We rarely care about the lower bound, just as we rarely care about the best case.

The lower bound is sometimes useful for describing a problem that has been proven to have a particular complexity. For example, it is proven that the worst-case complexity of every general comparison sorting algorithm is Ω(n log n). If a sorting algorithm is also O(n log n), then by definition it is also Θ(n log n).
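For the curious, the standard proof of that lower bound is the decision-tree argument; here is its key step, sketched in LaTeX notation (my own summary, not part of the original answer):

    % A comparison sort must distinguish all n! input orderings, so its
    % decision tree has at least n! leaves. A binary tree of height h has
    % at most 2^h leaves, therefore:
    2^h \ge n! \quad\Longrightarrow\quad h \ge \log_2(n!) = \Theta(n \log n)
    % (the last step follows from Stirling's approximation), so the worst
    % case needs \Omega(n \log n) comparisons.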

O(...) basically means "not (much) slower than ...".
It can be used for all three cases ("the worst case is not slower than", "the best case is not slower than", and so on).

Omega is the opposite: you can say that something can't be (much) faster than .... Again, it can be used with all three cases. Compared to O(...), it's not that important, because telling someone "I'm certain my program is not faster than yours" is nothing to be proud of.

Theta is a combination: it's "(more or less) as fast as ...", not just slower or faster.
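To make the three readings concrete, here is a small worked instance of my own, written in LaTeX notation, for f(n) = 2n + 3:

    f(n) = O(n^2)     % true but loose: "not slower than" quadratic
    f(n) = \Omega(1)  % true but loose: "not faster than" constant
    f(n) = \Theta(n)  % the tight statement: "(more or less) as fast as" linear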

The Big-O notation is something like <= in terms of asymptotic comparison.

For example, if you see this:

x = O(x^2), it says that x <= x^2 (in asymptotic terms).

And it means "x is at most as complex as x^2", which is usually what you are interested in.

Even when you compare the best/average cases, you can say "for the best possible input, I will have AT MOST this complexity".
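Unpacking x = O(x^2) with explicit constants (a worked instance of my own, in LaTeX notation): the standard definition asks for a constant c and a threshold x_0 such that x <= c * x^2 for all x >= x_0, and picking c = 1 and x_0 = 1 works:

    x \le 1 \cdot x^2 \quad \text{for all } x \ge 1,
    % so x satisfies the definition of O(x^2) with witnesses c = 1, x_0 = 1.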

There are two things being mixed up here: Big O, Omega, and Theta are purely mathematical constructions. For example, O(f(n)) is the set of functions g(n) which are at most c * f(n), for some c > 0 and for all n >= some minimum value n0. With that definition, n = O(n^4), because n <= n^4. And 100 = O(n), because 100 <= n for n >= 100, or 100 <= 100 * n for n >= 1.
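Written out formally, with Omega and Theta alongside for contrast (the standard textbook definitions, in LaTeX notation):

    f(n) \in O(g(n))      \iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \text{ for all } n \ge n_0
    f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0
    f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \text{ and } f(n) \in \Omega(g(n))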

For an algorithm, you want to give the worst-case speed, the average-case speed, rarely the best-case speed, and sometimes the amortised average speed (that's when running an algorithm once does work that can be reused when it's run again, like calculating n! for n = 1, 2, 3, ..., where each calculation can take advantage of the previous one). And whatever speed you measure, you can give the result in any of the notations.
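Here is the n! example from above as a minimal Python sketch (my own illustration): computing n! from scratch costs n - 1 multiplications, but computing 1!, 2!, ..., N! in order reuses the previous result, so each step is a single multiplication, i.e. amortised O(1) multiplications per factorial.

    def factorials_up_to(n):
        """Yield (k, k!) for k = 1..n, reusing the previous factorial at each step."""
        acc = 1
        for k in range(1, n + 1):
            acc *= k                 # one multiplication, building on the previous result
            yield k, acc

    for k, f in factorials_up_to(5):
        print(k, f)                  # prints 1 1, 2 2, 3 6, 4 24, 5 120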

For example, you might have an algorithm where you can prove that the worst case is O(n^2), but you cannot prove whether there are faster special cases or not, and you also cannot prove that the algorithm isn't actually faster, like O(n^1.9). So O(n^2) is the only thing that you can prove.
