
Algorithm with O(log n) and Θ(log n) time complexity

Suppose we have two algorithms: one has O(f(x)) time complexity and the other has Θ(f(x)) time complexity. Which one should we prefer to solve our problem, and why?

There is not enough information to decide which algorithm is preferable. The first algorithm might be preferable, both might be equally preferable, or the second might even be preferable if the two are asymptotically equal but the second has a lower constant factor.

  1. Consider the fact that binary search is O(n) because big-O only gives an upper bound, whereas linear search is Θ(n). Binary search is preferable, because it is asymptotically more efficient.
  2. Consider linear search, which is O(n), and... linear search, which is Θ(n). Both are equally preferable because they are literally the same.
  3. Consider bubble sort, which is O(n²), and insertion sort, which is Θ(n²). Insertion sort does on average ~n²/4 comparisons, whereas bubble sort does on average ~n²/2 comparisons, which is twice as many; so insertion sort is preferable.

So as you can see, it's not possible to say without more information.
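
As a rough illustration of examples 1 and 2 above, here is a minimal Python sketch (the helper names are made up for this answer) that counts worst-case comparisons for binary and linear search; the exact counts are incidental, what matters is how they grow with n.

 # Binary search is Θ(log n) (and therefore, trivially, also O(n));
 # linear search is Θ(n). Count comparisons to see the gap.
 def binary_search_comparisons(arr, target):
     lo, hi, comparisons = 0, len(arr) - 1, 0
     while lo <= hi:
         mid = (lo + hi) // 2
         comparisons += 1
         if arr[mid] == target:
             return comparisons
         if arr[mid] < target:
             lo = mid + 1
         else:
             hi = mid - 1
     return comparisons

 def linear_search_comparisons(arr, target):
     comparisons = 0
     for x in arr:
         comparisons += 1
         if x == target:
             break
     return comparisons

 for n in (1_000, 1_000_000):
     arr = list(range(n))
     # Searching for an absent value forces the worst case in both searches.
     print(n, binary_search_comparisons(arr, -1), linear_search_comparisons(arr, -1))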

Comparing two different notations like this doesn't seem very meaningful to me, but here is the logic: Θ gives both an upper bound and a lower bound on the number-of-operations function of your algorithm, whereas O gives only an upper bound. So, technically, a function with Θ(1) time complexity is also O(n log n). Based on this logic, my answer to the question is at the end of the explanation.

Θ(n log n) time complexity means that the number of operations your algorithm performs is both upper-bounded and lower-bounded by constant multiples of n log n; that is, the number of operations grows proportionally to n log n.

O(n log n) time complexity means that the number of operations your algorithm performs is upper-bounded by n log n; that is, the maximum number of operations is at most a constant multiple of n log n.

Notice that in the second case we cannot say anything about the minimum number of operations performed. Any function is upper-bounded by n log n as long as its number of operations does not exceed a constant multiple of n log n as the input size goes to infinity. So your function could have constant, logarithmic, or linear time complexity. Since the number of operations could turn out to be Θ(n), Θ(log n), Θ(1), etc., I would say that using the algorithm with O(n log n) complexity would be the better choice.
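
To make the "upper bound only" point concrete, here is a small Python sketch (sort_if_needed is a name invented for this answer) of a routine that is correctly described as O(n log n), because its worst case is Θ(n log n), yet does only Θ(n) work when the input happens to be sorted already:

 def sort_if_needed(arr):
     # Already sorted: a single linear scan, Θ(n) work.
     if all(arr[i] <= arr[i + 1] for i in range(len(arr) - 1)):
         return list(arr)
     # Otherwise: a full comparison sort, Θ(n log n) work.
     # (Python's built-in sort is itself adaptive; the explicit check here
     # only serves to make the two cost regimes visible.)
     return sorted(arr)

 print(sort_if_needed([1, 2, 3, 4]))   # takes the linear-time path
 print(sort_if_needed([4, 1, 3, 2]))   # takes the n log n path

Knowing only "O(n log n)", you cannot tell which of these two behaviours the algorithm actually has.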

Let's try to compare the algorithms:

The first algorithm has O(n log n) time complexity, which means that its execution time t1 satisfies

 t1 <= k1 * n * log(n) + o(n * log(n))

The second algorithm is Θ(n log n), so

 t2 = k2 * n * log(n) + o(n * log(n))

Assuming that n is large enough that we can neglect the o(n * log(n)) term, we still have two possibilities:

  1. t1 = o(n * log(n)), i.e. algorithm 1's running time grows strictly more slowly than n * log(n)
  2. t1 = k1 * n * log(n), at least for some worst-case inputs

In the first case we should prefer algorithm 1 for large n, since it has a shorter execution time once n is large enough.

In the second case we have to compare the unknown constants k1 and k2; we do not have enough information to choose between the two algorithms.
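
For the second case, a quick numeric sketch (the constants k1 and k2 below are invented purely for illustration; nothing in the question determines them) shows that the choice comes down entirely to those constants:

 import math

 # Hypothetical cost per "n log n unit", in seconds; made up for this example.
 k1, k2 = 3e-8, 2e-8

 for n in (10**3, 10**6, 10**9):
     t1 = k1 * n * math.log2(n)
     t2 = k2 * n * math.log2(n)
     better = "algorithm 1" if t1 < t2 else "algorithm 2"
     print(f"n = {n:>10}: t1 ≈ {t1:.4f} s, t2 ≈ {t2:.4f} s -> prefer {better}")

With k1 > k2 the Θ(n log n) algorithm wins at every n; swap the constants and the conclusion flips, which is exactly why the question is undecidable as stated.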
