
Is O(1) always better than O(n) in algorithm time complexity?

Think about it for a moment: suppose I have an algorithm that takes O(100) ----> O(1) time, and another algorithm for the same problem that takes O(n), but I know that n's maximum value is 50, so its worst case is effectively O(50). In a case like this, is the O(1) algorithm still the best choice, or is the second O(n) algorithm better? And if it's the second one, can we always say that O(1) is better than O(n)?

Surely not; not always. Big O describes only asymptotic behaviour, which is why

O(1) == O(0.001) == O(50) == O(100) == O(C) # where C is any positive constant

The same holds for O(n):

O(n) == O(0.001n) == O(100n) == O(C * n)    # where C is any positive constant

Imagine two algorithms with running times

t1 = 1e100 (seconds) = O(1)
t2 = n     (seconds) = O(n)

For n tending to infinity (asymptotic behaviour) the first algorithm is better than the second, but for all real-world cases (small n) t2 is preferable. Even comparing the scaling factors is not enough:

t1 = 1000               (seconds)
t2 = 100 * n            (seconds)
t3 = n + 1e100 * log(n) (seconds)

Algorithm 3 has better scaling (1 vs. 100: n vs. 100 * n), but the 1e100 * log(n) term makes it impossible to finish in real-world cases.
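
As a quick sanity check, here is a small sketch (Python, evaluating these hypothetical cost models rather than measuring anything) at realistic input sizes:

import math

def t1(n): return 1000.0                   # constant cost
def t2(n): return 100.0 * n                # linear cost, larger scaling factor
def t3(n): return n + 1e100 * math.log(n)  # best scaling, but a huge additive term

for n in (10, 1000, 1000000):
    print(n, t1(n), t2(n), t3(n))
# t3 is astronomically large for every practical n, despite its better scaling.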

So in the general case, instead of comparing big-O classes you should compare the actual functions:

t1 = 100 (seconds)
t2 = n   (seconds)

Here, if n <= 50 then t2 is the better choice; the crossover is at n = 100, and beyond that it is quite the opposite.
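
To make that concrete, a minimal sketch using the same hypothetical cost functions (in seconds) that locates the crossover point:

def t1(n): return 100  # constant-time algorithm
def t2(n): return n    # linear-time algorithm

crossover = next(n for n in range(1, 10000) if t2(n) >= t1(n))
print(crossover)  # 100 -- below this n, t2 wins; above it, t1 wins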

Two algorithms:

100n -> O(n)

10n² -> O(n²)

If n < 10 the quadratic time algorithm is better. If n > 10 the linear time algorithm is better.
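
The same kind of check for these two cost models (the constants 100 and 10 are taken from the example above):

def linear(n):    return 100 * n     # 100n  -> O(n)
def quadratic(n): return 10 * n * n  # 10n^2 -> O(n^2)

for n in (5, 10, 20):
    print(n, linear(n), quadratic(n))
# n = 5:  500 vs 250   -> quadratic wins
# n = 10: 1000 vs 1000 -> tie
# n = 20: 2000 vs 4000 -> linear wins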

There are also practical use cases: quicksort implementations (average O(n log n)) often switch to insertion sort (average O(n²)) when the given data collection is small enough.
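
A minimal sketch of that hybrid idea in Python (the cutoff of 16 is an assumption for illustration; real library implementations tune it per platform):

def insertion_sort(a, lo, hi):
    # Sort a[lo..hi] in place (inclusive bounds).
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None, cutoff=16):
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= cutoff:   # small slice: insertion sort is faster in practice
        insertion_sort(a, lo, hi)
        return
    pivot = a[(lo + hi) // 2]   # simple middle-element pivot choice
    i, j = lo, hi
    while i <= j:               # Hoare-style partition
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    hybrid_quicksort(a, lo, j, cutoff)
    hybrid_quicksort(a, i, hi, cutoff)

data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
hybrid_quicksort(data)
print(data)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]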
