
Why do we use asymptotic notations (thus ignoring coefficients) when talking about time complexity?

This question is different from "Why do we ignore coefficients in Big-O notation".

When measuring time complexity we usually use Big-O notation, which ignores coefficients and non-dominant terms. However, don't 2N+C and N+C instructions result in significantly different execution times, especially when the problem size grows very large? The former will take twice as much time as the latter, which could mean two weeks versus one week in a real-world large-scale computation.

Examples include quicksort vs. other O(N log N) sorting algorithms, and trivial O(N^3) matrix multiplication vs. Strassen's algorithm (which can be slower in practice because its leading coefficient is much larger, even though its exponent is smaller).

In short: because it's too hard.

Accurately finding out what those coefficients are requires a model of the hardware: assigning a cost to every single primitive operation used by the algorithm. In the presence of modern optimizing compilers, out-of-order execution, and memory cache hierarchies, this is a nigh-intractable problem.

If you want some estimate of their values, it's much easier (and likely more accurate) to figure out the asymptotic complexity formula, run some benchmarks on different problem sizes and fit the coefficients to the obtained data.
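For instance, here is a minimal sketch of that approach in Python (the choice of sorted() as the O(n log n) algorithm, the problem sizes, and the single-coefficient model T(n) ≈ c·n·log2(n) are all illustrative assumptions):

    # Sketch: estimate the hidden constant of an O(n log n) algorithm by
    # benchmarking it at several sizes and least-squares fitting c in
    # T(n) ~= c * n * log2(n).
    import math
    import random
    import time

    def measure(n, trials=5):
        """Best-of-trials wall-clock time to sort n random floats."""
        best = float("inf")
        for _ in range(trials):
            data = [random.random() for _ in range(n)]
            start = time.perf_counter()
            sorted(data)
            best = min(best, time.perf_counter() - start)
        return best

    sizes = [10_000, 20_000, 40_000, 80_000, 160_000]
    times = [measure(n) for n in sizes]

    # Least-squares fit through the origin: c = sum(x*t) / sum(x*x), x = n*log2(n).
    xs = [n * math.log2(n) for n in sizes]
    c = sum(x * t for x, t in zip(xs, times)) / sum(x * x for x in xs)

    for n, t in zip(sizes, times):
        print(f"n={n:>7}  measured={t:.4f}s  model={c * n * math.log2(n):.4f}s")

Once c is fitted for your machine, the formula gives a rough runtime prediction for larger inputs on that same machine, which is usually all the precision those coefficients deserve.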

We use asymptotic notation so we can talk about the efficiency of algorithms, not the efficiency of specific computers.

If you write a program that takes f(n) seconds to run on your machine...

The same program might take f(n)/10 seconds on a much faster machine, but that's still O(f(n)).

The same program might take f(n)*10 seconds on a much slower machine, but that's still O(f(n)).

Some other machine could have different hardware, so it's faster at, say, floating-point math but slower at memory access. Your program may run faster or slower on that machine, depending on the specific input, but its running time will still be O(f(n)).

The time it takes to run a program depends on a lot of things, but the asymptotic complexity is a property of the algorithm itself. That's why we use it to evaluate algorithms.
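One way to see why those constant factors drop out is the definition of Big-O itself:

    g(n) = O(f(n))  iff  there exist constants c > 0 and n0 such that
    g(n) <= c * f(n) for all n >= n0.

Taking c = 1 shows that f(n)/10 is O(f(n)), and taking c = 10 shows that 10*f(n) is O(f(n)): rescaling the running time by any constant factor never changes the asymptotic class.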

Because big-O notation tells you about an algorithm's performance independent of the language, compiler/interpreter, or platform being used. Contrary to popular belief, big-O doesn't predict the run-time. Instead, it tells you how the algorithm scales with input. However long it takes for a given size input, an algorithm that has O(n^2) complexity will asymptotically take 4 times as long if you double the size of the input, 100 times as long if you increase the input by a factor of 10, etc., regardless of your choices of languages, compilers, or platforms.
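As a rough illustration of that scaling behaviour (the quadratic routine count_pairs below is a made-up stand-in for "an algorithm with O(n^2) complexity", and the input sizes are arbitrary):

    import time

    def count_pairs(data, target):
        """Count pairs (i, j) with i < j whose values sum to target -- O(n^2)."""
        count = 0
        for i in range(len(data)):
            for j in range(i + 1, len(data)):
                if data[i] + data[j] == target:
                    count += 1
        return count

    def timed(n):
        data = list(range(n))
        start = time.perf_counter()
        count_pairs(data, n)
        return time.perf_counter() - start

    t1, t2 = timed(2_000), timed(4_000)
    # Doubling n should multiply the running time by roughly 4,
    # whatever the machine, language, or interpreter.
    print(f"n=2000: {t1:.3f}s   n=4000: {t2:.3f}s   ratio ~ {t2 / t1:.1f}")

The absolute times depend entirely on the machine, but the ratio stays close to 4.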
