
Analysis of algorithms

Why do we always consider large values of the input in algorithm analysis, e.g. with big-O notation?

The point of Big-O notation is precisely to work out how the running time (or space) varies as the size of the input increases - in other words, how well it scales.

If you're only interested in small inputs, you shouldn't use Big-O analysis... aside from anything else, there are often approaches which scale really badly but work very well for small inputs.

Because the worst-case performance is usually more of a problem than the best-case performance. If your worst-case performance is acceptable, your algorithm will run fine.

Big O says nothing about how well an algorithm will scale. "How well" is relative. It is a general way to quantify how an algorithm will scale, but the fitness or lack of fitness for any specific purpose is not part of the notation.

Suppose we want to check whether a number n is prime or not, and Ram and Shyam come up with the following solutions.

Ram's solution

    bool isPrime(long long n)                   // long long so that n = 10000000019 fits; assumes n >= 2
    {
        for (long long i = 2; i <= n - 1; i++)  // try every candidate divisor from 2 to n-1
            if (n % i == 0)
                return false;
        return true;
    }

Now we know that the above algorithm runs n - 2 times in the worst case (when n is prime).

Shyam's solution

    #include <cmath>                                          // for std::sqrt
    bool isPrime(long long n)                                 // assumes n >= 2
    {
        for (long long i = 2; i <= std::sqrt((double) n); i++) // divisors only need to go up to sqrt(n)
            if (n % i == 0)
                return false;
        return true;
    }

The above algorithm runs at most about sqrt(n) - 1 times; testing divisors up to sqrt(n) is enough, because any composite n must have a divisor no larger than sqrt(n).

Assuming that in both algorithms each iteration takes unit time (1 ms), then:

if n = 101

1st algorithm: the time taken is 99 ms, which is less than the blink of an eye.

2nd algorithm: around 9 ms, which again is not noticeable.

if n = 10000000019

1st algorithm: the time taken is about 115 days, which is roughly a third of a year.

2nd algorithm: around 1.66 minutes, about the time it takes to sip a cup of coffee.

I think nothing more needs to be said now :D
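
For the curious, here is a quick back-of-the-envelope check of those figures (my own sketch, not part of the original answer), still assuming that each loop iteration costs 1 ms:

    #include <cmath>
    #include <cstdio>

    int main() {
        // Sanity-check the estimates above, assuming 1 iteration = 1 ms.
        double n = 10000000019.0;
        double linear_ms = n - 2;             // Ram's loop: about n - 2 iterations
        double sqrt_ms = std::sqrt(n) - 1;    // Shyam's loop: about sqrt(n) - 1 iterations
        std::printf("linear scan: %.1f days\n", linear_ms / 1000.0 / 86400.0);  // ~115.7 days
        std::printf("sqrt scan:   %.2f minutes\n", sqrt_ms / 1000.0 / 60.0);    // ~1.67 minutes
        return 0;
    }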

Analysis of algorithms does not just mean running them on a computer to see which one is faster. Rather, it means being able to look at an algorithm and determine how it will perform. This is done by looking at the order of magnitude of the algorithm: as the number of items (N) changes, what effect does that have on the number of operations needed to execute (the time)? This method of classification is referred to as Big-O notation.
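
As a small illustration of that idea (my own sketch, not from the answer), the snippet below prints the worst-case operation counts of the two primality checks from earlier for a few values of N, which makes the difference in growth visible:

    #include <cmath>
    #include <cstdio>

    int main() {
        // Worst-case iteration counts of the linear and sqrt primality checks
        // for a few input sizes, to show how each grows with N.
        long long sizes[] = {100, 10000, 1000000, 100000000};
        for (long long n : sizes)
            std::printf("N = %-10lld linear scan: ~%-10lld sqrt scan: ~%lld\n",
                        n, n - 2, (long long) std::sqrt((double) n) - 1);
        return 0;
    }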

Programmers use Big-O to get a rough estimate of "how many seconds" and "how much memory" various algorithms will use for "large" inputs.

It's because of the definition of Big-O notation. Saying that g(n) is O(f(n)), where n is the size of the input (e.g. the length of a list), means: there is some constant G and some value n0 such that, for all n > n0, the running time (or space) g(n) is at most G * f(n).

What that means is that once your input goes past a certain size, the growth of your function is bounded by f. So if f(n) = n (i.e. the algorithm is O(n)) and I double the input size (n2 = 2 * n1), the bound on the time doubles as well: the function I'm computing will not take more than double the amount of time. Now, note that if O(n) holds, so does O(n^2): if my function never does worse than doubling, it certainly never does worse than squaring. In practice, the lowest-order bound known is usually the one given.
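
The same definition written out symbolically (my notation, not from the original answer):

    % Definition: g is O(f)
    g(n) \in O(f(n)) \iff \exists\, G > 0,\ \exists\, n_0,\ \forall\, n > n_0:\ g(n) \le G \cdot f(n)

    % Consequence for f(n) = n, with n > n_0
    g(2n) \le G \cdot (2n) = 2\,(G \cdot n)

So, past n0, doubling the input at most doubles the bound G * n, which is the "will not take more than double the amount of time" statement above.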
