
Analysis of algorithm

Why do we always consider large values of input in algorithm analysis, for example in Big-O notation?

The point of Big-O notation is precisely to work out how the running time (or space) varies as the size of the input increases - in other words, how well it scales.

If you're only interested in small inputs, you shouldn't use Big-O analysis... aside from anything else, there are often approaches which scale really badly but work very well for small inputs.
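As a rough illustration of that point, here is a minimal sketch: the two step-count formulas, 2*n^2 and 50*n*log2(n), are made-up costs for two imaginary algorithms (not taken from the question), chosen only to show how the "worse-scaling" one can still win for small inputs.

    #include <cmath>
    #include <cstdio>

    // Hypothetical step counts for two imaginary algorithms solving the same task:
    // A does about 2*n*n basic steps, B about 50*n*log2(n). The constants are
    // invented purely to show the crossover between small and large inputs.
    double stepsA(double n) { return 2 * n * n; }
    double stepsB(double n) { return 50 * n * std::log2(n); }

    int main() {
        const double sizes[] = {10, 100, 1000, 100000};
        for (double n : sizes)
            std::printf("n=%8.0f  A (O(n^2)) = %14.0f  B (O(n log n)) = %14.0f\n",
                        n, stepsA(n), stepsB(n));
        // For small n the O(n^2) algorithm does fewer steps; past the
        // crossover point the O(n log n) one wins by a widening margin.
    }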

Because the worst-case performance is usually more of a problem than the best-case performance. If your worst-case performance is acceptable, your algorithm will run fine.
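For a concrete picture of the best/worst gap, linear search is a handy sketch (my own illustrative example, not something from the answer): its best case is one comparison, its worst case is one comparison per element.

    #include <cstddef>
    #include <vector>

    // Linear search: best case is 1 comparison (target at the front),
    // worst case is data.size() comparisons (target absent or last).
    // Judging the algorithm by its worst case tells you whether it stays
    // acceptable even on unlucky inputs.
    int linearSearch(const std::vector<int>& data, int target) {
        for (std::size_t i = 0; i < data.size(); i++)
            if (data[i] == target)
                return (int)i;   // best case: found immediately
        return -1;               // worst case: scanned the whole vector
    }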

Big O says nothing about how well an algorithm will scale. "How well" is relative. It is a general way to quantify how an algorithm will scale, but the fitness or lack of fitness for any specific purpose is not part of the notation.

Suppose we want to check whether a number is prime or not, and Ram and Shyam came up with the following solutions.

Ram's solution

    bool isPrime(long long n) {
        // trial division: try every candidate divisor from 2 to n - 1
        for (long long i = 2; i <= n - 1; i++)
            if (n % i == 0)
                return false;
        return true;
    }

Now we know that the above loop will run n - 2 times (when n is prime).

Shyam's solution

    #include <cmath>
    bool isPrime(long long n) {
        // trial division only up to sqrt(n)
        for (long long i = 2; i <= std::sqrt(n); i++)
            if (n % i == 0)
                return false;
        return true;
    }

The above loop will run about sqrt(n) - 1 times (when n is prime).

Assuming that in both algorithms each iteration takes unit time (1 ms), then:

If n = 101:

1st algorithm: the time taken is 99 ms, which is less than the blink of an eye.

2nd algorithm: around 9 ms, which again is not noticeable.

If n = 10000000019:

1st algorithm: the time taken is about 115 days, which is roughly a third of a year.

2nd algorithm: around 1.66 minutes, which is about the time it takes to sip a cup of coffee.
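Those figures can be reproduced from the stated assumptions (1 ms per iteration and n = 10000000019); a minimal sketch of the conversion:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double msPerIteration = 1.0;   // assumed unit cost from the answer
        const double n = 10000000019.0;      // the large prime used above

        double naiveMs = (n - 2) * msPerIteration;             // Ram's loop: n - 2 iterations
        double sqrtMs  = (std::sqrt(n) - 1) * msPerIteration;  // Shyam's loop: ~sqrt(n) - 1 iterations

        std::printf("naive loop: about %.1f days\n", naiveMs / 1000 / 60 / 60 / 24);  // ~115.7 days
        std::printf("sqrt loop : about %.1f minutes\n", sqrtMs / 1000 / 60);          // ~1.67 minutes
    }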

I think nothing more needs to be said now :D

Analysis of algorithms does not just mean running them on the computer to see which one is faster. Rather, it is being able to look at an algorithm and determine how it would perform. This is done by looking at the order of magnitude of the algorithm: as the number of items (N) changes, what effect does it have on the number of operations needed to execute (time)? This method of classification is referred to as Big-O notation.
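A small sketch of that idea, counting the basic operations of a simple nested loop (an O(N^2) pattern; the loop body is just a stand-in for "one operation"):

    #include <cstdio>

    // Count the basic operations of a nested loop as N doubles:
    // the count roughly quadruples each time, which is exactly the
    // kind of growth-rate behaviour Big-O notation classifies.
    long long countOps(int N) {
        long long ops = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                ops++;           // one "basic operation" per inner iteration
        return ops;
    }

    int main() {
        for (int N = 1000; N <= 8000; N *= 2)
            std::printf("N = %5d -> %12lld operations\n", N, countOps(N));
    }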

Programmers use Big-O to get a rough estimate of "how many seconds" and "how much memory" various algorithms use for "large" inputs.

It's because of the definition of Big-O notation. Saying that O(f(n)) is a bound on g(n) (where n is the size of the input, e.g. the list size) means: there is some value n0 such that, for all n with n0 < n, the run-time or space complexity g(n) is less than G * f(n), where G is an arbitrary (but fixed) constant.

What that means is that once your input goes over a certain size, the running time will not grow faster than that bounding function. So, if f(x) = x (i.e., O(n)) and n2 = 2 * n1, the function I'm computing will take no more than double the time. Now, note that if O(n) is true, so is O(n^2). If my function will never do worse than double, it will never do worse than square either. In practice, the lowest-order function known is usually given.
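To make the definition concrete, here is a small numeric check on a made-up cost function g(n) = 3n + 10 (the choices G = 4 and n0 = 10 are purely illustrative); it also shows why a function that is O(n) is automatically O(n^2):

    #include <cstdio>

    // Hypothetical cost function g(n) = 3n + 10. Claiming g is O(n) means:
    // there exist constants G and n0 such that g(n) <= G * n for all n > n0.
    // Here G = 4 and n0 = 10 work, and the same bound makes g O(n^2) too.
    long long g(long long n) { return 3 * n + 10; }

    int main() {
        const long long G = 4, n0 = 10;
        bool holdsLinear = true, holdsQuadratic = true;
        for (long long n = n0 + 1; n <= 1000000; n++) {
            if (g(n) > G * n)     holdsLinear = false;
            if (g(n) > G * n * n) holdsQuadratic = false;
        }
        std::printf("g(n) <= 4*n   for 10 < n <= 1e6: %s\n", holdsLinear ? "yes" : "no");
        std::printf("g(n) <= 4*n^2 for 10 < n <= 1e6: %s\n", holdsQuadratic ? "yes" : "no");
    }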


 