Complexity of algorithm implementing Newton's method in finding square root
I have written a Java program to calculate the square root of a user-defined number using Newton's method. The main operations of the algorithm go like this:
answer = guess - ((guess * guess - inputNumber) / (2 * guess));
while (Math.abs(answer * answer - inputNumber) > leniency) {
    guess = answer;
    answer = guess - ((guess * guess - inputNumber) / (2 * guess));
}
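For reference, here is a minimal self-contained version of that loop. The initial guess (`inputNumber / 2`) and the sample tolerance are assumptions, since the question does not show how they are set:

```java
public class NewtonSqrt {
    // Newton iteration from the snippet above; initial guess and
    // leniency are assumed values, not taken from the question.
    static double sqrtNewton(double inputNumber, double leniency) {
        double guess = inputNumber / 2.0;  // assumed initial guess (positive input)
        double answer = guess - ((guess * guess - inputNumber) / (2 * guess));
        while (Math.abs(answer * answer - inputNumber) > leniency) {
            guess = answer;
            answer = guess - ((guess * guess - inputNumber) / (2 * guess));
        }
        return answer;
    }

    public static void main(String[] args) {
        // Prints a value close to Math.sqrt(612.0)
        System.out.println(sqrtNewton(612.0, 1e-9));
    }
}
```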
I'm now seeking to find the complexity of the algorithm (yup, it's homework), and have read here that the time complexity of Newton's method is O(log(n) * F(x)).
However, from the above code snippet, I have interpreted the time complexity to be:
O(1 + ∑(i=1 to n) 1) = O(1 + n) = O(n)
Not sure what I'm getting wrong here, but I can't seem to understand the disparity between the two big-O expressions, even after reading the Wikipedia explanation.
Also, I am assuming that "complexity of algorithm" is synonymous with "time complexity". Is it right to do so?
Would really appreciate help in explaining this paradox, as I'm a newbie student with only a few "touch and go" programming modules' worth of background.
Thanks in advance :)
The problem is that you actually know nothing about n in your calculation - you don't say what it should be. When you calculate the actual error of the next iteration of the algorithm (do it!), you'll see that, e.g., if a is at least 1 and the error is less than 1, you basically double the number of valid decimal places every iteration. So to get p decimal places, you have to perform log(p) iterations.
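One quick way to see this doubling is to count the iterations needed to hit increasingly strict tolerances. This is a sketch under assumed conditions (input 2, starting guess 1, so the initial error is below 1); the iteration count grows very slowly, roughly like log(p), even as p grows:

```java
public class ConvergenceDemo {
    // Count Newton iterations until the residual |guess^2 - input|
    // drops below 10^-p, i.e. roughly p correct decimal places.
    static int iterationsFor(double input, int p) {
        double tol = Math.pow(10, -p);
        double guess = 1.0;  // assumed starting point with error < 1
        int iterations = 0;
        while (Math.abs(guess * guess - input) > tol) {
            guess = guess - (guess * guess - input) / (2 * guess);
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        // Doubling p adds only about one extra iteration.
        for (int p : new int[] {1, 2, 4, 8, 12}) {
            System.out.println(p + " digits: " + iterationsFor(2.0, p) + " iterations");
        }
    }
}
```

Because each iteration roughly squares the error, going from 1 digit of accuracy to 12 costs only a few extra iterations, not 12 times as many.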