
Java: estimating a derivative at a point

I am currently writing a calculator application and am trying to add a derivative estimator to it. The formula below is a simple way to do it. On paper you would use the smallest h possible to get the most accurate estimate. The problem is that doubles can't handle adding really small numbers to comparatively huge numbers: 4 + 1E-200 just results in 4.0. Even if h is only 1E-16, 4 + 1E-16 does give you a distinct value, but the arithmetic is inaccurate because anything after the 16th significant digit is lost and rounding can't happen correctly. I have heard the general rule of thumb for doubles is h = 1E-8 or 1E-7. The issue with this is that large numbers won't work: 2E231 + 1E-8 is just 2E231, because the 1E-8 is lost to the difference in magnitude.

f'(x) = (f(x+h) - f(x)) / h as h approaches 0

When I test f(x) = x^2 at the point 4, f'(4) should be exactly 8. I understand that I will probably never get exactly 8, but the most accurate results seem to come at h around 1E-7 or 1E-8. The funny thing is that 1E-9 all the way to 1E-11 give the same answer. Here is a list of h values and results for f(x) = x^2 at x = 4:

1E-7 8.000000129015916
1E-8 7.999999951380232
1E-9 8.000000661922968
1E-10 8.000000661922968
1E-11 8.000000661922968
1E-12 8.000711204658728
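The experiment above can be reproduced with a short sketch (class and method names are mine, not part of any existing code):

```java
// Sketch: forward-difference estimate of f'(x) for f(x) = x^2 at x = 4,
// sweeping h from 1E-7 down to 1E-12 to show the precision loss.
public class ForwardDiffDemo {
    static double f(double x) { return x * x; }

    // Forward-difference quotient (f(x+h) - f(x)) / h.
    static double derivative(double x, double h) {
        return (f(x + h) - f(x)) / h;
    }

    public static void main(String[] args) {
        for (int e = 7; e <= 12; e++) {
            double h = Math.pow(10, -e);
            System.out.printf("h = 1E-%d -> %s%n", e, derivative(4.0, h));
        }
    }
}
```

Running this reproduces the plateau: below some h the numerator loses all its reliable digits, so shrinking h further changes nothing or makes things worse.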

Here are my questions:

  1. What is the best way to pick h? Obviously 1E-8 or 1E-7 makes sense, but how can I pick an h based on x, so that it will work with a number of any size, even if x is 3.14E203 or 2E-231?
  2. How many decimals of precision should I account for?
  3. Do you have any idea how Texas Instruments does it? The TI-83, TI-84, and TI-Nspire can numerically compute derivatives to 12 decimals of precision and almost always be right, yet the maximum precision of their numbers is only 12 digits anyway, and those calculators are non-CAS, so they aren't actually differentiating anything symbolically.
  4. Logically there is a number somewhere between 1E-7 and 1E-8 that will give me a more precise result. Is there any way to find that number, or at least get close to it?

ANSWERED

Thank you very much, BobG. The application is currently planned to come in two forms: a command-line PC application and an Android application. You will be mentioned in the special-thanks portion of the About page. It may become open source, but I am not posting links to the project site until I work out some very, very large bugs. At the moment I have been calling it Mathulator, but the name will likely change because there is already a copyright on it and it sounds stupid. I have no clue when the release candidate will be running, or when it will be stable, but it will be very powerful if I can implement everything I want to. Thanks again. Happy programming.

There's a book that answers this question (and others like it):

Numerical Recipes in C, 2nd Edition, by Press, Teukolsky, Vetterling, and Flannery. This book also comes in C++, Fortran, and BASIC versions as well. Sadly, no Java version exists. I believe this book is out of print, but it is possible to buy used copies of some of the flavors online (at least through bn.com).

Section 5.7, "Numerical Derivatives," p. 186, explains exactly the problem you're seeing with numerical derivatives and the math behind why it happens, along with a function that computes a numerical derivative properly (in C, but it should be easy to translate to Java). A summary of their simple approximation is presented here:

1) Numerically, you're better off computing the symmetric version:

f'(x) = (f(x + h) - f(x - h)) / 2h

2) h should be approximately (sigma_f)^(1/3) * x_c

where

sigma_f =~ the fractional accuracy with which f(x) is computed (roughly machine epsilon for simple functions)

x_c =~ x, unless x is equal to zero.

However, this does not result in optimal derivatives, as the error is ~ (sigma_f)^(2/3). A better solution is Ridders' algorithm, which is presented as a C program in the book (ref. Ridders, C.J.F. 1982, Advances in Engineering Software, vol. 4, no. 2, pp. 75-76).
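A minimal Java sketch of the symmetric rule with the step choice above, assuming sigma_f is roughly machine epsilon (obtained here via `Math.ulp(1.0)`); the class and method names are mine:

```java
import java.util.function.DoubleUnaryOperator;

// Central (symmetric) difference with h ~ sigma_f^(1/3) * x_c,
// following the simple rule summarized from Numerical Recipes.
public class CentralDiff {
    // Machine epsilon for double, ~2.22e-16.
    static final double EPS = Math.ulp(1.0);

    static double derivative(DoubleUnaryOperator f, double x) {
        double xc = (x == 0.0) ? 1.0 : Math.abs(x); // characteristic scale x_c
        double h = Math.cbrt(EPS) * xc;
        // Make h exactly representable so (x+h) - (x-h) is exactly 2h.
        double temp = x + h;
        h = temp - x;
        return (f.applyAsDouble(x + h) - f.applyAsDouble(x - h)) / (2 * h);
    }

    public static void main(String[] args) {
        System.out.println(derivative(v -> v * v, 4.0)); // ~8
        System.out.println(derivative(Math::sin, 1.0));  // ~cos(1) ~= 0.5403
    }
}
```

The `temp = x + h; h = temp - x;` step is the book's trick to keep h a number the machine can represent exactly, which removes one source of rounding error from the quotient.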

Read the paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (Google for it). Then you'll see that most floating-point values are represented only approximately in computer hardware.

To make calculations without this drawback, use symbolic computation. But this is not as efficient as using floating point.

To make floating-point results consistent, round to the nearest power of 10, e.g., 0.1, 0.01, etc. To decide when to stop approximating, watch a threshold during the approximation steps. For example, if performing the next approximation step changes the already-computed value by only 0.001%, there's no sense in continuing.
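The stopping rule above can be sketched as an iterative refinement: keep halving h and stop as soon as the estimate changes by less than a relative tolerance between steps (the class, method names, and starting step are my own choices, not from the answer):

```java
import java.util.function.DoubleUnaryOperator;

// Sketch: shrink h until the derivative estimate stabilizes to within
// relTol between consecutive steps, then stop -- shrinking h further
// only amplifies rounding error in the subtraction.
public class AdaptiveStep {
    static double derivative(DoubleUnaryOperator f, double x, double relTol) {
        double h = (x == 0.0) ? 1e-4 : Math.abs(x) * 1e-4; // assumed start
        double prev = (f.applyAsDouble(x + h) - f.applyAsDouble(x - h)) / (2 * h);
        for (int i = 0; i < 20; i++) {          // hard cap on refinement steps
            h /= 2;
            double next = (f.applyAsDouble(x + h) - f.applyAsDouble(x - h)) / (2 * h);
            if (Math.abs(next - prev) <= relTol * Math.abs(next)) {
                return next;                     // converged to within relTol
            }
            prev = next;
        }
        return prev;                             // best effort if no convergence
    }

    public static void main(String[] args) {
        System.out.println(derivative(v -> v * v * v, 2.0, 1e-10)); // ~12
    }
}
```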

Update: I took my numerical-computation classes long ago, but I can vaguely recall that subtracting nearly equal numbers is very bad, because the most reliable digits cancel out and you are left with the unreliable ones. This is exactly what happens as you decrease h. What is suggested in these situations is to replace the subtraction with some other operation, for example by switching to some kind of series into which your f(x) expands.

I don't quite understand your 2nd question, because the answer depends on your requirements: "as many as you wish."

By the way, you might have better luck with finding answers to your questions at math.stackexchange.com.

Additionally, visit the link provided by thrashgod: Numerical differentiation.

1. The precision of floating-point numbers (floats and doubles) depends on the number's absolute value. Doubles have ~15 significant digits, so you can add 1 + 1e-15, but 10 + 1e-15 will likely be 10 again; you would have to do 10 + 1e-14. To get a meaningful result, I would recommend multiplying that very 1e-8 by the original number's absolute value; this will give you about 7 correct digits in the derivative. Something like:

double h = x * 1e-8;
double derivative = (f(x+h) - f(x)) / h;

Anyway, this is an approximation: if you try to calculate the derivative of sin(x) at x = 1e9, you will get h = 10 and the result will be all wrong. But for "regular" functions whose "interesting" part is near zero, this will work well.

2. The smaller h is, the more precisely you sample the derivative at the point, but the fewer correct digits of the derivative you get. I can't prove this, but my gut feeling is that with h = x * 1e-8 you get 7 = 15 - 8 correct digits, where 15 is the double's precision.

Also, it would be a good idea to use a "more symmetric" formula; it gives an exactly correct answer on second-order polynomials:

double derivative = (f(x+h) - f(x-h)) / (2*h);
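The exactness claim on second-order polynomials is easy to check by hand and in code, since ((x+h)^2 - (x-h)^2) / (2h) = 4xh / (2h) = 2x for any h. A tiny sketch (names are mine):

```java
// Demo: the symmetric difference quotient is exact on quadratics.
public class SymmetricDemo {
    static double f(double x) { return x * x; }

    static double central(double x, double h) {
        return (f(x + h) - f(x - h)) / (2 * h);
    }

    public static void main(String[] args) {
        // ((4.5)^2 - (3.5)^2) / 1.0 = (20.25 - 12.25) / 1.0, and every
        // intermediate value here is exactly representable in binary,
        // so the result is exactly 8.0 despite the large h.
        System.out.println(central(4.0, 0.5)); // 8.0
    }
}
```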

My question is: what is the most appropriate h, and how can it be scaled to a number of any size?

As noted in Numerical Differentiation , a suitable choice for h is sqrt(ɛ) * x , where ɛ is the machine epsilon .
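A minimal sketch of that choice, assuming machine epsilon is obtained via `Math.ulp(1.0)` and falling back to 1.0 when x is zero (both assumptions are mine):

```java
import java.util.function.DoubleUnaryOperator;

// Forward difference with h = sqrt(machine epsilon) * |x|,
// so the step scales with the magnitude of x.
public class MachineEpsStep {
    static double derivative(DoubleUnaryOperator f, double x) {
        double eps = Math.ulp(1.0);                          // ~2.22e-16
        double h = Math.sqrt(eps) * (x == 0.0 ? 1.0 : Math.abs(x));
        return (f.applyAsDouble(x + h) - f.applyAsDouble(x)) / h;
    }

    public static void main(String[] args) {
        // Works at wildly different scales because h tracks |x|.
        System.out.println(derivative(v -> v * v, 4.0));     // ~8
        System.out.println(derivative(Math::sqrt, 3.14e203)); // ~1/(2*sqrt(3.14e203))
    }
}
```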

I would look into the BigDecimal class for calculations like this; while this doesn't answer your question, BigDecimal does increase the precision of floating-point calculations.

According to the Javadoc, 11 bits represent the exponent and 52 bits the significand. Disregarding the exponent, it seems you have 52 bits to play with. So if you choose h = x * 2^-40, you've spent 40 bits on the step, and the noise you'll see in the result is about 2^-12. Adjust this ratio to your needs.
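A sketch of this power-of-two scaling, using `Math.scalb` so that computing h itself introduces no rounding error (class and method names are mine):

```java
import java.util.function.DoubleUnaryOperator;

// h = x * 2^-40: spends 40 of the 52 significand bits on the step,
// leaving roughly 12 bits (~3-4 decimal digits) of noise in the result.
public class PowerOfTwoStep {
    static double derivative(DoubleUnaryOperator f, double x) {
        double h = Math.scalb(x, -40); // exact: scaling by a power of two
        return (f.applyAsDouble(x + h) - f.applyAsDouble(x)) / h;
    }

    public static void main(String[] args) {
        System.out.println(derivative(v -> v * v, 4.0)); // ~8, good to ~2^-12
    }
}
```

Because both x and h share the same binary exponent offset, x + h never loses the step entirely, no matter how large or small x is.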
