
Curve fit fails with exponential but zunzun gets it right

I'm trying to compute the best fit of two forms of an exponential to some x, y data (the data file can be downloaded from here).

Here's the code:

from scipy.optimize import curve_fit
import numpy as np

# Get x,y data
data = np.loadtxt('data.txt', unpack=True)
xdata, ydata = data[0], data[1]

# Define first exponential function
def func(x, a, b, c):
    return a * np.exp(b * x) + c

# Get parameters estimate
popt, pcov = curve_fit(func, xdata, ydata)

print(popt)

# Define second exponential function (one more parameter)
def func2(x, a, b, c, d):
    return a * np.exp(b * x + c) + d

# Get parameters estimate
popt2, pcov2 = curve_fit(func2, xdata, ydata)

print(popt2)

The first exponential gives the exact same values as zunzun.com (PDF here) for popt:

[  7.67760545e-15   1.52175476e+00   2.15705939e-02]

but the second gives values that are clearly wrong for popt2:

[ -1.26136676e+02  -8.13233297e-01  -6.66772692e+01   3.63133641e-02]

These are the zunzun.com values (PDF here) for that same second function:

a = 6.2426224704624871E-15
b = 1.5217697532005228E+00
c = 2.0660424037614489E-01
d = 2.1570805929514186E-02

I tried making the lists arrays as recommended here: Strange result with python's (scipy) curve fitting, but that didn't help. What am I doing wrong here?


Add 1

I'm guessing the problem has to do with the lack of initial values I'm feeding my function; curve_fit silently defaults every parameter's initial value to 1 when p0 is not given (as explained here: gaussian fit with scipy.optimize.curve_fit in python with wrong results).

If I feed the estimates from the first exponential to the second one like so (making the new parameter d initially zero):

popt2, pcov2 = curve_fit(func2, xdata, ydata, p0=[popt[0], popt[1], popt[2], 0])

I get results that are much more reasonable but still wrong compared to zunzun.com:

[  1.22560853e-14   1.52176160e+00  -4.67859961e-01   2.15706930e-02]

So now the question changes to: how can I feed my second function more reasonable parameters automatically?

Zunzun.com uses the Differential Evolution genetic algorithm (DE) to find initial parameter estimates which are then passed to the Levenberg-Marquardt solver in scipy. DE is not actually used as a global optimizer per se, but rather as an "initial parameter guesser".
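As a rough illustration of that two-stage idea (this is only a sketch, not zunzun's actual code; the search bounds below are assumptions chosen for this data set), scipy's own differential_evolution can supply the p0 that curve_fit then refines:

from scipy.optimize import curve_fit, differential_evolution
import numpy as np

xdata, ydata = np.loadtxt('data.txt', unpack=True)

def func2(x, a, b, c, d):
    return a * np.exp(b * x + c) + d

def sse(params):
    # Objective for DE: sum of squared errors of func2
    return np.sum((ydata - func2(xdata, *params)) ** 2)

# Assumed search box for (a, b, c, d); widen it if the fit looks off
bounds = [(-10, 10)] * 4
de_result = differential_evolution(sse, bounds, seed=3)

# Refine the DE estimate with the local Levenberg-Marquardt solver
popt2, pcov2 = curve_fit(func2, xdata, ydata, p0=de_result.x)
print(popt2)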

You can find links to the BSD-licensed Python source code for the zunzun.com fitter at the bottom of any of the site's web pages - it has many comprehensive examples - so there is no immediate need to code it yourself. Let me know if you have any questions and I'll do my best to help.

James Phillips zunzun@zunzun.com

Note that a ≈ 0 in the zunzun estimate and in your first model, so they are really just estimating a constant. That means b in the first case, and b and c in the second case, are irrelevant and not identified.

Zunzun also uses differential evolution as a global solver, the last time I looked at it. Scipy now has basinhopping as a global optimizer that looks pretty good; it is worth a try in cases where local minima are possible.
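For completeness, here is a minimal sketch of that basinhopping idea, reusing func2, xdata and ydata from the question; the starting point, niter and the local method are arbitrary assumptions, not tuned settings:

from scipy.optimize import basinhopping
import numpy as np

def sse(params):
    # Sum of squared errors of func2 at the given parameters
    return np.sum((ydata - func2(xdata, *params)) ** 2)

# x0 = zeros and niter = 50 are assumptions; basinhopping hops between
# local minima found by the inner L-BFGS-B minimizer
result = basinhopping(sse, x0=np.zeros(4), niter=50,
                      minimizer_kwargs={'method': 'L-BFGS-B'})
print(result.x, result.fun)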

My "cheap" way, since the parameters don't have a huge range in your example: try random starting values

np.random.seed(1)
err_last = 20   # initial threshold on the sum of squared errors
best = None

for i in range(10):
    # Draw random starting values for (a, b, c, d)
    start = np.random.uniform(-10, 10, size=4)
    # Get parameters estimate
    try:
        popt2, pcov2 = curve_fit(func2, xdata, ydata, p0=start)
    except RuntimeError:
        # curve_fit raises RuntimeError when the fit fails to converge
        continue
    err = ((ydata - func2(xdata, *popt2))**2).sum()
    if err < err_last:
        err_last = err
        print(err)
        best = popt2


za = 6.2426224704624871E-15
zb = 1.5217697532005228E+00
zc = 2.0660424037614489E-01
zd = 2.1570805929514186E-02

zz = np.array([za, zb, zc, zd])
print('zz', zz)
print('cf', best)

print('zz', ((ydata - func2(xdata, *zz))**2).sum())
print('cf', err_last)

The last part prints the following (zz is zunzun, cf is curve_fit):

zz [  6.24262247e-15   1.52176975e+00   2.06604240e-01   2.15708059e-02]
cf [  1.24791299e-16   1.52176944e+00   4.11911831e+00   2.15708019e-02]
zz 9.52135153898
cf 9.52135153904

Different parameters than Zunzun for b and c, but the same residual sum of squares.

Addition

a * np.exp(b * x + c) + d = np.exp(b * x + (c + np.log(a))) + d

or

a * np.exp(b * x + c) + d = (a * np.exp(c)) * np.exp(b * x) + d

The second function isn't really different from the first. a and c are not separately identified; only the product a * np.exp(c) matters. So optimizers that use derivative information will also have problems, because the Jacobian is singular in some directions, if I see this correctly.
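A quick numeric check of that: only the product a * np.exp(c) is identified, and the two solutions printed above agree on it (parameter values copied from the zz/cf output):

import numpy as np

# Values copied from the zz/cf printout above
zz_a, zz_c = 6.24262247e-15, 2.06604240e-01
cf_a, cf_c = 1.24791299e-16, 4.11911831e+00

print(zz_a * np.exp(zz_c))   # ~7.68e-15
print(cf_a * np.exp(cf_c))   # ~7.68e-15: the same effective amplitude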
