Curve fit fails with exponential but zunzun gets it right
I'm trying to compute the best fit of two forms of an exponential to some x, y data (the data file can be downloaded from here).
Here's the code:
from scipy.optimize import curve_fit
import numpy as np

# Get x, y data
data = np.loadtxt('data.txt', unpack=True)
xdata, ydata = data[0], data[1]

# Define first exponential function
def func(x, a, b, c):
    return a * np.exp(b * x) + c

# Get parameters estimate
popt, pcov = curve_fit(func, xdata, ydata)
print(popt)

# Define second exponential function (one more parameter)
def func2(x, a, b, c, d):
    return a * np.exp(b * x + c) + d

# Get parameters estimate
popt2, pcov2 = curve_fit(func2, xdata, ydata)
print(popt2)
The first exponential gives the exact same values as zunzun.com (PDF here) for popt:

[ 7.67760545e-15 1.52175476e+00 2.15705939e-02]
but the second gives values that are clearly wrong for popt2:

[ -1.26136676e+02 -8.13233297e-01 -6.66772692e+01 3.63133641e-02]
These are the zunzun.com values (PDF here) for that same second function:

a = 6.2426224704624871E-15
b = 1.5217697532005228E+00
c = 2.0660424037614489E-01
d = 2.1570805929514186E-02
I tried making the lists arrays as recommended here: Strange result with python's (scipy) curve fitting, but that didn't help. What am I doing wrong here?
I'm guessing the problem has to do with the lack of initial values I'm feeding my function (as explained here: gaussian fit with scipy.optimize.curve_fit in python with wrong results).
If I feed the estimates from the first exponential to the second one like so (making the new parameter d initially zero):
popt2, pcov2 = curve_fit(func2, xdata, ydata, p0 = [popt[0], popt[1], popt[2], 0])
I get results that are much more reasonable, but still wrong compared to zunzun.com:

[ 1.22560853e-14 1.52176160e+00 -4.67859961e-01 2.15706930e-02]
So now the question changes to: how can I feed my second function more reasonable parameters automatically?
Zunzun.com uses the Differential Evolution genetic algorithm (DE) to find initial parameter estimates, which are then passed to the Levenberg-Marquardt solver in scipy. DE is not actually used as a global optimizer per se, but rather as an "initial parameter guesser".
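That DE-then-Levenberg-Marquardt pipeline can be sketched with scipy's own differential_evolution. This is a rough sketch on synthetic data (the original data.txt is not reproduced here), and the bounds, seed, and true parameters are illustrative assumptions, not zunzun's actual settings:

```python
import numpy as np
from scipy.optimize import curve_fit, differential_evolution

def func2(x, a, b, c, d):
    return a * np.exp(b * x + c) + d

# Synthetic data standing in for data.txt
rng = np.random.default_rng(0)
xdata = np.linspace(0, 5, 50)
ydata = func2(xdata, 0.5, 1.2, 0.3, 0.1) + rng.normal(0, 0.5, size=50)

# DE minimizes the sum of squared residuals over a bounded box
def sse(params):
    return np.sum((ydata - func2(xdata, *params)) ** 2)

result = differential_evolution(sse, bounds=[(-10, 10)] * 4, seed=1)

# The DE solution is only an initial guess; Levenberg-Marquardt refines it
popt, pcov = curve_fit(func2, xdata, ydata, p0=result.x, maxfev=10000)
print(popt)
```

The key point is that DE never has to be very accurate: it only has to land in the right basin for the local solver to finish the job.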
You can find links to the BSD-licensed Python source code for the zunzun.com fitter at the bottom of any of the site's web pages - it has many comprehensive examples - so there is no immediate need to code it yourself. Let me know if you have any questions and I'll do my best to help.
James Phillips zunzun@zunzun.com
Note that a=0 in the estimate by zunzun and in your first model. So they are just estimating a constant. So b in the first case, and b and c in the second case, are irrelevant and not identified.
Zunzun also uses differential evolution as a global solver, the last time I looked at it. Scipy now has basinhopping as a global optimizer that looks pretty good; it is worth a try in cases where local minima are possible.
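A minimal basinhopping sketch on the same kind of least-squares objective (synthetic data again, since data.txt is not included; the starting point, niter, and exponent clipping are my own illustrative choices):

```python
import numpy as np
from scipy.optimize import basinhopping

def func2(x, a, b, c, d):
    return a * np.exp(b * x + c) + d

# Synthetic data standing in for data.txt
rng = np.random.default_rng(0)
xdata = np.linspace(0, 5, 50)
ydata = func2(xdata, 0.5, 1.2, 0.3, 0.1) + rng.normal(0, 0.5, size=50)

def sse(params):
    a, b, c, d = params
    # Clip the exponent so the hopper can wander without overflowing
    return np.sum((ydata - (a * np.exp(np.clip(b * xdata + c, -50.0, 50.0)) + d)) ** 2)

result = basinhopping(sse, x0=[1.0, 1.0, 0.0, 0.0], niter=50, seed=1)
print(result.x, result.fun)
```

basinhopping alternates random perturbations with local minimizations and keeps the best local minimum found, so it can escape the bad basin that plain curve_fit falls into here.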
My "cheap" way, since the parameters don't have a huge range in your example: try random starting values.
np.random.seed(1)
err_last = 20
best = None
for i in range(10):
    start = np.random.uniform(-10, 10, size=4)
    # Get parameters estimate
    try:
        popt2, pcov2 = curve_fit(func2, xdata, ydata, p0=start)
    except RuntimeError:
        continue
    err = ((ydata - func2(xdata, *popt2))**2).sum()
    if err < err_last:
        err_last = err
        print(err)
        best = popt2

za = 6.2426224704624871E-15
zb = 1.5217697532005228E+00
zc = 2.0660424037614489E-01
zd = 2.1570805929514186E-02
zz = np.array([za, zb, zc, zd])
print('zz', zz)
print('cf', best)
print('zz', ((ydata - func2(xdata, *zz))**2).sum())
print('cf', err_last)
The last part prints (zz is zunzun, cf is curve_fit):

zz [ 6.24262247e-15 1.52176975e+00 2.06604240e-01 2.15708059e-02]
cf [ 1.24791299e-16 1.52176944e+00 4.11911831e+00 2.15708019e-02]
zz 9.52135153898
cf 9.52135153904
Different parameters than Zunzun for b and c, but the same residual sum of squares.
Addition
a * np.exp(b * x + c) + d = np.exp(b * x + (c + np.log(a))) + d
or
a * np.exp(b * x + c) + d = (a * np.exp(c)) * np.exp(b * x) + d
The second function isn't really different from the first function. a and c are not separately identified. So optimizers that use derivative information will also have problems, because the Jacobian is singular in some directions, if I see this correctly.
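That non-identifiability is easy to verify numerically (the parameter values below are arbitrary): scaling a by exp(s) while subtracting s from c leaves the curve unchanged, so no data set can distinguish the two parameter vectors.

```python
import numpy as np

def func2(x, a, b, c, d):
    return a * np.exp(b * x + c) + d

x = np.linspace(0, 5, 11)
s = 0.5
y1 = func2(x, 2.0, 1.5, 0.5, 0.1)
y2 = func2(x, 2.0 * np.exp(s), 1.5, 0.5 - s, 0.1)  # same a*exp(c), same curve
print(np.allclose(y1, y2))  # True
```

This is why curve_fit and zunzun can report different (a, c) pairs yet achieve the identical residual sum of squares seen above.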