
Fitting an exponential curve to numerical data in python

I have a set of data that I would like to fit with an exponential curve using python. I looked at several examples and came up with the following script.

from pylab import rc
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy import optimize


def model_func(t, A, K, C):
    return A * np.exp(-K*t) + C

def fit_exp_nonlinear(t, y):
    opt_parms, parm_cov = optimize.curve_fit(model_func, t, y, maxfev=10000)
    A, K, C = opt_parms
    return A, K, C


pi_1_3 = np.array([186774.67608906,  119480.98881671,   81320.7163753 ,
         58031.00546565,   42990.04111078,   32826.01571713,
         25700.16102843,   20548.41187945,   16726.83116009,
         13828.25419449,   11587.30623606,    9825.34010814,
          8419.5903399 ,    7283.07702701,    6353.54595154,
          5585.14190098,    4943.96579672,    4404.19434404,
          3946.30532107,    3555.02206224,    3218.38622266,
          2927.06387001,    2673.47535718,    2451.61460839,
          2256.54734046,    2084.28211566,    1931.48280645,
          1795.43700167,    1673.85361378,    1564.82468004,
          1466.7273566 ,    1378.20995936,    1298.09333835,
          1225.40136541,    1159.27214181,    1098.96372522,
          1043.84974141,     993.36880068,     947.04676203,
           904.45522954,     865.23010028,     829.04001802,
           795.60459963,     764.66375969,     735.99814339,
           709.40108872,     684.69988262,     661.73223448,
           640.35399121,     620.44025093,     601.87371066,
           584.54929565,     568.37272058,     553.26381714,
           539.14562691,     525.94472176,     513.59920745,
           502.05319667,     491.25259209,     481.15219684,
           471.70448455,     462.87002943,     454.61638734,
           446.90526636,     439.70525636,     432.99373968,
           426.73945924,     420.91972403,     415.50905545,
           410.49218343,     405.84570284,     401.55105219,
           397.59401523,     393.95737052,     390.62344145,
           387.58306896,     384.81946995,     382.319658  ,
           380.06982549,     378.05960051,     376.27758291,
           374.70924213,     373.34528829,     372.17106384,
           371.17691362,     370.34772485,     369.67316117,
           369.13731143,     368.72679983,     368.42660804,
           368.22202786,     368.09453347,     368.02778592,
           368.00352008,     367.99947609])


pi_2_3 = np.array([ 0.96762688,  0.96645461,  0.96574696,  0.96541101,  0.96521801,
        0.96519657,  0.96540386,  0.96556112,  0.96589707,  0.96634024,
        0.96686919,  0.9675554 ,  0.96833452,  0.96922802,  0.97023588,
        0.97134382,  0.97257327,  0.97390279,  0.97534668,  0.97689778,
        0.97855611,  0.98032166,  0.98219442,  0.98417441,  0.98626876,
        0.98846319,  0.99076483,  0.9931737 ,  0.99556112,  0.99834167,
        1.00103645,  1.00386705,  1.00680486,  1.00985704,  1.012995  ,
        1.01623302,  1.01956397,  1.022995  ,  1.02651894,  1.03015726,
        1.0338456 ,  1.03764832,  1.04152966,  1.04549678,  1.04953538,
        1.05365261,  1.05784132,  1.0621015 ,  1.06641887,  1.07080057,
        1.07523231,  1.07970693,  1.0842173 ,  1.08877055,  1.09335239,
        1.09794853,  1.10255897,  1.10716941,  1.11177269,  1.11636169,
        1.12092924,  1.12546104,  1.12994282,  1.13437455,  1.13874196,
        1.14303074,  1.14723374,  1.15132952,  1.15531094,  1.15917798,
        1.16290922,  1.1664975 ,  1.16992852,  1.17320229,  1.1763045 ,
        1.17922087,  1.18195854,  1.18450322,  1.18686204,  1.18902073,
        1.19097212,  1.19273052,  1.19429593,  1.19566833,  1.19686204,
        1.19787706,  1.19872051,  1.19941387,  1.19997141,  1.20040029,
        1.20070765,  1.20092924,  1.20108649,  1.20118656,  1.20125804])



A, K, C = fit_exp_nonlinear(pi_2_3,pi_1_3)

fit_y = model_func(pi_2_3, A, K, C)

print(A, K, C)


fig = plt.figure(num=None, figsize=(8,16), dpi=80, facecolor='w', edgecolor='k', linewidth= 2.0, frameon=True)



fig = plt.xlabel(r'$\pi_1$', fontsize=17)
fig = plt.ylabel(r'$\pi_2$', fontsize=17)
fig = plt.autoscale(enable=True, axis='both', tight=False)
#fig = plt.ylim(0, 5)
#fig = plt.xlim(-0.1, 5e+7)

fig = plt.plot(pi_2_3, pi_1_3, linewidth=1.5, color='g', linestyle = '--', marker='s', markeredgecolor='g', markerfacecolor='white', label='measured')

fig = plt.plot(pi_2_3, fit_y, linewidth=1.5, color='k', linestyle = '-', marker='', markeredgecolor='k', markerfacecolor='white', label='fitted')


fig=plt.legend()

figE = plt.show()

However, as can be seen, the estimates of A, K, and C don't seem to be good enough (?). Is there a better way to fit an exponential curve to this data?

Also, is there a library in python for nonlinear regression analysis?

Thanks in advance.

Your fit is poor for two reasons:

  1. Your model is not a particularly good match for your data.
  2. The fit is numerically ill-conditioned.

You can improve the numerical conditioning of the model by folding the constant A into the exponent: exp(-K*(t - t_0)) + C. The results you get that way are about as good as with the plain exponential model. Fitting in log space doesn't help either (and it isn't really an option anyway, as long as your model contains the additive constant C).
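A minimal sketch of that reparameterization (shifted_model_func and t_0 are illustrative names that are not in the original code; the amplitude A corresponds to exp(K * t_0)):

import numpy as np
from scipy import optimize

def shifted_model_func(t, K, C, t_0):
    # Same family of curves as A * np.exp(-K*t) + C, but with the amplitude
    # expressed as a shift t_0 (A = np.exp(K * t_0)), which keeps the fitted
    # parameters on a more moderate scale.
    return np.exp(-K * (t - t_0)) + C

# e.g. opt_parms, parm_cov = optimize.curve_fit(shifted_model_func, pi_2_3, pi_1_3, maxfev=10000)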

Sometimes optimize.curve_fit needs a little help with an initial guess for the right parameters. You can pass such a guess to curve_fit via the p0 argument.
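For example (a sketch only; these particular numbers are placeholder guesses, not values from the original answer):

A0, K0, C0 = 2e5, 10.0, 400.0   # placeholder guesses -- use values plausible for your data
opt_parms, parm_cov = optimize.curve_fit(model_func, pi_2_3, pi_1_3,
                                         p0=(A0, K0, C0), maxfev=10000)
A, K, C = opt_parms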

You may have some independent way of guessing A, K, and C from theory or from the data itself. For example, where -K*t is very large in magnitude (and negative), you can estimate C as the value of your data there, because the A * np.exp(-K * t) term should be essentially zero (assuming K is not very small). In any case, the theory should tell you which values of A, K, and C are reasonable, and you can use those as the guess.
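A rough, data-driven version of that idea might look like this (a sketch only, assuming pi_2_3 plays the role of t and pi_1_3 the role of y, as in the question's call to fit_exp_nonlinear):

# The tail of the data approximates C: the exponential term has mostly
# decayed at the largest t.
C_guess = pi_1_3[np.argmax(pi_2_3)]
# Very crude amplitude guess from the value at the smallest t.
A_guess = pi_1_3[np.argmin(pi_2_3)] - C_guess
K_guess = 1.0   # order-of-magnitude placeholder; refine from theory if you can
# ...then pass (A_guess, K_guess, C_guess) to curve_fit as p0, as shown above.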

Here is a two-pass approach that seems to work without any prior guess. It first fits the data to a partial_func model that has no constant term. It then uses the A and K parameters from that fit as the guess for a curve_fit against model_func:

def model_func(t, A, K, C):
    return A * np.exp(-K * t) + C


def partial_func(t, A, K):
    return A * np.exp(-K * t)


def fit_exp_nonlinear(t, y):
    # First pass: fit without the constant term to get rough values for A and K.
    opt_parms, parm_cov = optimize.curve_fit(partial_func, t, y, maxfev=10000)
    A, K = opt_parms
    # Second pass: use (A, K, 0) as the initial guess for the full model.
    opt_parms, parm_cov = optimize.curve_fit(model_func, t, y, p0=(A, K, 0))
    A, K, C = opt_parms
    return A, K, C

This yields:

[plot of the measured data and the fitted exponential curve]

A, K, C = (2.3961062737821128e+73, 163.82812722558725, 338.80276054827715)

The fit isn't great, but it's probably about as good as it's going to get, given that the data don't look especially close to exponential.
