
Fitting two Lorentzians with lmfit (Model / minimize) in Python

Maybe someone can help me. I have spent days on this, but I couldn't get past the problem. Thanks in advance.

I want to fit two Lorentzians to my experimental data. I broke my equations down into the simple forms of the two Lorentzians, the functions lorentz1 and lorentz2. I then defined two more functions, L1 and L2, which just multiply each of them by a constant cnst. That gives four parameters to fit: cnst1, cnst2, tau1, tau2.
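
To be explicit, the total model being fitted is the sum of the two (this matches the functions defined in the code below):

    L_total(x) = cnst1 * tau1 / (1 + (x*tau1)**2)  +  cnst2 * tau2**2 / (1 + (x*tau2)**2)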

I use lmfit, both Model and minimize (as far as I can tell, both use the same method under the hood).

The initial parameters are set so that, visually, the starting curve is already close to a decent fit. But the lmfit minimization wanders off (first image below):


[Figure: fit with unconstrained parameters]

using these parameters:

params.add('cnst1', value=1e3 , min=1e2, max=1e5)
params.add('cnst2', value=3e5, min=1e2, max=1e6)
params.add('tau1', value=2e0, min=0, max=1e2)
params.add('tau2', value=5e-3, min=0, max=10)

but error percentages are low:

cnst1:   117.459806 +/- 14.67188 (12.49%) (init= 1000)
cnst2:   413.959032 +/- 44.21042 (10.68%) (init= 300000)
tau1:    11.0343531 +/- 1.065570 (9.66%) (init= 2)
tau2:    1.55259664 +/- 0.125853 (8.11%) (init= 0.005)

On the other hand, constraining the parameters to stay very close to the initial values (effectively forcing them to remain near the initial guess):

[Figure: fit with tightly constrained parameters]

using parameters:

params.add('cnst1', value=1e3, min=0.1e3, max=1e3)
params.add('cnst2', value=3e5, min=1e3, max=1e6)
params.add('tau1', value=2e0, min=0, max=2)
params.add('tau2', value=5e-3, min=0, max=10)

the fit is visually better, but the error values are huge:

[[Variables]]
cnst1:   752.988629 +/- 221.3098 (29.39%) (init= 1000)
cnst2:   3.0159e+05 +/- 3.05e+07 (10113.40%) (init= 300000)
tau1:    1.99684317 +/- 0.600748 (30.08%) (init= 2)
tau2:    0.00497806 +/- 0.289651 (5818.56%) (init= 0.005)

Here is the full code:

import numpy as np
from lmfit import Model, minimize, Parameters, report_fit
import matplotlib.pyplot as plt

x = np.array([0.02988, 0.07057,0.19365,0.4137,0.91078,1.85075,3.44353,6.39428,\
        11.99302,24.37024,52.58804,121.71927,221.53799,358.27392,464.70405])

y = 1.0 / np.array([4.60362E-4,5.63559E-4,8.44538E-4,0.00138,0.00287,0.00657,0.01506,\
            0.03119,0.0584,0.09153,0.12538,0.19389,0.34391,0.68869,1.0])

def lorentz1(x, tau):
    L =  tau  / ( 1 + (x*tau)**2 )  
    return(L)

def lorentz2(x, tau):
    L =  tau**2  / ( 1 + (x*tau)**2 )  
    return(L)

def L1(x,cnst1,tau1):
    L1 =  cnst1 * lorentz1(x,tau1)
    return (L1)

def L2(x, cnst2, tau2):
    L2 =  cnst2 * lorentz2(x,tau2)
    return (L2)    

def L_min(params, x, y):
    cnst1 = params['cnst1'].value
    cnst2 = params['cnst2'].value
    tau1 = params['tau1'].value
    tau2 = params['tau2'].value

    L_total = L1(x, cnst1, tau1) + L2(x, cnst2, tau2)
    resids = L_total - y
    return resids

#params  = mod.make_params( cnst1=10e2, cnst2=3e5, tau1=2e0, tau2=0.5e-2)
params = Parameters()
#params.add('cnst1', value=1e3 , min=0.1e3, max=1e3)
#params.add('cnst2', value=3e5, min=1e3, max=1e6)
#params.add('tau1', value=2e0, min=0, max=2)
#params.add('tau2', value=5e-3, min=0, max=10)

params.add('cnst1', value=1e3 , min=1e2, max=1e5)
params.add('cnst2', value=3e5, min=1e2, max=1e6)
params.add('tau1', value=2e0, min=0, max=1e2)
params.add('tau2', value=5e-3, min=0, max=10)


#1-----Model--------------------
mod = Model(L1) + Model(L2)
result_mod = mod.fit(y, params, x=x)
print('---results from lmfit.Model----')
print(result_mod.fit_report())

#2---minimize-----------
result_min = minimize(L_min, params, args=(x, y))
final_min = y + result_min.residual
print('---results from lmfit.minimize----')
# the fitted parameters are stored on the result object, not on the original params
report_fit(result_min.params)

#-------Plot------
plt.close('all')
plt.loglog(x, y,'bo' , label='experimental data')
plt.loglog(x, result_mod.init_fit, 'k--', label='initial')
plt.loglog(x, result_mod.best_fit, 'r-', label='final')
plt.legend()
plt.show()

Searching for something else, Google brought up my own question from a while ago. Now I know the answer, so I am posting it here. I hope it helps someone. :)

I will consider the lmfit.minimize route. The changes I made were to also plot the result of lmfit.minimize, and to deal with the logarithmic y-scale (the main problem, also mentioned by @mdurant): I simply divided the residuals by their y-values, which normalizes all data points so they contribute comparably to the fit. I call these weighted residuals.

def L_min(params, x, y):
    ...
    ..
    .
    resids = L_total - y
    weighted_resids = resids/y
    return weighted_resids
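
The same weighting can, I believe, also be applied on the lmfit.Model side by passing a weights array to fit (lmfit multiplies the residual by these weights), which should make the Model result agree with the minimize result. A minimal sketch:

    # sketch: weight each point by 1/y so relative rather than absolute
    # deviations are minimized, matching the weighted_resids approach above
    result_mod = mod.fit(y, params, x=x, weights=1.0/y)
    print(result_mod.fit_report())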

So the result is shown by the blue line:

[Figure: new fit using weighted residuals (lmfit.minimize result, blue line)]

And the full code:

import numpy as np
from lmfit import Model, minimize, Parameters, report_fit
import matplotlib.pyplot as plt

x = np.array([0.02988, 0.07057,0.19365,0.4137,0.91078,1.85075,3.44353,6.39428,\
        11.99302,24.37024,52.58804,121.71927,221.53799,358.27392,464.70405])

y = 1.0 / np.array([4.60362E-4,5.63559E-4,8.44538E-4,0.00138,0.00287,0.00657,0.01506,\
            0.03119,0.0584,0.09153,0.12538,0.19389,0.34391,0.68869,1.0])

def lorentz1(x, tau):
    L =  tau  / ( 1 + (x*tau)**2 )  
    return(L)

def lorentz2(x, tau):
    L =  tau**2  / ( 1 + (x*tau)**2 )  
    return(L)

def L1(x,cnst1,tau1):
    L1 =  cnst1 * lorentz1(x,tau1)
    return (L1)

def L2(x, cnst2, tau2):
    L2 =  cnst2 * lorentz2(x,tau2)
    return (L2)    

def L_min(params, x, y):
    cnst1 = params['cnst1'].value
    cnst2 = params['cnst2'].value
    tau1 = params['tau1'].value
    tau2 = params['tau2'].value

    L_total = L1(x, cnst1, tau1) + L2(x, cnst2, tau2)
    resids = L_total - y
    weighted_resids = resids/y
    return weighted_resids
#    return resids

#params  = mod.make_params( cnst1=10e2, cnst2=3e5, tau1=2e0, tau2=0.5e-2)
params = Parameters()
#params.add('cnst1', value=1e3 , min=0.1e3, max=1e3)
#params.add('cnst2', value=3e5, min=1e3, max=1e6)
#params.add('tau1', value=2e0, min=0, max=2)
#params.add('tau2', value=5e-3, min=0, max=10)

params.add('cnst1', value=1e3 , min=1e2, max=1e5)
params.add('cnst2', value=3e5, min=1e2, max=1e6)
params.add('tau1', value=2e0, min=0, max=1e2)
params.add('tau2', value=5e-3, min=0, max=10)


#1-----Model--------------------
mod = Model(L1) + Model(L2)
result_mod = mod.fit(y, params, x=x)
print('---results from lmfit.Model----')
print(result_mod.fit_report())

#2---minimize-----------
result_min = minimize(L_min, params, args=(x, y))
# L_min returns weighted residuals (model - y)/y, so recover the fitted curve accordingly
final_min = y * (1.0 + result_min.residual)
print('---results from lmfit.minimize----')
# the fitted parameters are stored on the result object, not on the original params
report_fit(result_min.params)

#-------Plot------
plt.close('all')
plt.loglog(x, y,'bo' , label='experimental data')
plt.loglog(x, result_mod.init_fit, 'k--', label='initial')
plt.loglog(x, result_mod.best_fit, 'r-', label='lmfit.Model')

min_result = L1(x, result_min.params['cnst1'].value, result_min.params['tau1'].value) + \
            L2(x, result_min.params['cnst2'].value, result_min.params['tau2'].value)
plt.loglog(x, min_result, 'b-', label='lmfit.minimize')
plt.legend()
plt.show()

and the fitting errors look reasonable:

[[Variables]]
    cnst1:   832.592441 +/- 77.32939 (9.29%) (init= 1000)
    cnst2:   2.0836e+05 +/- 3.55e+04 (17.04%) (init= 300000)
    tau1:    1.64355457 +/- 0.221466 (13.47%) (init= 2)
    tau2:    0.00700899 +/- 0.000935 (13.34%) (init= 0.005)
[[Correlations]] (unreported correlations are <  0.100)
    C(tau1, tau2)                =  0.151 
