
ValueError: too many values to unpack - scipy's fmin vs least_squares

I would like to implement scipy's optimize.least_squares instead of fmin in the code below (almost at the bottom of the code). However, when I change the fmin part to least_squares I run into the following error and do not know how to solve it:

ValueError: too many values to unpack

What does this error message mean and how can I solve it?

def M76_error_function_FFT(p0):
    ''' Error function for calibrating the Merton (1976) jump-diffusion
    model via FFT-based call pricing.

    Parameters
    ==========
    sigma: float
        volatility factor in diffusion term
    lamb: float
        jump intensity
    mu: float
        expected jump size
    delta: float
        standard deviation of jump

    Returns
    =======
    RMSE: float
        root mean squared error
    '''
    global i, min_RMSE
    sigma, lamb, mu, delta = p0
    if sigma < 0.0 or delta < 0.0 or lamb < 0.0:
        return 500.0
    se = []
    for row, option in options.iterrows():
        T = (option['Maturity'] - option['Date']).days / 365.
        r = option['Interest']
        # absolute_error = abs(option['Call'] - option['Model'])
        model_value = M76_value_call_FFT(S0, option['Strike'], T,
                                         r, sigma, lamb, mu, delta)
        se.append((model_value - option['Call']) ** 2)
    RMSE = math.sqrt(sum(se) / len(se))
    min_RMSE = min(min_RMSE, RMSE)
    if i % 50 == 0:
        print '%4d |' % i, np.array(p0), '| %7.3f | %7.3f' % (RMSE, min_RMSE)
        print
    i += 1
    return RMSE

def generate_plot(opt, options):
    #
    # Calculating Model Prices
    #
    sigma, lamb, mu, delta = opt
    options['Model'] = 0.0
    for row, option in options.iterrows():
        T = (option['Maturity'] - option['Date']).days / 365.
        r = option['Interest']
        options.loc[row, 'Model'] = M76_value_call_FFT(S0, option['Strike'],
                                                       T, r, sigma, lamb, mu, delta)

    #
    # Plotting
    #
    mats = sorted(set(options['Maturity']))
    options = options.set_index('Strike')
    for i, mat in enumerate(mats):
        options[options['Maturity'] == mat][['Call', 'Model']].\
            plot(style=['bs', 'r--'], title='%s' % str(mat)[:10],
                 grid=True)
        plt.ylabel('Call Value')
        plt.savefig('img/M76_calibration_3_%s.pdf' % i)

if __name__ == '__main__':
    #
    # Calibration
    #
    i = 0  # counter initialization
    min_RMSE = 100  # minimal RMSE initialization
    print('BRUTE')
    p0 = sop.brute(M76_error_function_FFT, ((0, 0.01, 1),
                   (0, 0.3, 5), (-10, 0.01, 10),
                   (0, 0.01, 40)), finish=None)

    # p0 = [0.15, 0.2, -0.3, 0.2]
    print('FMIN')
    opt = sop.fmin(M76_error_function_FFT, p0,
                   maxiter=500, maxfun=750,
                   xtol=0.000001, ftol=0.000001)

    generate_plot(opt, options)

    print options

What is your problem? The fmin syntax is:

import scipy.optimize as sop
import numpy as np

def ftest(x):
    x, y, z = x
    data = np.array([3.65, 2.41, 1.59])   # arbitrary & just for testing
    model = np.array([x**2, y**2, z**2])  # arbitrary & just for testing
    residuals = data - model
    return residuals

def ftest2(x):
    return sum(ftest(x)**2)  # asks fmin to minimize the chi2, but it can be any scalar

p0 = np.array([1, 2, 3])  # arbitrary & just for testing
opt = sop.fmin(ftest2, p0)

For least_squares you replace the last line with:

opt = sop.least_squares(ftest, p0)

Note that ftest returns the residuals, while ftest2 returns the scalar you want to minimize.
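
One difference that may well be the direct cause of your ValueError: fmin returns the optimal parameter array itself, whereas least_squares returns an OptimizeResult object and keeps the fitted parameters in its .x attribute. Anything that later unpacks the result (like the sigma, lamb, mu, delta = opt line in your generate_plot) therefore has to use .x. A small sketch, reusing ftest, ftest2 and p0 from above:

opt = sop.fmin(ftest2, p0)          # plain ndarray with the optimal parameters
x, y, z = opt                       # unpacking the array works

res = sop.least_squares(ftest, p0)  # OptimizeResult object, not an array
x, y, z = res.x                     # the fitted parameters live in the .x attribute
# x, y, z = res                     # unpacking the result object itself raises
#                                   # "ValueError: too many values to unpack"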

Be careful with fmin. Since it uses the Nelder-Mead simplex algorithm, you can never be sure it converges in a reasonable time. For instance, if you replace the 'model' line with:

model = np.array([x+y**2+z**3 , x+y+z , x-y**2-z])

you do not converge with fmin. You can see this by comparing the outputs of:

opt = sop.least_squares(ftest, p0)
opt2 = sop.fmin(ftest2, p0, xtol=1e-20, ftol=1e-20, maxfun=1e20, maxiter=5e4)
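
Applied to your calibration code, this means passing least_squares a function that returns the vector of pricing errors instead of the RMSE, and unpacking result.x afterwards. A rough sketch, reusing options, S0, p0 and M76_value_call_FFT from your script (M76_residuals_FFT is just a hypothetical name for a residual-based variant of your error function):

def M76_residuals_FFT(p0):
    ''' Residual version of M76_error_function_FFT: returns the vector of
    pricing errors (model minus market) instead of a single RMSE. '''
    sigma, lamb, mu, delta = p0
    if sigma < 0.0 or delta < 0.0 or lamb < 0.0:
        return np.full(len(options), 500.0)  # keep the penalty, but as a vector
    res = []
    for row, option in options.iterrows():
        T = (option['Maturity'] - option['Date']).days / 365.
        r = option['Interest']
        model_value = M76_value_call_FFT(S0, option['Strike'], T,
                                         r, sigma, lamb, mu, delta)
        res.append(model_value - option['Call'])
    return np.array(res)

result = sop.least_squares(M76_residuals_FFT, p0)  # replaces the sop.fmin(...) call
opt = result.x                                     # parameter array, as fmin returned
generate_plot(opt, options)

least_squares also takes a bounds argument, so instead of the 500.0 penalty you could constrain sigma, lamb and delta to be non-negative with something like bounds=([0, 0, -np.inf, 0], np.inf).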
