
numpy/scipy analog of matlab's fminsearch

I am converting some Matlab code into Python using numpy. Everything went pretty smoothly, but recently I encountered the fminsearch function.

So, to cut it short: is there an easy way to do something like this in Python:

banana = @(x)100*(x(2)-x(1)^2)^2+(1-x(1))^2;
[x,fval] = fminsearch(banana,[-1.2, 1])

which will return

x = 1.0000    1.0000
fval = 8.1777e-010

So far I have not found anything similar in numpy. The closest thing I found is scipy.optimize.fmin, whose docstring says it will

Minimize a function using the downhill simplex algorithm.

But right now I cannot figure out how to write the above-mentioned Matlab code using this function.

It's just a straightforward conversion from Matlab syntax to Python syntax:

import scipy.optimize

banana = lambda x: 100*(x[1]-x[0]**2)**2+(1-x[0])**2
xopt = scipy.optimize.fmin(func=banana, x0=[-1.2,1])

with output:

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 85
         Function evaluations: 159
array([ 1.00002202,  1.00004222])
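If you also want the minimum function value, matching Matlab's two-output form `[x,fval] = fminsearch(...)`, you can pass `full_output=True` to `scipy.optimize.fmin`, which then returns the optimum point together with the function value and some bookkeeping counters. A minimal sketch:

```python
import scipy.optimize

# The Rosenbrock "banana" function from the question
banana = lambda x: 100*(x[1]-x[0]**2)**2 + (1-x[0])**2

# full_output=True returns (xopt, fopt, iterations, funcalls, warnflag);
# disp=False suppresses the convergence printout
xopt, fopt, n_iter, n_calls, warnflag = scipy.optimize.fmin(
    func=banana, x0=[-1.2, 1], full_output=True, disp=False)

print(xopt)  # close to [1, 1]
print(fopt)  # close to 0
```

Here `fopt` plays the role of Matlab's `fval`.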

fminsearch implements the Nelder-Mead method; see the References section of the Matlab documentation: http://www.mathworks.com/help/matlab/ref/fminsearch.html .

To find its equivalent in scipy, just check the docstrings of the methods provided in scipy.optimize. See: http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.html#scipy.optimize.fmin . fmin also implements the Nelder-Mead method.
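In newer scipy versions there is also a unified front end, `scipy.optimize.minimize`, where Nelder-Mead is selected by name via the `method` argument. A minimal sketch of the same problem through that interface:

```python
from scipy.optimize import minimize

# The same banana function as above
banana = lambda x: 100*(x[1]-x[0]**2)**2 + (1-x[0])**2

# method='Nelder-Mead' uses the same downhill simplex algorithm as fmin
res = minimize(banana, x0=[-1.2, 1], method='Nelder-Mead')

print(res.x)    # optimum point, close to [1, 1]
print(res.fun)  # function value at the optimum, close to 0
```

The result object bundles the point (`res.x`), the value (`res.fun`), and a success flag, which maps nicely onto Matlab's `[x,fval]` pair.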

The names do not always translate directly from Matlab to scipy and are sometimes even misleading. For example, Matlab's bounded scalar minimizer fminbnd (which is based on Brent's method) corresponds to scipy.optimize.fminbound, while scipy.optimize.brentq, despite the similar name, is a root finder rather than a minimizer. So checking the docstrings is always a good idea.
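The naming pitfall can be seen with a toy scalar function (an illustrative example, not from the original question): `fminbound` looks for a minimum on an interval, while `brentq` looks for a sign change, so they answer different questions despite both tracing back to Brent:

```python
from scipy.optimize import fminbound, brentq

# f has its minimum at x = 2 and roots at x = 1 and x = 3
f = lambda x: (x - 2)**2 - 1

# fminbound: bounded scalar minimization (the analog of Matlab's fminbnd)
xmin = fminbound(f, 0, 4)   # close to 2

# brentq: Brent's ROOT finder; needs a bracket where f changes sign
root = brentq(f, 2, 5)      # close to 3
```

Calling `brentq` where you meant `fminbound` fails loudly (no sign change in the bracket) or, worse, silently returns a root instead of a minimum.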
