
Non-linear Least Squares: Reproducing Matlab's lsqnonlin with scipy.optimize.least_squares using Levenberg-Marquardt

I am trying to minimize a function that takes a 1-D array of length N and returns a scalar, using Levenberg-Marquardt (LM).

It works in Matlab:

beta_initial = [-0.7823, -0.1441, -0.7669]; 

% substitution for my long, convoluted function
% but it also works with the proper function
F = @(beta) sum(exp(beta))+3; 

options = optimset('Algorithm','Levenberg-Marquardt');

beta_arma = lsqnonlin(F,beta_initial,[],[],options) % -21.7814  -15.9156  -21.5420

F(beta_arma) % 3
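For reference (this observation is mine, not from the original post): lsqnonlin minimizes the sum of squared residuals, so with a scalar F it effectively minimizes F(beta)^2. Since the exponentials are strictly positive, the infimum of F is 3, which matches the Matlab output. A quick check of the reported Matlab solution in Python:

```python
import numpy as np

# Matlab's result, copied from the question above
beta_arma = np.array([-21.7814, -15.9156, -21.5420])

F = lambda beta: np.sum(np.exp(beta)) + 3

print(F(beta_arma))  # ≈ 3, the infimum of F
```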

When I tried it in Python I got a value error:

ValueError: Method 'lm' doesn't work when the number of residuals is less than the number of variables.

import numpy as np
from scipy.optimize import least_squares as lsq

# substitution for my long, convoluted function
F = lambda beta: np.sum(np.exp(beta))+3 

beta_initial = [-0.7823, -0.1441, -0.7669]

beta_arma = lsq(F, beta_initial,method='lm')['x']

The way I understand the error, scipy requires that for out = F(in), len(out) >= len(in), while Matlab does not.

I've looked into the docs for both scipy and Matlab.

From the scipy doc:

Method 'lm' (Levenberg-Marquardt) calls a wrapper over least-squares algorithms implemented in MINPACK (lmder, lmdif). It runs the Levenberg-Marquardt algorithm formulated as a trust-region type algorithm. The implementation is based on paper [JJMore], it is very robust and efficient with a lot of smart tricks. It should be your first choice for unconstrained problems. Note that it doesn't support bounds. Also it doesn't work when m < n.

It looks like scipy has no LM implementation that works when m < n.
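That said, method='lm' is not the only option: the default 'trf' (Trust Region Reflective) method of least_squares does accept m < n. Using 'trf' instead of LM is my suggestion, not something from the original question; a sketch:

```python
import numpy as np
from scipy.optimize import least_squares

# Same scalar objective as in the question
F = lambda beta: np.sum(np.exp(beta)) + 3

beta_initial = [-0.7823, -0.1441, -0.7669]

# 'trf' accepts fewer residuals than variables (m < n), unlike 'lm';
# like lsqnonlin, it minimizes the sum of squared residuals,
# here simply F(beta)**2
res = least_squares(F, beta_initial, method='trf')

print(res.x, F(res.x))  # F(res.x) approaches its infimum of 3
```

This is not LM, but for this problem it reproduces the behaviour of the Matlab call: squaring the strictly positive scalar F does not move the minimizer.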

My question is:

How can I do non-linear least-squares minimization with LM in Python, the way Matlab's lsqnonlin does?

I found a work-around by splitting my function into two:

  • The first function takes an array and returns an array
  • The second function takes the processed array from the first function and returns the scalar output

I've then let the optimizer run on the first function.

In the context of the minimal example from above:

import numpy as np
from scipy.optimize import least_squares as lsq

F1 = lambda beta: np.exp(beta)                          # returns an array of residuals
F2 = lambda processed_beta: np.sum(processed_beta) + 3  # processed array -> scalar


beta_initial = [-0.7823, -0.1441, -0.7669]

# parameters that minimize the sum of squares of F1
beta_arma = lsq(F1, beta_initial, method='lm')['x']

F2(F1(beta_arma))  # 3.0
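The split works here because driving the residual vector exp(beta) to zero also minimizes the scalar objective. When the objective is genuinely scalar and cannot be decomposed into residuals, a general-purpose minimizer may be the more direct route; a sketch with scipy.optimize.minimize and BFGS (a substitute for LM, not mentioned in the question):

```python
import numpy as np
from scipy.optimize import minimize

# The original scalar objective from the question
F = lambda beta: np.sum(np.exp(beta)) + 3

beta_initial = [-0.7823, -0.1441, -0.7669]

# BFGS works directly on scalar functions; no residual vector is needed
res = minimize(F, beta_initial, method='BFGS')

print(res.x, res.fun)  # res.fun approaches the infimum 3
```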
