Non-linear Least Squares: Reproducing Matlab's lsqnonlin with scipy.optimize.least_squares using Levenberg-Marquardt
I am trying to minimize a function that takes a 1-D array of length N and returns a scalar via Levenberg-Marquardt (:= LM).
It works in Matlab:
beta_initial = [-0.7823, -0.1441, -0.7669];
% substitution for my long, convoluted function
% but it also works with the proper function
F = @(beta) sum(exp(beta))+3;
options = optimset('Algorithm','Levenberg-Marquardt');
beta_arma = lsqnonlin(F,beta_initial,[],[],options) % -21.7814 -15.9156 -21.5420
F(beta_arma) % 3
When I tried it in Python I got a ValueError:
ValueError: Method 'lm' doesn't work when the number of residuals is less than the number of variables.
import numpy as np
from scipy.optimize import least_squares as lsq
# substitution for my long, convoluted function
F = lambda beta: np.sum(np.exp(beta))+3
beta_initial = [-0.7823, -0.1441, -0.7669]
beta_arma = lsq(F, beta_initial, method='lm')['x']
The way I understand the error, scipy requires that out = F(in) satisfies len(out) >= len(in), yet Matlab doesn't.
I've looked into the docs, both scipy and Matlab.
From the scipy doc:
Method 'lm' (Levenberg-Marquardt) calls a wrapper over least-squares algorithms implemented in MINPACK (lmder, lmdif). It runs the Levenberg-Marquardt algorithm formulated as a trust-region type algorithm. The implementation is based on paper [JJMore], it is very robust and efficient with a lot of smart tricks. It should be your first choice for unconstrained problems. Note that it doesn't support bounds. Also it doesn't work when m < n.
It looks like there is no LM implementation in scipy that works when m < n.
My question is:
How can I get non-linear least-squares minimization using LM in Python, like in Matlab?
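One option I can think of (a sketch of my own, not something taken from the scipy or Matlab docs) is to keep `method='lm'` but pad the residual vector with zeros until m == n. The padded entries contribute nothing to the sum of squares, so the minimized quantity is still F(beta)**2:

```python
import numpy as np
from scipy.optimize import least_squares as lsq

# same scalar function as in the minimal example
F = lambda beta: np.sum(np.exp(beta)) + 3
beta_initial = [-0.7823, -0.1441, -0.7669]

# pad the scalar residual with zeros so that m == n, which 'lm' accepts;
# the zero entries add nothing to the sum of squares being minimized
F_padded = lambda beta: np.concatenate(([F(beta)], np.zeros(len(beta) - 1)))

beta_arma = lsq(F_padded, beta_initial, method='lm')['x']
```

Note this minimizes F(beta)**2 rather than F(beta) itself; the two coincide here because F(beta) >= 3 > 0 for all beta.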
I found a work-around by splitting my function into two:
I then let the optimizer run on the first function.
In the context of the minimal example from above:
import numpy as np
from scipy.optimize import least_squares as lsq
F1 = lambda beta: np.exp(beta)              # residual vector, one entry per component of beta
F2 = lambda beta: np.sum(np.exp(beta)) + 3  # the original scalar objective
beta_initial = [-0.7823, -0.1441, -0.7669]
# parameters that minimize the residuals F1
beta_arma = lsq(F1, beta_initial, method='lm')['x']
F2(beta_arma) # 3.0
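For completeness: if LM is not strictly required, scipy's default `method='trf'` does accept m < n, so the original scalar function can be passed in directly (again a sketch of mine, not from either doc). Like the work-around, this minimizes F(beta)**2 instead of F(beta), which is equivalent here since F(beta) > 0:

```python
import numpy as np
from scipy.optimize import least_squares as lsq

F = lambda beta: np.sum(np.exp(beta)) + 3
beta_initial = [-0.7823, -0.1441, -0.7669]

# 'trf' (the default method) allows fewer residuals than variables,
# so the scalar output of F is accepted as a single residual
res = lsq(F, beta_initial)   # method='trf' by default
F(res.x)                     # should approach 3 as exp(beta) -> 0
```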