
Options of the function scipy.optimize.minimize

I am trying to minimize a very long function (a sum of 500,000 sub-function terms) in order to fit some parameters to a probabilistic model. I use the scipy.optimize.minimize function. I tried both the Powell and Nelder-Mead algorithms, and Powell looks much faster in my setting. But I still don't understand how to force the process to give me some result after a given time, even if it is not "optimal".

I set the options maxiter, maxfev, xtol and ftol, but I don't really understand them: I put a print inside my function and noticed that the algorithm evaluates it more than maxfev times, yet when it reaches maxiter it stops with the message "max number of iterations reached".

Can anyone explain to me how these options work for the two algorithms I am using? The documentation is very unclear.
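For what it's worth, the two counters are reported separately in the OptimizeResult that minimize returns: nit is the number of iterations and nfev the number of objective evaluations. A minimal sketch (using scipy's built-in rosen function as a stand-in objective, not the model from my code):

```python
from scipy.optimize import minimize, rosen

# maxiter caps iterations; each iteration may call the objective many times.
res = minimize(rosen, [1.3, 0.7], method='Nelder-Mead',
               options={'maxiter': 10, 'disp': False})
print(res.nit)   # iterations performed (at most 10 here)
print(res.nfev)  # objective evaluations; typically larger than res.nit
```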

My code: 我的代码:

import numpy as np
from scipy.optimize import minimize

def log_likelihood(r, alpha, a, b, customers):
    # Reject invalid parameter values
    if r <= 0 or alpha <= 0 or a <= 0 or b <= 0:
        return -np.inf
    c = sum(log_likelihood_individual(r, alpha, a, b, x, tx, t)
            for x, tx, t in customers)
    print(-c)
    return c

negative_ll = lambda params: -log_likelihood(*params, customers=customers)
params0 = (1, 1, 1, 1)
res = minimize(negative_ll, params0, method='Powell', callback=print_callback,
               options={'disp': True, 'ftol': 0.05, 'maxiter': 3, 'maxfev': 15})

Thank you.

You should probably ask this on the scipy mailing list, or even the scipy developer mailing list, but looking at the source code for the Nelder-Mead algorithm, I notice the actual checks on maxiter and maxfev are in the outer while loop. The function is called several times inside that loop, so the actual number of function evaluations can easily exceed maxfev. Something similar happens inside the main loop of Powell's method: there, it appears the function is evaluated along all N search directions (N being the number of parameters) before the evaluation count is tested.

I guess this is done because otherwise there would be too many if statements checking against maxfev inside the core loops, and it was deemed faster/clearer to keep the check outside the inner loops.
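As for getting some result after a given time: as far as I know, minimize has no time-limit option for these methods, but one workaround is to track the best point seen so far and raise from inside the objective once the budget is spent. A sketch under that assumption (TimeBudgetExceeded, make_budgeted and slow_objective are names I made up; slow_objective stands in for your negative_ll):

```python
import time
import numpy as np
from scipy.optimize import minimize

class TimeBudgetExceeded(Exception):
    pass

def make_budgeted(f, seconds):
    # Wrap f so it aborts after `seconds` of wall-clock time,
    # remembering the best point evaluated so far.
    start = time.time()
    best = {'x': None, 'fun': np.inf}
    def wrapped(x):
        if time.time() - start > seconds:
            raise TimeBudgetExceeded
        val = f(x)
        if val < best['fun']:
            best['x'], best['fun'] = np.copy(x), val
        return val
    return wrapped, best

def slow_objective(x):
    return np.sum((x - 3.0) ** 2)

wrapped, best = make_budgeted(slow_objective, seconds=5.0)
try:
    res = minimize(wrapped, np.ones(4), method='Powell')
    best['x'], best['fun'] = res.x, res.fun
except TimeBudgetExceeded:
    pass  # best['x'] holds the best parameters found within the budget
```

Either way, best['x'] is usable afterwards, whether the optimizer converged or the budget ran out first.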

