Find global minimum using scipy.optimize.minimize

Given a 2D point p, I'm trying to calculate the smallest distance between that point and a functional curve, i.e., find the point on the curve which gives me the smallest distance to p, and then calculate that distance. The example function that I'm using is

f(x) = 2*sin(x)

My function for the distance between some point p and a provided function is

import numpy as np

def dist(p, x, func):
    x = np.append(x, func(x))                     # the curve point (x, func(x))
    return sum((i - j)**2 for i, j in zip(x, p))  # squared Euclidean distance to p

It takes as input the point p, a position x on the function, and the function handle func. Note that this is a squared Euclidean distance (since minimizing in Euclidean space is equivalent to minimizing in squared Euclidean space).
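For example, a call might look like this (the point and position below are just placeholder values):

p = np.array([1.0, -2.0])              # placeholder query point
dist(p, 1.5, lambda x: 2*np.sin(x))    # squared distance from p to (1.5, 2*sin(1.5))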

The crucial part is that I want to be able to provide bounds for my function, so really I'm finding the closest distance to a function segment. For this example my bounds are

bounds = [0, 2*np.pi]

I'm using the scipy.optimize.minimize function to minimize my distance function, subject to the bounds. A result of the above process is shown in the graph below.
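(For reference, a minimal sketch of such a bounded call, with an assumed query point and initial guess; dist and the curve are as defined above:)

from scipy.optimize import minimize

f = lambda x: 2*np.sin(x)                  # the example curve
p = np.array([1.0, -2.0])                  # example query point (assumed)
res = minimize(lambda x: dist(p, x, f),    # minimize over x with p and f fixed
               x0=[np.pi],                 # initial guess (assumed)
               bounds=[(0, 2*np.pi)])
# res.x is the closest x found, res.fun the squared distance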

[Figure: contour plot of distance to the curve]

This is a contour plot showing the distance from the sin function. Notice how there appears to be a discontinuity in the contours. For convenience, I've plotted a few points around that discontinuity and the "closest" points on the curve that they map to.

What's actually happening here is that the scipy function is finding a local minimum (given some initial guess), but not a global one, and that is causing the discontinuity. I know that guaranteeing the global minimum of an arbitrary function is impossible, but I'm looking for a more reliable way to find the global minimum.

Possible methods for finding a global minimum would be

  1. Choose a smart initial guess, but this amounts to knowing approximately where the global minimum is to begin with, which is using the solution of the problem to solve it.
  2. Use multiple initial guesses and choose the answer which reaches the best minimum (see the multi-start sketch after this list). This, however, seems like a poor choice, especially when my functions get more complicated (and higher dimensional).
  3. Find the minimum, then perturb the solution and find the minimum again, hoping that I may have knocked it into a better minimum. I'm hoping there is some way to do this simply, without invoking some complicated MCMC algorithm or something like that. Speed counts for this process.
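A minimal multi-start sketch for option 2 (the wrapper name and number of starts are my own, and dist is the function above):

import numpy as np
from scipy.optimize import minimize

def multistart_min(p, f, n_starts=8):
    # try several evenly spaced starting points and keep the best local result
    best = None
    for x0 in np.linspace(0, 2*np.pi, n_starts):
        res = minimize(lambda x: dist(p, x, f), x0=[x0], bounds=[(0, 2*np.pi)])
        if best is None or res.fun < best.fun:
            best = res
    return best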

Any suggestions about the best way to go about this, or possibly directions to useful functions that may tackle this problem, would be great!

As suggested in a comment, you could try a global optimization algorithm such as scipy.optimize.differential_evolution. However, in this case, where you have a well-defined and analytically tractable objective function, you can employ a semi-analytical approach, taking advantage of the first-order necessary conditions for a minimum.
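A minimal sketch of the differential_evolution route, reusing the dist function and curve from the question (the query point is an assumed example):

import numpy as np
from scipy.optimize import differential_evolution

f = lambda x: 2*np.sin(x)
p = np.array([1.0, -2.0])    # example query point (assumed)
res = differential_evolution(lambda x: dist(p, x, f), bounds=[(0, 2*np.pi)])
# res.x holds the minimizing x on the segment, res.fun the squared distance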

In the following, the first function is the distance metric and the second function is (the numerator of) its derivative with respect to x, which should be zero if a minimum occurs at some 0 < x < 2*np.pi.
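Writing the squared distance out and differentiating (the chain-rule step behind diff_d below):

d(x, p)  = (x - p[0])**2 + (2*sin(x) - p[1])**2
d'(x, p) = 2*(x - p[0]) + 4*cos(x)*(2*sin(x) - p[1])
         = 2*x - 2*p[0] + 4*sin(2*x) - 4*p[1]*cos(x)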

import numpy as np

def d(x, p):
    # squared Euclidean distance between the point p and the curve point (x, 2*sin(x))
    return np.sum((p - np.array([x, 2*np.sin(x)]))**2)

def diff_d(x, p):
    # derivative of d with respect to x; zero at any interior minimum
    return -2*p[0] + 2*x - 4*p[1]*np.cos(x) + 4*np.sin(2*x)
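As a quick sanity check, diff_d can be compared against a central finite difference of d (the test point and step size are arbitrary assumptions):

p = np.array([1.0, -2.0])                           # arbitrary test point (assumed)
x, eps = 1.3, 1e-6
print(diff_d(x, p))                                 # analytic derivative
print((d(x + eps, p) - d(x - eps, p)) / (2*eps))    # numerical derivative; should agree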

Now, given a point p, the only potential minimizers of d(x,p) are the roots of diff_d(x,p) (if any), as well as the boundary points x=0 and x=2*np.pi. It turns out that diff_d may have more than one root. Since the derivative is a continuous function, the pychebfun library offers a very efficient method for finding all of its roots, avoiding cumbersome approaches based on the scipy root-finding algorithms.

The following function provides the minimum of d(x, p) for a given point p:

import pychebfun

def min_dist(p):
    # approximate diff_d by a Chebyshev interpolant on [0, 2*pi] and find all of its roots
    f_cheb = pychebfun.Chebfun.from_function(lambda x: diff_d(x, p), domain=(0, 2*np.pi))
    potential_minimizers = np.r_[0, f_cheb.roots(), 2*np.pi]
    # evaluate d at the interior critical points and the boundary, and keep the smallest
    return np.min([d(x, p) for x in potential_minimizers])
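For example, one could evaluate it on a grid to reproduce the contour plot (the grid ranges here are assumptions):

xs = np.linspace(0, 2*np.pi, 100)
ys = np.linspace(-3, 3, 100)
Z = np.array([[min_dist(np.array([x, y])) for x in xs] for y in ys])   # squared min distance on the grid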

Here is the result:

[Resulting plot]
