
Maximizing a function f(x) without evaluating the derivative

I'm an experimental physicist, trying to automate a relatively simple (but sensitive) optimization in my measurements that is currently done completely manually and takes up a lot of my time. After some thinking and tips in the comments, I've reduced the problem to the following:

There is some function f(x) that I want to maximize. However, I can only evaluate f(x); I cannot evaluate its derivative explicitly. Moreover, I cannot sample a large range of x; if f(x) < threshold, I am in trouble (and it takes me over a day to recover). Luckily, I have one starting value x_0 such that f(x_0) > threshold, and I can guess some initial step size eps for which f(x_0 + eps) > threshold also holds (however, I don't know whether f(x_0 + eps) is greater or less than f(x_0) before evaluating it). Could someone suggest an algorithmic/adaptive/feedback protocol to find the x that maximizes f(x) up to some tolerance x_tol? So far I've found the golden-section search, but that requires choosing some range (a, b) over which to maximize, which I cannot do; I can't start from a wide range, as that might bring me below my threshold.
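For reference, here is a minimal Python sketch of what the golden-section step would look like once a safe bracket (a, b) above the threshold has somehow been obtained. It is a textbook routine, not a solution to the hard part described above (finding that bracket without ever dipping below the threshold):

```python
import math

def golden_section_max(f, a, b, x_tol):
    """Golden-section search for a maximum of f on [a, b].
    Only usable once a bracket (a, b) known to be safe is available."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~= 0.618
    c = b - invphi * (b - a)                 # lower interior point
    d = a + invphi * (b - a)                 # upper interior point
    fc, fd = f(c), f(d)
    while abs(b - a) > x_tol:
        if fc > fd:                          # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2
```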

How I currently do it manually is as follows: I evaluate f(x_0) and then f(x_0 + eps). If this leads to a decrease, I evaluate f(x_0 - eps) instead. Based on the gradient (essentially I just look at whether the steps in f are large or small relative to the threshold I cannot cross), I either increase or decrease eps and continue searching in the same direction until a maximum is found, which I notice because f(x) starts decreasing. I then go back to that maximum. This way I am always probing the top part of the peak and thus remain in a safe range.
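The manual procedure above is essentially an adaptive hill climb, and could be sketched in Python roughly as follows. `cautious_hill_climb` is a hypothetical helper, not a library routine; f, x_0, eps, threshold and x_tol are the quantities defined in the question, and the sketch assumes each probe stays above the threshold, just as the manual version does:

```python
def cautious_hill_climb(f, x0, eps, threshold, x_tol):
    """Walk uphill from x0 in steps of eps; whenever a step would
    decrease f, reverse direction and halve the step, so we are
    always probing near the top of the peak.  Stop once the step
    has shrunk below x_tol."""
    x, fx = x0, f(x0)
    step = eps
    while abs(step) > x_tol:
        x_new = x + step
        f_new = f(x_new)
        if f_new < threshold:
            raise RuntimeError("probe fell below the safety threshold")
        if f_new > fx:               # still climbing: accept and continue
            x, fx = x_new, f_new
        else:                        # passed the maximum: back up, refine
            step = -step / 2.0
    return x
```

Once the step has shrunk below x_tol, the last accepted x together with the two most recent probes also gives a natural bracket on which a golden-section refinement like the one sketched earlier could run.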

I would say you need to define the problem or break it down, for example into finding a local optimum, or use gradient descent.
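To make the gradient-descent suggestion concrete when only f(x) can be evaluated, the derivative can be approximated by a central finite difference. A hedged sketch, where h, lr and n_steps are hypothetical tuning parameters and both probes x ± h must stay above the safety threshold:

```python
def finite_difference_ascent(f, x0, h, lr, n_steps):
    """Gradient *ascent* on f using a numerical derivative,
    since the goal here is maximization."""
    x = x0
    for _ in range(n_steps):
        grad = (f(x + h) - f(x - h)) / (2 * h)   # central-difference f'(x)
        x += lr * grad                           # step uphill
    return x
```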

