scipy curve_fit incorrect for large X values

To determine trends over time, I use scipy curve_fit with X values from time.time(), for example 1663847528.7147126 (1.6 billion). Doing a linear fit sometimes produces erroneous results, and providing approximate initial p0 values doesn't help. I found the magnitude of X to be a crucial element for this error, and I wonder why?

Here is a simple snippet that shows a working and a non-working X offset:

import scipy.optimize

def fit_func(x, a, b):
    return a + b * x

y = list(range(5))

x = [1e8 + a for a in range(5)]
print(scipy.optimize.curve_fit(fit_func, x, y, p0=[-x[0], 0]))
# Result is correct:
#   (array([-1.e+08,  1.e+00]), array([[ 0., -0.],
#          [-0.,  0.]]))

x = [1e9 + a for a in range(5)]
print(scipy.optimize.curve_fit(fit_func, x, y, p0=[-x[0], 0.0]))
# Result is not correct:
#   OptimizeWarning: Covariance of the parameters could not be estimated
#   warnings.warn('Covariance of the parameters could not be estimated',
#   (array([-4.53788811e+08,  4.53788812e-01]), array([[inf, inf],
#          [inf, inf]]))

An almost perfect p0 for b removes the warning, but curve_fit still doesn't work:
print(scipy.optimize.curve_fit(fit_func, x, y, p0=[-x[0], 0.99]))
# Result is not correct:
#   (array([-7.60846335e+10,  7.60846334e+01]), array([[-1.97051972e+19,  1.97051970e+10],
#          [ 1.97051970e+10, -1.97051968e+01]]))
   
# ...but perfect p0 works
print(scipy.optimize.curve_fit(fit_func, x, y, p0=[-x[0], 1.0]))
#(array([-1.e+09,  1.e+00]), array([[inf, inf],
#       [inf, inf]]))

As a side question, is there perhaps a more efficient method for a linear fit? Sometimes, though, I want to find a second-order polynomial fit.

Tested with Python 3.9.6 and SciPy 1.7.1 under Windows 10.

If you just need to compute a linear fit, I believe curve_fit is not necessary and I would just use SciPy's linregress function instead:

>>> from scipy import stats

>>> y = list(range(5))

>>> x = [1e8 + a for a in range(5)]
>>> stats.linregress(x, y)
LinregressResult(slope=1.0, intercept=-100000000.0, rvalue=1.0, pvalue=1.2004217548761408e-30, stderr=0.0, intercept_stderr=0.0)

>>> x2 = [1e9 + a for a in range(5)]
>>> stats.linregress(x2, y)
LinregressResult(slope=1.0, intercept=-1000000000.0, rvalue=1.0, pvalue=1.2004217548761408e-30, stderr=0.0, intercept_stderr=0.0)

In general, if you need a polynomial fit, I would use NumPy's polyfit.
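For instance, a minimal degree-2 sketch; the centering mirrors the normalization idea explained under Root cause below, since huge raw x values make the Vandermonde matrix badly conditioned:

import numpy as np

y = np.arange(5.0)
x = 1e9 + y

# Center and scale x before building the Vandermonde matrix.
xm, xs = x.mean(), x.std()
c2, c1, c0 = np.polyfit((x - xm) / xs, y, deg=2)  # highest power first
print(c2, c1, c0)  # ~0.0, ~1.41421356, ~2.0, since y is actually linear in x

# Evaluate the fitted polynomial at the original x values:
print(np.polyval([c2, c1, c0], (x - xm) / xs))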

Root cause

You are facing two problems:

  • Fitting procedures are scale sensitive. This means the units chosen for a particular variable (e.g. µA instead of kA) can artificially prevent an algorithm from converging properly (e.g. when one variable is several orders of magnitude bigger than another and dominates the regression);
  • Float arithmetic error. When switching from 1e8 to 1e9 you hit the magnitude at which this kind of error becomes predominant.

The second one is very important to realize. Let's say you are limited to an 8-significant-digit representation: then 1 000 000 000 and 1 000 000 001 are the same number, as both are truncated to 1.0000000e9, and we cannot accurately represent 1.000000001e9, which requires more significant digits. This is why your second example fails.
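A minimal sketch of the same effect in NumPy, using float32 (about 7 significant digits) so the threshold is easy to see; float64 hits the same wall, just at larger magnitudes:

import numpy as np

a = np.float32(1e9)
print(a + np.float32(1) == a)  # True: adding 1 is completely lost at this magnitude
print(np.spacing(a))           # 64.0: the gap between adjacent float32 values near 1e9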

Additionally, you are using a non-linear least squares algorithm to solve a linear least squares problem, although this is not the cause of your issue.
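For reference, a minimal sketch of a direct linear least squares solve with np.linalg.lstsq; the centering of x borrows the normalization idea developed below:

import numpy as np

y = np.arange(5.0)
x = 1e9 + y

# Design matrix for y = a + b*x, with the x column centered to keep
# the problem well conditioned despite the large offset.
A = np.column_stack([np.ones_like(x), x - x.mean()])
(a_centered, b), *_ = np.linalg.lstsq(A, y, rcond=None)
a = a_centered - b * x.mean()  # undo the centering on the intercept
print(a, b)                    # approximately -1e9 and 1.0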

You have two solutions:

  • Increase the machine precision while performing computations;
  • Normalize your problem.

I'll choose the second one as it is more generic.

Normalization

To mitigate both problems, a common solution is normalization. In your case a simple standardization is enough:

import numpy as np
import scipy.optimize

y = np.arange(5)
x = 1e9 + y

def fit_func(x, a, b):
    return a + b * x

xm = np.mean(x)         # 1000000002.0
xs = np.std(x)          # 1.4142135623730951

result = scipy.optimize.curve_fit(fit_func, (x - xm)/xs, y)

# (array([2.        , 1.41421356]),
# array([[0., 0.],
#        [0., 0.]]))

# Back transformation (a is the intercept, b the slope, as in fit_func):
b = result[0][1]/xs                    # slope: 1.0
a = result[0][0] - xm*result[0][1]/xs  # intercept: -1000000000.0

Or the same result using the sklearn interface:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LinearRegression

pipe = Pipeline([
    ("scaler", StandardScaler()),
    ("regressor", LinearRegression())
])

pipe.fit(x.reshape(-1, 1), y)

pipe.named_steps["scaler"].mean_          # array([1.e+09])
pipe.named_steps["scaler"].scale_         # array([1.41421356])
pipe.named_steps["regressor"].coef_       # array([1.41421356])
pipe.named_steps["regressor"].intercept_  # 2.0

Back transformation

Indeed, when normalizing, the fit result is expressed in terms of the normalized variable. To get the required fit parameters, you just need a bit of math to convert the regressed parameters back into the original variable scales.

Simply write down and solve the transformation:

 y = a' + b'*x'
x' = (x - m)/s
 y = a + b*x

Which gives you the following solution:

b = b'/s
a = a' - m/s*b'
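Applied to the sklearn pipeline above, for instance (a small sketch reusing the fitted pipe):

s = pipe.named_steps["scaler"].scale_[0]       # 1.41421356
m = pipe.named_steps["scaler"].mean_[0]        # 1000000002.0
bp = pipe.named_steps["regressor"].coef_[0]    # b' = 1.41421356
ap = pipe.named_steps["regressor"].intercept_  # a' = 2.0

b = bp / s           # slope in original units: 1.0
a = ap - m / s * bp  # intercept: -1000000000.0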
