
Improve curve fit to data in R

Having trouble fitting an appropriate curve to this data.

x <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 21, 31, 41, 51, 61, 71, 
81, 91, 110, 210, 310, 410, 510, 610, 710, 810, 910, 1100, 2100, 
3100, 4100, 5100, 6100, 7100, 8100, 9100)

y <- c(75, 84, 85, 89, 88, 91, 92, 92, 93, 92, 94, 95, 95, 96, 95, 
95, 94, 97, 97, 97, 98, 98, 98, 99, 99, 99, 99, 99, 99, 99, 99, 
99, 99, 99, 99, 99, 99)

Tried so far:

fit1 <- lm(y~log(x)+I(1/x))
fit2 <- lm(y~log(x)+I(1/x)+x)

plot(x,y, log="x")
lines(0.01:10000, predict(fit1, newdata = data.frame(x=0.01:10000)))
lines(0.01:10000, predict(fit2, newdata = data.frame(x=0.01:10000)), col='red')

[plot: the data on a log x-axis with the fit1 and fit2 curves]

The fits are OK, but they were arrived at entirely empirically, and there is room for improvement. I did not find loess or splines to be any better.

The concrete goal is to increase the R^2 of the fit and improve the regression diagnostics (e.g., the Q-Q plot of the residuals).
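
For reference, a minimal sketch of how to read those diagnostics off the two lm fits above:

summary(fit1)$r.squared                    # R^2 of each fit
summary(fit2)$r.squared
qqnorm(resid(fit1)); qqline(resid(fit1))   # Q-Q plot of the residuals
qqnorm(resid(fit2)); qqline(resid(fit2))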

Edit: Expected model: this is sampling data, where more samples (x) improve the accuracy of the estimate (y); it would saturate at 100%.
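
For example, a simple saturating form could be fitted directly with nls; the particular power-decay shape below is only an illustrative assumption, not a claim about the underlying sampling process:

# Sketch: equals 100 - a at x = 1 and saturates at 100 as x grows.
# Start values are rough guesses from the data (y is about 75 at x = 1).
fit3 <- nls(y ~ 100 - a * x^(-p), start = list(a = 25, p = 0.3))
summary(fit3)
lines(0.01:10000, predict(fit3, newdata = data.frame(x = 0.01:10000)), col = 'blue')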

This would be my guess for the function, with the corresponding fit in Python:

# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as so


def f(x, a, b, s, p):
    # Shifted saturating curve: f(1) = a, approaching a + b for large x.
    return a + b * s * (x - 1) / (1 + (s * (x - 1))**abs(1 / p))**abs(p)


def g(x, a, s, p):
    # Passes through (0, 0) and saturates at a.
    return a * s * x / (1 + (s * x)**abs(1 / p))**abs(p)


def h(x, s, p):
    # Passes through (0, 0) and saturates at the fixed value 100.
    return 100 * s * x / (1 + (s * x)**abs(1 / p))**abs(p)


xData = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 21, 31, 41, 51, 61, 71,
         81, 91, 110, 210, 310, 410, 510, 610, 710, 810, 910, 1100, 2100,
         3100, 4100, 5100, 6100, 7100, 8100, 9100]

yData = [75, 84, 85, 89, 88, 91, 92, 92, 93, 92, 94, 95, 95, 96, 95,
         95, 94, 97, 97, 97, 98, 98, 98, 99, 99, 99, 99, 99, 99, 99, 99,
         99, 99, 99, 99, 99, 99]

xList = np.logspace(0, 5, 100)

# f needs start values: a ~ yData[0], b ~ yData[-1] - yData[0], s = p = 1.
bestFitF, err = so.curve_fit(f, xData, yData, p0=[75, 25, 1, 1])
bestFitG, err = so.curve_fit(g, xData, yData)
bestFitH, err = so.curve_fit(h, xData, yData)

# np.float was removed from NumPy; use the builtin float instead.
fList = np.fromiter((f(x, *bestFitF) for x in xList), float)
gList = np.fromiter((g(x, *bestFitG) for x in xList), float)
hList = np.fromiter((h(x, *bestFitH) for x in xList), float)

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)

ax.plot(xData, yData, marker='o', linestyle='')
ax.plot(xList, fList, linestyle='-.', label='f')
ax.plot(xList, gList, linestyle='-.', label='g')
ax.plot(xList, hList, linestyle='-.', label='h')

ax.set_xscale('log')
ax.legend(loc=0)
plt.show()

[plot: the three fits f, g, and h over the data]

Function f requires start values; g and h don't. It should be possible to write code that guesses the parameters: the first is basically yData[0], the second is yData[-1] - yData[0], and the others don't matter and can just be set to 1. Here, though, I set them manually.

Both g and h pass through (0, 0). Additionally, h saturates at 100.

Note: more parameters naturally give a better fit, but if this is, e.g., a CDF, you probably want a fixed saturation value, and maybe the pass through (0, 0) as well.

This might be an acceptable fit to the Gunary equation, with an R-squared value of 0.976:

y = x / (a + bx + cx^0.5)

Fitting target of lowest sum of squared absolute error = 2.4509677507601545E+01

a =  1.2327255760994933E-03
b =  1.0083740273268828E-02
c =  1.9179200839782879E-03
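
To reproduce this in R, a minimal nls sketch, seeding the optimizer with the coefficients reported above as start values, could be:

gunary <- nls(y ~ x / (a + b * x + c * sqrt(x)),
              start = list(a = 1.2e-3, b = 1.0e-2, c = 1.9e-3))
summary(gunary)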

The R package drc has many options.

Here is a 5-parameter log-logistic model, which yields lower residuals than the fits in the question.

BONUS: It has a self-starter function, so you avoid the challenge of finding initial values for non-linear regression.

library(drc)
dosefit <- drm(y ~ x, fct = LL2.5())
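
To inspect the result, a minimal sketch using drc's plot and summary methods (plot draws the data together with the fitted curve):

plot(dosefit, log = "x")
summary(dosefit)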

[plot of the drc fit]
