
Can I test autocorrelation from the generalized least squares model?

I am trying to use a generalized least squares model (`gls` in R) on my panel data to deal with an autocorrelation problem. I do not want to include lags of any variables.

I am trying to use the Durbin-Watson test (`dwtest` in R) to check for autocorrelation in my generalized least squares model. However, I find that `dwtest` does not work on objects returned by `gls`, while it does work with other model-fitting functions such as `lm`.

Is there a way to check for autocorrelation in my `gls` model?

The Durbin-Watson test is designed to check for the presence of autocorrelation in standard least-squares models (such as one fitted by `lm`). If autocorrelation is detected, one can then capture it explicitly in the model using, for example, generalized least squares (`gls` in R). My understanding is that Durbin-Watson is not appropriate for then testing "goodness of fit" of the resulting model, because `gls` residuals may no longer follow the same distribution as residuals from a standard `lm` model. (Somebody with deeper knowledge of statistics should correct me if I'm wrong.)

With that said, the `durbinWatsonTest` function from the `car` package will accept an arbitrary vector of residuals and return the associated test statistic. You can therefore do something like this:

library(nlme)   # provides gls()

v <- residuals( gls( ... ) )
attr(v, "std") <- NULL      # drop the extra attribute so car sees a plain vector
car::durbinWatsonTest( v )

Note that `durbinWatsonTest` computes p-values only for `lm` models (likely due to the considerations described above), but you can estimate a p-value empirically by permuting your residuals.
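A minimal sketch of that permutation approach, assuming `v` holds the residual vector from above (a simulated AR(1) series is substituted here so the snippet is self-contained). Shuffling the residuals destroys any serial ordering, so repeated shuffles give an empirical null distribution of the DW statistic, which is centered near 2 when there is no autocorrelation:

```r
library(car)

set.seed(1)
# stand-in for the gls residuals: a series with known AR(1) structure
v <- as.numeric(arima.sim(model = list(ar = 0.5), n = 200))

dw_obs  <- durbinWatsonTest(v)                          # observed DW statistic
dw_perm <- replicate(2000, durbinWatsonTest(sample(v))) # null distribution

# two-sided empirical p-value: how often is a permuted statistic
# at least as far from 2 as the observed one?
p <- mean(abs(dw_perm - 2) >= abs(dw_obs - 2))
```

With a positively autocorrelated series like this one, `dw_obs` falls well below 2 and `p` comes out small; for white-noise residuals it would not.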
