How to bootstrap a linear regression and estimate confidence intervals in R?
I have a dataset on which I run a bootstrap, where values are only resampled within the two main factors, replicate and level.
replicate level high.density low.density
1 low 14 36
1 low 54 31
1 mid 82 10
1 mid 24 NA
2 low 12 28
2 low 11 45
2 mid 12 17
2 mid NA 24
2 up 40 10
2 up NA 5
2 up 20 2
## the data frame
df <- structure(list(replicate = c(1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2), level = c("low", "low", "mid", "mid", "low", "low", "mid", "mid", "up", "up", "up"), high.density = c(14, 54, 82, 24, 12, 11, 12, NA, 40, NA, 20), low.density = c(36, 31, 10,
NA, 28, 45, 17, 24, 10, 5, 2)), class = c("spec_tbl_df","tbl_df","tbl", "data.frame"), row.names = c(NA, -11L), spec = structure(list(cols = list(replicate = structure(list(), class = c("collector_double", "collector")), level = structure(list(), class = c("collector_character","collector")), high.density = structure(list(), class = c("collector_double","collector")), low.density = structure(list(), class = c("collector_double",
"collector"))), default = structure(list(), class = c("collector_guess", "collector")), skip = 1L), class = "col_spec"))
df$replicate <- as.factor(as.numeric(df$replicate))
df$level <- as.factor(as.character(df$level))
## Create the datasets needed for the bootstrap. Values may only be shuffled within unique replicate/level combinations (credits: Dion)
df_shuffle <- function(DF) {
  my_split <- split(DF, f = ~ DF$replicate + DF$level)
  shuffle <- lapply(my_split, function(x) {
    nrX <- nrow(x)
    cbind(x[, c('replicate', 'level')],
          high.density = x[sample(seq_len(nrX), replace = TRUE), 'high.density'],
          low.density = x[sample(seq_len(nrX), replace = TRUE), 'low.density'])
  })
  DF_new <- do.call(rbind, shuffle)
  rownames(DF_new) <- NULL
  return(DF_new)
}
B <- 1000
df_list <- replicate(B, df_shuffle(df), simplify = FALSE)
df_list <- lapply(df_list, function(x) x[complete.cases(x), ]) #choose complete cases
Now I want to bootstrap these observations to estimate the coefficients, p-values and confidence intervals. I am trying to replicate the bootstrap function as in this example, and to plot the correct confidence intervals as in this example (I only need the overall bootstrap line and its confidence interval).
# example code for the plot
library(data.table)  # for rbindlist()
library(ggplot2)

df_boot <- rbindlist(df_list, idcol = 'count')
ggplot(aes(x = low.density, y = high.density), data = df_boot) +
  stat_smooth(aes(group = factor(count)), geom = "line", method = "lm",
              se = FALSE, color = "red", alpha = 0.02) +
  stat_smooth(geom = "line", method = "lm", se = FALSE,
              color = "black", linetype = "longdash") +
  theme(panel.background = element_blank()) +
  theme(legend.position = "none")
We can run either a non-parametric or a parametric bootstrap of the hypothesized linear model. The important difference between the two is that in the non-parametric case we repeatedly resample rows from the original dataframe df, whereas in the parametric case we simulate new responses from the original model.
We assume the following linear model:

high.density_i = b0 + b1 * low.density_i + e_i
It is important to first remove the rows containing missing observations listwise and only then run the bootstrap (even though, ideally, you would multiply impute the missing observations...). In the post you referred to in your question, the function df_shuffle() performs listwise deletion after resampling the data. Performing listwise deletion before resampling guarantees that every bootstrap sample has the same number of rows as df. This is a prerequisite for the bootstrap to work, and hence for drawing valid inferences from it.
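This point can be checked directly in a minimal, self-contained sketch (using hypothetical toy data, not the data from the question): after listwise deletion, every case-resampled bootstrap sample has exactly as many rows as the cleaned data.

```r
## Hypothetical toy data: listwise deletion first, then resample.
df_toy <- data.frame(x = c(1, 2, NA, 4), y = c(10, NA, 30, 40))
df_cc  <- df_toy[complete.cases(df_toy), ]  # listwise deletion

set.seed(1)
boot_sizes <- replicate(100,
  nrow(df_cc[sample(seq_len(nrow(df_cc)), replace = TRUE), ]))
all(boot_sizes == nrow(df_cc))  # TRUE: every bootstrap sample has the same size
```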
The function boot_lm() allows the user to perform either a non-parametric or a parametric bootstrap. It takes the following arguments:

original_model: the assumed linear model, fitted with lm().
original_data: the dataframe used to fit the assumed linear model.
type: a string specifying whether a parametric ('param') or a non-parametric ('ordinary') bootstrap should be performed.
B: the number of bootstrap samples to draw.
seed: an integer fixing the random number generator.

# listwise deletion
df <- df[complete.cases(df), ]
# linear model to be bootstrapped
fm0 <- lm(high.density ~ low.density, data = df)
boot_lm <- function(original_data, original_model,
                    type = c('ordinary', 'param'),
                    B = 1000L, seed = 1) {
  set.seed(seed)
  betas_original_model <- coef(original_model)
  len_coef <- length(betas_original_model)
  mat <- matrix(rep(0L, B * len_coef), ncol = len_coef)
  if (type %in% 'ordinary') {
    n_rows <- length(residuals(original_model))
    for (i in seq_len(B)) {
      boot_dat <- original_data[sample(seq_len(n_rows), replace = TRUE), ]
      mat[i, ] <- coef(lm(terms(original_model), data = boot_dat))
    }
  }
  if (type %in% 'param') {
    X <- model.matrix(delete.response(terms(original_model)),
                      data = original_data)[, -1L]
    for (i in seq_len(B)) {
      mat[i, ] <- coef(lm(unlist(simulate(original_model)) ~ X,
                          data = original_data))
    }
  }
  confints <- matrix(rep(0L, 2L * len_coef), ncol = 2L)
  pvals <- numeric(len_coef)
  for (i in seq_len(len_coef)) {
    pvals[i] <- mean(abs(mat[, i] - mean(mat[, i])) > abs(betas_original_model[i]))
    confints[i, ] <- quantile(mat[, i], c(.025, .975))
  }
  names(pvals) <- names(betas_original_model)
  out <- data.frame(estimate = betas_original_model,
                    lwr = confints[, 1], upr = confints[, 2],
                    p_value = pvals)
  return(out)
}
Output: your data
# non-parametric bootstrap
ordinary <- boot_lm(original_data = df, original_model = fm0,
type = 'ordinary', B = 1e4)
> ordinary
estimate lwr upr p_value
(Intercept) 45.1522806 16.290080 88.6969733 0.0220
low.density -0.6492639 -2.055204 0.5368766 0.2792
# --------------------------------------------------------
# parametric bootstrap
param <- boot_lm(original_data = df, original_model = fm0,
type = 'param', B = 1e4)
> param
estimate lwr upr p_value
(Intercept) 45.1522806 10.472075 79.1197394 0.0103
low.density -0.6492639 -1.971258 0.6381189 0.3245
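As a sanity-check sketch (not part of the original answer), the point estimates above can be reproduced by refitting the model directly on the complete cases, and confint() gives the classical normal-theory intervals for comparison with the percentile bootstrap intervals. The data frame below simply re-enters the eight complete rows of df so the snippet is self-contained:

```r
## The complete cases of df, re-entered by hand; refit the same model.
dat <- data.frame(
  high.density = c(14, 54, 82, 12, 11, 12, 40, 20),
  low.density  = c(36, 31, 10, 28, 45, 17, 10,  2))
fm <- lm(high.density ~ low.density, data = dat)
round(coef(fm), 4)   # matches the 'estimate' column above
confint(fm, level = 0.95)  # classical normal-theory intervals
```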
Output: mtcars
# linear model to be bootstrapped
fm1 <- lm(mpg ~ wt + cyl + qsec, data = mtcars)
ordinary <- boot_lm(original_data = mtcars, original_model = fm1,
type = 'ordinary', B = 1e4)
> ordinary
estimate lwr upr p_value
(Intercept) 29.4290521 13.8283579 41.2637258 0.0009
wt -3.8616401 -6.6867159 -2.0884969 0.0084
cyl -0.9277487 -1.9447741 0.4831599 0.1174
qsec 0.4944817 -0.1141793 1.3369213 0.1825
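As a cross-check (a sketch, not part of the original answer), the same case-resampling bootstrap for the mtcars model can also be run with the recommended boot package, whose boot.ci() likewise provides percentile intervals:

```r
library(boot)  # ships with R

# statistic: refit the model on the resampled rows and return the coefficients
coef_fun <- function(data, idx) {
  coef(lm(mpg ~ wt + cyl + qsec, data = data[idx, ]))
}

set.seed(1)
bt <- boot(mtcars, statistic = coef_fun, R = 2000)
boot.ci(bt, type = "perc", index = 2)  # percentile CI for the wt coefficient
```

The resulting percentile interval for wt should be close to the (-6.69, -2.09) reported above; small differences are expected because the resampling draws differ.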
It sounds like one of the biggest requirements is being able to plot the bootstrapped results. Here is one option.
First, make the data:
df <- structure(list(replicate = c(1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2), level = c("low", "low", "mid", "mid", "low", "low", "mid", "mid", "up", "up", "up"), high.density = c(14, 54, 82, 24, 12, 11, 12, NA, 40, NA, 20), low.density = c(36, 31, 10,
NA, 28, 45, 17, 24, 10, 5, 2)), class = c("spec_tbl_df","tbl_df","tbl", "data.frame"), row.names = c(NA, -11L), spec = structure(list(cols = list(replicate = structure(list(), class = c("collector_double", "collector")), level = structure(list(), class = c("collector_character","collector")), high.density = structure(list(), class = c("collector_double","collector")), low.density = structure(list(), class = c("collector_double",
"collector"))), default = structure(list(), class = c("collector_guess", "collector")), skip = 1L), class = "col_spec"))
df$replicate <- as.factor(as.numeric(df$replicate))
df$level <- as.factor(as.character(df$level))
Next, make some hypothetical data that move the variable of interest, low.density, across its range. Here we do this for all possible combinations of replicate and level, but you could also pick a single value for each.
hyp <- expand.grid(replicate = as.factor(c(1,2)),
level=factor(1:3, labels=c("low", "mid", "up")),
low.density = seq(min(df$low.density, na.rm=TRUE),
max(df$low.density, na.rm=TRUE),
length=25))
Then we run the bootstrap. In the loop below, we resample the data, estimate the model and then generate the predictions. If the model fit or the prediction fails, that particular draw is discarded and another one is taken, until you obtain 1000 valid draws.
res <- NULL
i <- 1
while (i <= 1000) {
  tmp <- df_shuffle(df)
  mod <- try(lm(high.density ~ low.density + replicate + level, data = tmp))
  if (!inherits(mod, "try-error")) {
    pred <- try(predict(mod, newdata = hyp))
    if (!inherits(pred, "try-error")) {
      res <- cbind(res, pred)
      i <- i + 1
    }
  }
}
Estimate the original model to obtain the predicted values:
orig <- lm(high.density ~ low.density + level + replicate, data=df)
hyp$fit <- predict(orig, newdata=hyp)
Compute quantile-based confidence intervals for each bootstrapped prediction and add them to the dataset:
cis <- t(apply(res, 1, quantile, c(.025,.975)))
hyp$lwr <- cis[,1]
hyp$upr <- cis[,2]
Finally, make the plot:
ggplot(hyp, aes(x = low.density, y = fit, ymin = lwr, ymax = upr)) +
  geom_ribbon(colour = "transparent", alpha = .25) +
  geom_line() +
  facet_grid(replicate ~ level) +
  theme_bw() +
  theme(panel.grid = element_blank())
To get smoother bounds, try more bootstrap replicates.
Note: the technical posts on this site follow the CC BY-SA 4.0 license; if you repost them, please credit this site or the original source.