I am working with a linear regression model, and I want to calculate some performance measures manually. I split my data using leave-one-out cross-validation (LOOCV).
The following R code gives me the desired results but takes a long time, since I am using a for loop over the LOOCV folds.
Is there a way to rewrite my code in a faster way, for example using the apply family of functions in R?
The dataset is downloaded from here:
library(readxl)   # read_excel()
library(L1pack)   # lad()

wdbc <- read_excel("Folds5x2_pp.xlsx")
wdbc[] <- lapply(wdbc, scale)   # standardize every column
dim(wdbc)
[1] 9568    5
head(wdbc)
1 -0.629 -0.987 1.82 -0.00952 0.521
2 0.742 0.681 1.14 -0.975 -0.586
3 -1.95 -1.17 -0.185 1.29 2.00
4 0.162 0.237 -0.508 0.228 -0.462
5 -1.19 -1.32 -0.678 1.60 1.14
6 0.888 0.404 -0.173 -0.996 -0.627
fitted_value <- rep(0, nrow(wdbc))
for (i in 1:nrow(wdbc)) {
  test     <- wdbc[i, ]                 # single held-out row
  training <- wdbc[-i, ]                # all remaining rows
  m <- lad(PE ~ ., data = training, method = "BR")
  co.data <- coef(m)
  x <- cbind(1, as.matrix(test[, !(colnames(test) %in% "PE")]))
  fitted_value[i] <- x %*% co.data      # manual prediction for row i
}
R2  <- cor(wdbc$PE, fitted_value)^2
SAD <- sum(abs(wdbc$PE - fitted_value))
c(round(SAD, 2), round(R2, 2))
NOTE 1
The data used in the question are only an example; in my project I have many high-dimensional datasets.
EDIT
Based on @Dominic van Essen's answer, I used the following R code with the parSapply function from the parallel package, but it takes even longer than the for loop.
library(parallel)
library(readxl)   # read_excel()
library(L1pack)   # lad()

mycluster <- makeCluster(detectCores() - 1)
wdbc <- read_excel("Folds5x2_pp.xlsx")
wdbc[] <- lapply(wdbc, scale)
clusterExport(mycluster, c("lad", "wdbc"))
fitted_value <- parSapply(mycluster, seq_len(nrow(wdbc)), function(i) {
  for (i in 1:nrow(wdbc)) {
    test     <- wdbc[i, ]
    training <- wdbc[-i, ]
    m <- lad(PE ~ ., data = training, method = "BR")
    co.data <- coef(m)
    x <- cbind(1, as.matrix(test[, !(colnames(test) %in% "PE")]))
  }
  return(x %*% co.data)
})
NOTE 2
I have 8 cores, and "PE" is the dependent variable in my dataset.
You can easily rewrite your loop using sapply instead of for, although, as bzki commented, this alone will not speed up your code:
# sapply version:
fitted_value = sapply(seq_len(nrow(wdbc)), function(i) {
  # put all the gubbins in here
  # ...
  return(x %*% co.data)
})
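Filling in the body for the LOOCV case: each call should fit the model on all rows except i and return the single prediction for row i. A minimal self-contained sketch on simulated data, using base R's lm() as a stand-in for lad() from L1pack (the coef()/matrix-multiply pattern is the same; the column names x1, x2 and the simulated data are made up for illustration):

```r
set.seed(1)
n <- 50
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$PE <- 1 + 2 * dat$x1 - dat$x2 + rnorm(n)

fitted_value <- sapply(seq_len(n), function(i) {
  training <- dat[-i, ]             # all rows except the left-out one
  test     <- dat[i, ]              # the single held-out row
  m <- lm(PE ~ ., data = training)  # swap in lad(..., method = "BR") here
  co.data <- coef(m)
  x <- cbind(1, as.matrix(test[, !(colnames(test) %in% "PE")]))
  as.numeric(x %*% co.data)         # exactly one fitted value per i
})

R2  <- cor(dat$PE, fitted_value)^2
SAD <- sum(abs(dat$PE - fitted_value))
```

Note that the function's last expression is the one prediction for fold i; there is no loop inside the function.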
However, if you have multiple cores available on your computer, or - even better - access to a server with many processors, then an sapply loop can easily be parallelized using parSapply from the 'parallel' package, as shown in this example:
# slow sapply loop (takes 12s):
data = 123
answer = sapply(1:12, function(i) {
  Sys.sleep(1)
  return(data + i)
})

# faster parallel version (takes 4s on my laptop with 4 cores):
library(parallel)
mycluster = makeCluster(detectCores() - 1) # leave 1 core available for system
data = 123
clusterExport(mycluster, "data") # specify variable(s) that should be available to the parallel function
answer = parSapply(mycluster, 1:12, function(i) {
  Sys.sleep(1)
  return(data + i)
})
stopCluster(mycluster)
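Applying the same pattern to the LOOCV: the crucial point is that the parallelized function must handle only the single left-out row i and return one value - the inner for loop in the EDIT code makes every worker redo the entire LOOCV, which is why it ran slower than the sequential loop. A self-contained sketch on simulated data, with lm() standing in for lad() (with lad you would additionally load L1pack on each worker, e.g. via clusterEvalQ(mycluster, library(L1pack)); the variable names here are made up for illustration):

```r
library(parallel)

set.seed(1)
n <- 50
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$PE <- 1 + 2 * dat$x1 - dat$x2 + rnorm(n)

mycluster <- makeCluster(max(1, detectCores() - 1))
clusterExport(mycluster, "dat")  # ship the data to each worker once
# with lad(): clusterEvalQ(mycluster, library(L1pack))

fitted_value <- parSapply(mycluster, seq_len(n), function(i) {
  training <- dat[-i, ]
  test     <- dat[i, ]
  m <- lm(PE ~ ., data = training)  # or lad(PE ~ ., data = training, method = "BR")
  x <- cbind(1, as.matrix(test[, !(colnames(test) %in% "PE")]))
  as.numeric(x %*% coef(m))         # one prediction per i - no inner loop
})
stopCluster(mycluster)
```

Whether this beats the sequential version depends on how expensive each fit is: each lad() fit on a large dataset is costly, so the per-fold work should dominate the communication overhead, but for very cheap fits the overhead of shipping data to workers can dominate.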