Let's assume we have a large dataset of climatic data at monthly time steps for a large number of points across the world. The dataset is shaped as a data.frame of the type:
lon, lat, data_month_1_yr_1, ..., data_month_12_yr_100
Example:
set.seed(123)
data <- data.frame(cbind(runif(10000, -180, 180), runif(10000, -90, 90)),
                   replicate(1200, runif(10000, 0, 150)))
I would like to perform a Mann-Kendall test (using trend::mk.test) over the monthly time series of each of the spatial points and collect the main statistics in a data.frame.
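For reference, trend::mk.test on a single series returns an htest object; the statistics used below can be pulled out of it like this (a minimal sketch on one synthetic series):
library(trend)
x <- runif(100, 0, 150)   # one synthetic monthly series
test <- mk.test(x)
test$p.value              # two-sided p-value
test$statistic            # z statistic
test$estimates            # named vector: S, varS, tau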
. In order to speed up this very long process I parallelized my code and wrote something like the following:
coords<-data[,1:2] #get the coordinates out of the initial dataset
names(coords)<-c("lon","lat")
data_t<- as.data.frame(t(data[,3:1202])) #each column is now the time series associated to a point
data_t$month<-rep(seq(1,12,1),100) # month index as last column of the data frame
# start the parallel processing
library(foreach)
library(doParallel)
cores=detectCores() #count cores
cl <- makeCluster(cores[1]-1) #take all the cores minus 1 not to overload the pc
registerDoParallel(cl)
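# %:% nests the two loops: for every month m, iterate over all 10000 points; results are rbind-ed together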
mk_out<- foreach(m=1:12, .combine = rbind) %:%
foreach (a =1:10000, .combine = rbind) %dopar% {
data_m<-data_t[which(data_t$month==m),]
library(trend) #need to load this every time, otherwise I get an error (don't know why)
test<-mk.test(data_m[,a])
mk_out_temp <- data.frame("lon"=coords[a,1],
"lat"=coords[a,2],
"p.value" = as.numeric(test$p.value),
"z_stat" = as.numeric(test$statistic),
"tau" = as.numeric(test$estimates[3]),
"month"= as.numeric(m))
mk_out_temp
}
stopCluster(cl)
head(mk_out)
lon lat p.value z_stat tau month
1 -76.47209 -34.09350 0.57759040 -0.5569078 -0.03797980 1
2 103.78985 -31.58639 0.64436238 0.4616081 0.03151515 1
3 -32.76831 66.64575 0.11793238 1.5635113 0.10626263 1
4 137.88627 -30.83872 0.79096910 0.2650524 0.01818182 1
5 158.56822 -67.37378 0.09595919 -1.6647673 -0.11313131 1
6 -163.59966 -25.88014 0.82325630 0.2233588 0.01535354 1
This runs just fine and gives me exactly what I am after: a matrix reporting the MK statistics for each combination of coordinates and month. Although the process is parallelized, the computation still takes a considerable amount of time.
Is there a way to speed up this process? Is there any room for using functions from the apply family?
You note that you have already fixed your problem. Still, a further speedup is obtainable using one of the following steps:
1: Copy the necessary objects to the foreach loops using .packages and .export, as in the sketch after this list. This ensures that each instance will not clash when trying to access the same memory.
2: Utilizing high-performance libraries such as the tidyverse or data.table to perform subsetting and computation.
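For the first point, a minimal sketch of what declaring the dependencies could look like, reusing the loop from the question (.packages and .export are the relevant foreach arguments; everything else mirrors the original code):
mk_out <- foreach(m = 1:12, .combine = rbind) %:%
  foreach(a = 1:10000, .combine = rbind,
          .packages = "trend",               # attaches trend on each worker; no library() call needed in the body
          .export = c("data_t", "coords")) %dopar% {
    data_m <- data_t[which(data_t$month == m), ]
    test <- mk.test(data_m[, a])
    data.frame(lon = coords[a, 1], lat = coords[a, 2],
               p.value = as.numeric(test$p.value),
               z_stat = as.numeric(test$statistic),
               tau = as.numeric(test$estimates[3]),
               month = m)
  }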
The second option is a bit more complicated but yielded a massive boost to performance on my tiny tiny laptop (performing all the calculations in roughly 1.5 minutes for the entire dataset).
Below is my added code. Note that I replaced foreach with a single parLapply function from the parallel package.
set.seed(123)
data <- data.frame(cbind(runif(10000, -180, 180), runif(10000, -90, 90)),
                   replicate(1200, runif(10000, 0, 150)))
coords<-data[,1:2] #get the coordinates out of the initial dataset
names(coords)<-c("lon","lat")
data_t<- as.data.frame(t(data[,3:1202])) #each column is now the time series associated to a point
data_t$month<-rep(seq(1,12,1),100) # month index as last column of the data frame
# start the parallel processing
library(data.table)
library(parallel)
library(trend)
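# setDT() converts a data.frame to a data.table in place, by reference (no copy is made)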
setDT(data_t)
setDT(coords)
cores=detectCores() #count cores
cl <- makeCluster(cores[1]-1) #take all the cores minus 1 not to overload the pc
#user system elapsed
#17.80 35.12 98.72
system.time({
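  # .SD holds the 10000 series columns within each month group;
  # parLapply ships one column (one point's series) at a time to the cluster workers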
  test <- data_t[, parLapply(cl, .SD, function(x) {
    unlist(
      trend::mk.test(x)[c("p.value", "statistic", "estimates")]
    )
  }), by = month] #Perform the calculations across each month
#create a column that indicates what each row is measuring
rows <- rep(c("p.value","statistic.z","estimates.S","estimates.var","estimates.tau"),12)
final_tests <- dcast( #Cast the melted structure to a nice form
melt(cbind(test,rowname = rows), #Melt the data for a better structure
id.vars = c("rowname","month"), #Grouping variables
measure.vars = paste0("V",seq.int(1,10000))), #variable names
month + variable ~ rowname, #LHS groups the data along rows, RHS decides the value columns
value.var = "value", #Which column contain values?
    drop = TRUE) #should we drop unused columns? (doesn't matter here)
#rename the columns as desired
names(final_tests) <- c("month","variable","S","tau","var","p.value","z_stat")
#finally add the coordinates
  final_tests <- cbind(final_tests,coords)
})
stopCluster(cl) #release the workers once done
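A note on the shapes involved, as an aside: for each month group, parLapply returns one length-5 vector per series (named p.value, statistic.z, estimates.S, estimates.varS, estimates.tau), and data.table stacks these into 5 rows per group; this is why rows above repeats the 5 labels 12 times. A quick sanity check:
dim(test) #should be 60 x 10001: 5 statistics x 12 months, the month column plus one column per point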
In the end the problem was easily addressed by replacing the second loop with a lapply function (inspired by this answer). The execution time is now down to just a few seconds. Vectorizing remains the best way to cut execution times in R (see this post and this one).
I share the final code here below for reference:
set.seed(123)
data<- data.frame(cbind(runif(10000,-180,180), runif(10000,-90,90)), replicate(1200, runif(10000,0,150)))
coords<-data[,1:2]
names(coords)<-c("lon","lat")
data_t<- as.data.frame(t(data[,3:1202]))
data_t$month<-rep(seq(1,12,1),100)
library(foreach)
library(doParallel)
cores=detectCores()
cl <- makeCluster(cores[1]-1) #take all the cores minus 1
registerDoParallel(cl)
mk_out<- foreach(m=1:12, .combine = rbind) %dopar% {
data_m<-data_t[which(data_t$month==m),]
library(trend)
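# a data.frame is a list of columns, so lapply() runs mk.test on each point's series in turn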
mk_out_temp <- do.call(rbind,lapply(data_m[1:10000],function(x)unlist(mk.test(x)))) #all 10000 point columns; the last column (month) is excluded
mk_out_temp <- cbind(coords,mk_out_temp,month=rep(m,dim(coords)[1]))
mk_out_temp
}
stopCluster(cl)
head(mk_out)
lon lat data.name p.value statistic.z null.value.S parameter.n estimates.S estimates.varS
1 -76.47209 -34.09350 x 0.577590398263635 -0.556907839290681 0 100 -188 112750
2 103.78985 -31.58639 x 0.644362383361713 0.461608102085858 0 100 156 112750
3 -32.76831 66.64575 x 0.117932376736468 1.56351131351662 0 100 526 112750
4 137.88627 -30.83872 x 0.79096910003836 0.265052394100912 0 100 90 112750
5 158.56822 -67.37378 x 0.0959591933285242 -1.66476728429674 0 100 -560 112750
6 -163.59966 -25.88014 x 0.823256299016955 0.223358759073802 0 100 76 112750
estimates.tau alternative method pvalg month
1 -0.037979797979798 two.sided Mann-Kendall trend test 0.577590398263635 1
2 0.0315151515151515 two.sided Mann-Kendall trend test 0.644362383361713 1
3 0.106262626262626 two.sided Mann-Kendall trend test 0.117932376736468 1
4 0.0181818181818182 two.sided Mann-Kendall trend test 0.79096910003836 1
5 -0.113131313131313 two.sided Mann-Kendall trend test 0.0959591933285242 1
6 0.0153535353535354 two.sided Mann-Kendall trend test 0.823256299016955 1