Suppose I do this:
m <- matrix(0, nrow = 20, ncol = 3)
system.time(m[1, 1:3] <- c(1,1,1))
That takes 0 seconds.
Now I do this:
m <- matrix(0, nrow = 10000000, ncol = 3)
system.time(m[1, 1:3] <- c(1,1,1))
This takes about 0.47 seconds on my system.
I need to fill a matrix of around 8.5 million rows, so at 0.47 seconds per assignment this isn't an option. Is there any way around this, other than creating many smaller sub-matrices and rbinding them together later?
Thanks!
After starting a new R session:
m <- matrix(0, nrow = 10000000, ncol = 3)
system.time(m[1, 1:3] <- c(1,1,1))
#   user  system elapsed
#      0       0       0
n <- m
system.time(m[1, 1:3] <- c(1,1,1))
#   user  system elapsed
#  0.074   0.061   0.135
The first time, m is modified in place. The second time, a copy is made, since m is also referred to by n.
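This copy-on-modify behavior can be observed directly with base R's tracemem(), which prints a message whenever the traced object is duplicated (a minimal sketch, assuming R >= 4.0 with reference counting and a build with memory profiling enabled):

```r
m <- matrix(0, nrow = 10, ncol = 3)
tracemem(m)               # start tracing duplications of m
m[1, 1:3] <- c(1, 1, 1)   # no tracemem message: m is modified in place
n <- m                    # n now shares m's memory
m[2, 1:3] <- c(2, 2, 2)   # prints a tracemem[...] line: m is copied first
untracemem(m)
```

Note that n keeps the old contents (zeros in row 2) after the copy, while m gets the new values.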
This question might be of interest. However, if you are doing a rolling regression, first check whether it is already implemented in some package. If you want to do this in Rcpp, do the whole loop in Rcpp rather than assigning to m 8.5M times from R.
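For illustration, here is a minimal sketch of that approach, assuming the Rcpp package is installed and a C++ compiler is available; fill_rows() is a hypothetical function name and the constant row values stand in for whatever the real per-row computation produces:

```r
library(Rcpp)

cppFunction('
NumericMatrix fill_rows(int n) {
  // Hypothetical example: allocate the matrix once and fill every row
  // with (1, 1, 1) in a single C++ loop, so R never re-copies it.
  NumericMatrix m(n, 3);
  for (int i = 0; i < n; ++i) {
    m(i, 0) = 1.0;
    m(i, 1) = 1.0;
    m(i, 2) = 1.0;
  }
  return m;
}
')

m <- fill_rows(8500000)   # builds the full 8.5M-row matrix in one pass
```

The loop runs entirely in compiled code and returns the finished matrix to R once, so the copy-on-modify cost is paid zero times instead of 8.5M times.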