
Slow function by groups in data.table (R)

My experimental design has trees measured in various forests, with repeated measurements across years.

library(data.table)

DT <- data.table(forest = rep(c("a", "b"), each = 6),
                 year = rep(c("2000", "2010"), each = 3),
                 id = c("1", "2", "3"),
                 size = 1:12)

# make tree ids unique across forests
DT[, id := paste0(forest, id)]

> DT
    forest year id size
 1:     a 2000 a1     1
 2:     a 2000 a2     2
 3:     a 2000 a3     3
 4:     a 2010 a1     4
 5:     a 2010 a2     5
 6:     a 2010 a3     6
 7:     b 2000 b1     7
 8:     b 2000 b2     8
 9:     b 2000 b3     9
10:     b 2010 b1    10
11:     b 2010 b2    11
12:     b 2010 b3    12

For each tree i, I want to calculate a new variable equal to the sum of the sizes of all other individuals in the same forest and year that are bigger than tree i.

I have created the following function:

f.new <- function(i, n) {
  DT[forest == DT[id == i, unique(forest)] &   # same forest as tree i
       year == n &                             # same year
       size > DT[id == i & year == n, size],   # trees larger than tree i
     sum(size, na.rm = TRUE)]                  # sum the sizes of those trees
}

Applied within the data.table, grouped by id and year, it gives the correct results.

DT[, new := f.new(id, year), by = .(id, year)]

> DT
    forest year id size new
 1:     a 2000 a1     1   5
 2:     a 2000 a2     2   3
 3:     a 2000 a3     3   0
 4:     a 2010 a1     4  11
 5:     a 2010 a2     5   6
 6:     a 2010 a3     6   0
 7:     b 2000 b1     7  17
 8:     b 2000 b2     8   9
 9:     b 2000 b3     9   0
10:     b 2010 b1    10  23
11:     b 2010 b2    11  12
12:     b 2010 b3    12   0

Note that my real dataset is large: 40 forests, 6 repeated years and about 20,000 individuals, for a total of almost 50,000 measurements. Running the function above takes 8-10 minutes (Windows 7, i5-6300U CPU @ 2.40 GHz, 8 GB RAM), and I need to rerun it often with small modifications, so it costs a lot of time.

  1. Is there a faster way to do this? I checked the *apply functions but could not figure out a solution based on them.
  2. Can I make the function generic, so that it does not rely on the specific structure of the dataset (i.e. so I could use columns other than "size")?

Just sort the data so that, within each forest/year group, larger trees come first; the sum of all larger trees is then simply a running total, computed in a single pass instead of rescanning the whole table once per row. This can be extremely fast:

setorder(DT, forest, year, -size)                       # largest trees first within each group
DT[, new := cumsum(size) - size, by = .(forest, year)]  # running total minus the row's own size
setorder(DT, forest, year, id)                          # restore the original row order
DT
#    forest year id size new
# 1:      a 2000 a1    1   5
# 2:      a 2000 a2    2   3
# 3:      a 2000 a3    3   0
# 4:      a 2010 a1    4  11
# 5:      a 2010 a2    5   6
# 6:      a 2010 a3    6   0
# 7:      b 2000 b1    7  17
# 8:      b 2000 b2    8   9
# 9:      b 2000 b3    9   0
#10:      b 2010 b1   10  23
#11:      b 2010 b2   11  12
#12:      b 2010 b3   12   0
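
This also suggests an answer to the second question: the same sort/cumsum idea can be wrapped in a function that takes the table and the column names as arguments. Below is a minimal sketch, assuming the grouping and value columns are passed as character strings (the function name f.generic and its arguments are illustrative, not from the original answer); it relies on setorderv() and get(), which accept column names as strings:

# Generic version of the sort/cumsum approach: `groups` and `value`
# are character vectors naming the grouping and the value columns.
f.generic <- function(dt, groups, value, newcol = "new") {
  # sort ascending by the grouping columns, descending by the value column
  setorderv(dt, c(groups, value),
            order = c(rep(1L, length(groups)), -1L))
  # within each group, the running total minus the row's own value
  # equals the sum of all larger values
  dt[, (newcol) := cumsum(get(value)) - get(value), by = groups]
  dt[]
}

f.generic(DT, groups = c("forest", "year"), value = "size")

One caveat applies to both this sketch and the code above: after sorting, cumsum(size) - size also counts individuals of exactly equal size that happen to sort earlier, whereas the original f.new used a strict size > comparison. In the example data all sizes are distinct, so the results agree; with ties, the two definitions differ.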
