
Subset data.table by the third column of a multi-column key

Say I have a data.table, with a 3-column key. For example, let's say we have time nested in students nested in schools.

library(data.table)

dt <- data.table(expand.grid(schools = 200:210, students = 1:100, time = 1:5),
                 key = c("schools", "students", "time"))

And say I want to take the subset of my data that only includes time 5. I know I can use subset:

time.5 <- subset(dt, time == 5)

Or I could do a vector scan:

time.5 <- dt[time == 5]

But those aren't the "data.table way" -- I want to take advantage of the speed of a binary search. Since I have 3 columns in my key, using unique as follows produces incorrect results:

dt[.(unique(schools), unique(students), 5)]

Any ideas?
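A note on why the .() attempt goes wrong: .() is an alias for list(), so the three vectors are matched positionally (and recycled) rather than crossed. The keyed equivalent of the filter is a cross join with CJ(), which builds every schools/students combination. A minimal sketch (not benchmarked here; the columns are referenced via dt$ because i is evaluated in the calling scope):

 # CJ() returns a keyed table of all combinations, so the join is a binary search
 time.5 <- dt[CJ(unique(dt$schools), unique(dt$students), 5)]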

You may try:

 setkey(dt, time)  # re-key on 'time' so the join below uses a binary search
 dt[J(5)]

 all(dt[J(5)][, time] == 5)
 #[1] TRUE
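Note that setkey(dt, time) physically reorders dt and replaces its three-column key. If downstream code expects the original key on the result, one option (this is what f2() in the benchmarks below measures) is to re-key the subset, e.g.:

 setkey(dt, time)
 time.5 <- dt[J(5)]
 setkeyv(time.5, c("schools", "students", "time"))  # restore the expected key on the subset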

Benchmarks

dt1 <- data.table(expand.grid(schools = 200:450, students = 1:600, time = 1:50),
                  key = c('schools', 'students', 'time'))

# f1: vector scan
f1 <- function() dt1[time == 5]

# f2: re-key on 'time', binary search, then restore the original key on the result
f2 <- function() {
  setkey(dt1, time)
  new.dt <- dt1[J(5)]
  setkeyv(new.dt, colnames(dt1))
}

# f3: re-key on 'time' and binary search only
f3 <- function() {
  setkey(dt1, time)
  dt1[J(5)]
}


library(microbenchmark)
microbenchmark(f1(), f2(), f3(), unit = 'relative', times = 20L)
#Unit: relative
#expr      min       lq     mean   median       uq      max neval cld
#f1() 3.188559 3.240377 3.342936 3.218387 3.224352 5.319811    20   b
#f2() 1.050202 1.083136 1.081707 1.089292 1.087572 1.129741    20  a 
#f3() 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000    20  a 

If query performance is the main factor, you can speed up @akrun's solution further.

# install_github("jangorecki/dwtools")
# or just source: https://github.com/jangorecki/dwtools/blob/master/R/idxv.R
library(dwtools)
# Instead of a single key, you can define multiple indices to be used
# automatically, without the need to re-setkey each time.
Idx = list(
  c('schools', 'students', 'time'),
  c('time')
)
IDX <- idxv(dt1, Idx)
f4 <- function(){
  dt1[CJI(IDX, TRUE, TRUE, 5)]  # query via the precomputed indices
}
microbenchmark(f4(), f1(), f2(), f3(), unit='relative', times=1L)
#Unit: relative
#expr       min        lq      mean    median        uq       max neval
#f4()  1.000000  1.000000  1.000000  1.000000  1.000000  1.000000     1
#f1()  6.431114  6.431114  6.431114  6.431114  6.431114  6.431114     1
#f2()  2.320577  2.320577  2.320577  2.320577  2.320577  2.320577     1
#f3() 23.706655 23.706655 23.706655 23.706655 23.706655 23.706655     1

Correct me if I'm wrong, but it seems that the f3() computation reuses its key when microbenchmarking with times > 1L.
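That observation appears correct: setkey() detects when a table is already keyed by the requested column and skips the reorder, so only the first f3() call pays the sorting cost. A rough way to see this (timings will vary by machine):

 setkey(dt1, time)               # first call: full reorder
 system.time(setkey(dt1, time))  # repeat call: key already in place, near-instant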

Be aware that multiple indices (Idx) require a lot of memory.
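For reference, recent versions of data.table ship secondary indices, which cover the same use case without an extra package and without physically reordering the table; a minimal sketch:

 library(data.table)
 setindex(dt1, time)     # builds a secondary index on 'time'; the primary key is untouched
 dt1[.(5), on = "time"]  # joins with on= can use the index, i.e. a binary search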
