
How to efficiently operate on column of vectors in data.table

I want to perform an operation on a column that consists of numeric vectors, and I'm wondering what the best way to do it is.

So far I have tried the following, and the set approach seems to be the best, but maybe I'm missing some superior way to do this? How big a speed boost could be expected by doing this in C++?

testVector <- data.table::data.table(A = lapply(1:10^5, function(x) runif(100)))

microbenchmark::microbenchmark(lapply = testVector[, B := lapply(A, diff)],
                               map = testVector[, C := Map(diff, A)],
                               set = set(testVector, NULL, "D", lapply(testVector[["A"]], diff)),
                               forset = {for(i in seq(nrow(testVector))) set(testVector, i, "E", list(list(diff(testVector[[i, "A"]]))))},
                               times = 10L)

The results are the following:

Unit: milliseconds
   expr       min        lq     mean   median       uq      max neval
    set  789.7967  924.8178 1031.923 1082.325 1146.306 1174.671    10
 lapply 1122.2454 1468.9556 1563.002 1619.668 1692.217 1919.405    10
    map 1297.5236 1320.7022 1571.344 1592.176 1695.673 2012.051    10
 forset 1887.0003 2023.7357 2139.202 2174.912 2245.943 2396.844    10

Update

I have checked how Rcpp fares with the task. While my C++ skills are very poor, the speed increase is >10x.

The C++ code:

#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
List cppDiff(List column){
  int cSize = column.size();
  // pre-allocate the output list with one (empty) numeric vector per element
  List outputColumn(cSize, NumericVector());
  for(int i = 0; i < cSize; ++i){
    // take the i-th vector of the list-column and store its lagged differences
    NumericVector vectorElement = column[i];
    outputColumn[i] = Rcpp::diff(vectorElement);
  }

  return(outputColumn);
}

Testing code:

library(Rcpp);library(data.table);library(microbenchmark)
sourceCpp("diffColumn.cpp")

vLen <- 100L
cNum <- 1e4L
test <- data.table(A = lapply(1L:cNum, function(x) runif(vLen)))

throughMatrix <- function(column){
  # unlist into a vLen x cNum matrix, take column-wise differences in one call,
  # then split the columns back out into a list
  difmat <- diff(matrix(unlist(column), nrow = vLen, ncol = cNum))
  lapply(seq(cNum), function(i) difmat[, i])
}
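
# Why this works: diff() on a matrix takes lagged differences down each column,
# so unlisting the equal-length vectors into a matrix and diffing once matches
# diff() applied to each vector separately. Tiny illustrative check (toy data):
m <- matrix(1:12, nrow = 4, ncol = 3)
all.equal(diff(m), sapply(seq_len(ncol(m)), function(i) diff(m[, i])))  # TRUE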

microbenchmark::microbenchmark(DT  = set(test, NULL, "B", lapply(test[["A"]], diff)),
                               mat = set(test, NULL, "C", throughMatrix(test[["A"]])),
                               cpp = set(test, NULL, "D", cppDiff(test[["A"]])),
                               times = 5)

> all.equal(test$B, test$C)
[1] TRUE
> all.equal(test$B, test$D)
[1] TRUE

Unit: milliseconds
 expr       min        lq       mean     median         uq       max neval
   DT 845.04418 912.60961 1024.79183 1011.59417 1107.14306 1323.9963    10
  mat 643.02187 663.92700  778.91145  816.95972  844.37206  864.1173    10
  cpp  45.28504  49.35746   84.27799   78.32085   84.87942  226.1347    10

And another benchmark for a 10,000 x 10,000 (vLen x cNum) case:

Unit: milliseconds
 expr       min        lq      mean     median        uq       max neval
   DT 7851.4352 8504.3501 21632.018 25246.7860 29133.358 37424.163     5
  mat 8679.9386 8724.1497 22852.724 18235.7693 39199.966 39423.794     5
  cpp  244.8572  247.7443  1439.011   303.2556  2715.643  3683.552     5

Have you considered using matrices? The syntax and data structure are different enough that the code below isn't a drop-in replacement, but depending on the analysis pipeline before and after this operation, I suspect matrix inputs/outputs might be a more fitting way to handle the data than list-columns anyway.

library(data.table)

VectorLength <- 1e5L
testVector <- data.table::data.table(A = lapply(1:VectorLength, function(x) runif(100)))
A <- matrix(data = runif(100L*VectorLength),nrow = 100L,ncol = VectorLength)

microbenchmark::microbenchmark(set = testVector[, B := lapply(A, diff)],
                               Matrix = B <- diff(A),
                               times = 10L)

Yields the following on a Windows PC:

Unit: milliseconds
   expr      min       lq     mean    median        uq       max neval
    set 1143.933 1251.064 1316.944 1331.4672 1376.8016 1431.8988    10
 Matrix  307.945  315.689  363.255  335.4382  390.1124  499.5492    10

And the following on a Linux server running Ubuntu 14.04:

Unit: milliseconds
   expr       min        lq      mean    median        uq       max neval
    set 1342.6969 1410.3132 1519.6830 1551.2051 1594.3431 1699.7480    10
 Matrix  285.0472  297.3283  375.0613  302.4198  488.3482  503.0959    10

Just for reference, here is what the output looks like when coerced to a data.table:

str(as.data.table(t(B)))

returns

Classes ‘data.table’ and 'data.frame':  99 obs. of  100000 variables:
 $ V1     : num  0.23 0.24 -0.731 0.724 0.074 ...
 $ V2     : num  -0.628 0.585 -0.164 0.269 -0.16 ...
 $ V3     : num  0.1735 0.1128 -0.3069 0.0341 -0.2664 ...
 $ V4     : num  -0.392 0.593 -0.345 -0.327 0.747 ...
 $ V5     : num  0.1084 0.2915 0.3858 -0.1574 -0.0929 ...
 $ V6     : num  -0.2053 -0.2669 -0.2 0.0214 0.1111 ...
 $ V7     : num  0.0582 -0.2141 0.7282 -0.6877 0.4981 ...
 $ V8     : num  -0.439 -0.114 0.275 0.4 -0.184 ...
 $ V9     : num  0.13673 0.55244 -0.43132 0.21692 -0.00308 ...
 $ V10    : num  0.701 -0.0486 -0.1464 -0.5595 -0.046 ...
 $ V11    : num  0.3583 -0.2588 -0.0742 -0.2113 0.9434 ...
 $ V12    : num  -0.1146 0.5346 -0.0594 -0.6534 0.6112 ...
 $ V13    : num  0.473 0.307 -0.544 0.718 -0.315 ...
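
If the rest of the pipeline still expects a list-column, the diffed matrix can be split back into one plain numeric vector per row. A minimal sketch, assuming the testVector and B <- diff(A) objects from the benchmark above (the column name B_list is my own choice):

# split each column of the 99 x 100000 matrix B back out as a plain vector,
# giving one list element per row of testVector
testVector[, B_list := lapply(seq_len(ncol(B)), function(i) B[, i])]
str(testVector$B_list[[1]])
# num [1:99] ...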

Update: It depends.

So I was curious how the performance improvement would look at a larger scale, and this turns out to be an interesting problem where the most efficient method is highly dependent on the size and shape of the data.

Using the following format:

VectorLength <- 1e5L
ItemLength <- 1e2L
testVector <- data.table::data.table(A = lapply(1:VectorLength, function(x) runif(ItemLength)))
A <- matrix(data = runif(ItemLength*VectorLength),nrow = ItemLength,ncol = VectorLength)

microbenchmark::microbenchmark(set = set(testVector, NULL, "D", lapply(testVector[["A"]], diff)),
                               Matrix = B <- diff(A),
                               times = 5L)

I ran through a range of VectorLength and ItemLength values, referred to from here on as (Vector x Item), where (10,000 x 100) signifies 10,000 vectors (data.table rows) with 100 elements each. Since the matrix form is transposed to fit the base R diff function, this translates to a matrix with 100 rows and 10,000 columns. A sketch of how such a sweep could be automated is shown below.
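
The sweep itself isn't shown as code in the original post; this is only a rough sketch of how it could be automated (the grid values, loop structure, and result handling are my own illustrative choices):

library(data.table)

grid <- expand.grid(VectorLength = 1e4L,
                    ItemLength   = c(10L, 100L, 1000L, 10000L))

results <- lapply(seq_len(nrow(grid)), function(g) {
  vl <- grid$VectorLength[g]
  il <- grid$ItemLength[g]
  # build both representations of the same-shaped data for this grid cell
  testVector <- data.table(A = lapply(seq_len(vl), function(x) runif(il)))
  A <- matrix(runif(il * vl), nrow = il, ncol = vl)
  microbenchmark::microbenchmark(set = set(testVector, NULL, "D", lapply(testVector[["A"]], diff)),
                                 Matrix = B <- diff(A),
                                 times = 5L)
})
names(results) <- paste0("(", grid$VectorLength, " x ", grid$ItemLength, ")")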

(10,000 x 10)

Unit: milliseconds
   expr       min        lq       mean   median         uq        max neval
    set 83.947769 88.420871 102.822626 90.91088 104.737002 146.096606     5
 Matrix  2.368524  2.437371   2.661553  2.45122   2.476745   3.573904     5

(10,000 x 100)

Unit: milliseconds
   expr       min        lq      mean    median        uq       max neval
    set 119.33550 140.35294 174.17641 198.14286 199.56239 213.48837     5
 Matrix  20.75578  23.00535  60.10874  79.47677  88.33331  88.97251     5

(10,000 x 1,000)

Unit: milliseconds
   expr      min       lq     mean   median       uq      max neval
    set 337.0859 382.6305 407.9396 429.0512 440.6331 450.2971     5
 Matrix 300.3360 316.5533 411.4678 352.0477 534.4063 553.9957     5

(10,000 x 10,000)

Unit: milliseconds
   expr      min       lq     mean   median       uq      max neval
    set 1428.319 1483.324 1518.096 1508.114 1578.929 1591.792     5
 Matrix 3059.825 3119.654 4366.107 3224.755 6164.489 6261.815     5

The take-away

Depending on the dimensions of the data you will actually be using, the relative performance of methods will change drastically.

If your actual data is similar to what you originally proposed for benchmarking purposes, then the matrix operation should work well, but if the dimensions differ one way or another, I'd re-benchmark with data of a representative shape.

Hope this is as helpful for you as it was interesting to me.
