
Fastest way to find all numbers between target numbers

Given numbers in a vector (e.g. 1 5 10 12), I'm looking for the numbers in the vector that fall within a range of my choice (e.g. c(9, 11)). I expect c(10) to be returned in this small example.
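For the toy example, a direct base-R subset gives the expected result (a minimal sketch of the task):

```r
x   <- c(1, 5, 10, 12)  # the vector to search
rng <- c(9, 11)         # the chosen range
x[x > rng[1] & x < rng[2]]
# [1] 10
```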

Here's a larger MWE below where I use dplyr::between to subset the relevant values. However, I'm looking for a faster way to do this (one that does not rely on parallelization). Let me know if I can explain anything better.

# Data
set.seed(1)
targets <- sort(sample(1:1e8, 1e7, replace=FALSE))
vec <- c(1345706, 1405938)

# Function
dplyr_between <- function(vec, targets) {
            require(dplyr)
            targets <- targets[dplyr::between(targets, vec[1], vec[2])]
            return(targets)
        }

test <- dplyr_between(vec, targets)
# 1345732 1345761 1345779 1345780 1345797

Edit: added a function based on a (since-deleted) comment that suggested x < max & x > min

# More Functions
base_compare <- function(vec, targets) {
            targets <- targets[targets < vec[2] & targets > vec[1]]
            return(targets)
        }

base_compare(vec, targets)
# 1345732 1345761 1345779 1345780 1345797
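One caveat worth noting: dplyr::between() is inclusive of its endpoints, while the strict comparisons in base_compare exclude them, so the two functions can disagree exactly at the boundaries (it doesn't matter for this particular vec, but it can for others):

```r
# dplyr::between() includes the endpoints; strict < and > exclude them.
if (requireNamespace("dplyr", quietly = TRUE)) {
  print(dplyr::between(10, 9, 10))  # TRUE  (10 is included)
}
print(10 > 9 & 10 < 10)             # FALSE (10 is excluded)
```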

Edit: added a function using data.table::inrange, suggested by @docendo

# inrange function
dt_inrange <- function(vec, targets) {
            require(data.table)
            targets <- targets[inrange(targets, vec[1], vec[2])]
            return(targets)
        }

dt_inrange(vec, targets)
# 1345732 1345761 1345779 1345780 1345797
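Like dplyr::between(), data.table::inrange() includes its bounds by default (incbounds = TRUE), so at the endpoints it matches dplyr_between rather than base_compare; pass incbounds = FALSE for strict comparisons:

```r
# inrange() checks lower <= x <= upper by default;
# incbounds = FALSE switches to lower < x < upper.
if (requireNamespace("data.table", quietly = TRUE)) {
  print(data.table::inrange(10, 9, 10))                     # TRUE
  print(data.table::inrange(10, 9, 10, incbounds = FALSE))  # FALSE
}
```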

Benchmark

library(microbenchmark)
microbenchmark(dplyr_between(vec, targets), base_compare(vec, targets), dt_inrange(vec, targets), times=10L)
# Unit: milliseconds
#                        expr      min       lq     mean   median       uq      max neval
# dplyr_between(vec, targets) 265.5192 283.5998 296.0947 296.7552 309.4403 323.3634    10
#  base_compare(vec, targets) 303.4629 317.8389 343.6311 343.3765 354.6891 427.1962    10
#    dt_inrange(vec, targets) 129.3800 131.1634 142.8658 144.4569 149.3728 164.5824    10

Thanks!

Simple Rcpp implementation that takes advantage of targets being sorted (it stops scanning as soon as values exceed the upper bound):

C++ code in temp.cpp:

#include <Rcpp.h>
#include <vector>

using namespace Rcpp;
// [[Rcpp::plugins(cpp11)]]
// [[Rcpp::export]]
std::vector<int> betweenRcpp(IntegerVector vec, int lower, int upper) {
  std::vector<int> ret;
  for(int i=0; i<vec.size(); i++) {
    if((vec[i] > lower) && (vec[i] < upper)) {
      ret.push_back(vec[i]);
    } else if(vec[i] >= upper) {
      break;
    }
  }
  return ret;
}

R code:

library(Rcpp)
library(microbenchmark)
setwd("~/Desktop")
# Data
set.seed(1)
targets <- sort(sample(1:1e8, 1e7, replace=FALSE))
vec <- c(1345706, 1405938)

# Function
dplyr_between <- function(vec, targets) {
  require(dplyr)
  targets <- targets[dplyr::between(targets, vec[1], vec[2])]
  return(targets)
}

sourceCpp("temp.cpp")

test <- dplyr_between(vec, targets)
test2 <- betweenRcpp(targets, vec[1], vec[2])

microbenchmark(dplyr_between(vec, targets), betweenRcpp(targets, vec[1], vec[2]), times=10)


Unit: microseconds
                                 expr       min        lq        mean      median        uq        max neval cld
          dplyr_between(vec, targets) 72066.027 77809.681 108023.3793 103723.4075 125280.89 173892.552    10   b
 betweenRcpp(targets, vec[1], vec[2])   439.124   464.475    502.7439    481.8025    543.12    594.578    10  a 

Test equality between solutions:

all(test == test2)

Since your data is sorted, you can use a keyed data table. I equate pre-sorting the data with pre-keying the data table, so the time to create the key is not part of the benchmark. I also removed the cruft from dt_inrange so the comparison focuses on the task at hand.

key_dt = data.table(targets, key = "targets")
# note that `targets` does not need to be sorted beforehand
# the key = "targets" will sort it as the table is created.
# You can also use `setkey` to add a key to an existing data table.

dt_inrange <- function(vec, targets) {
            targets[inrange(targets, vec[1], vec[2])]
        }

key_dt_inrange <- function(vec, target_dt) {
  target_dt[inrange(targets, vec[1], vec[2]), targets]
}
print(microbenchmark(
  dt_inrange(vec, targets),
  key_dt_inrange(vec, key_dt),
  times = 10
), signif = 3, order = "mean")
# Unit: milliseconds
#                         expr  min   lq     mean median    uq   max neval cld
#  key_dt_inrange(vec, key_dt) 47.5 47.9 54.75557   50.4  52.2  98.6    10   a
#     dt_inrange(vec, targets) 48.8 49.8 99.18932   60.4 185.0 219.0    10   a

For whatever reason, the unkeyed method shows some right skew, with a mean about 50% larger than the median; the keyed data table method avoids this.

microbenchmark(db = {
    x = findInterval(vec, targets)
    targets[(x[1]+1):x[2]]
},
dplyr_between(vec, targets))
#Unit: milliseconds
#                        expr       min        lq      mean    median        uq      max neval cld
#                          db  51.02101  58.43651  78.81237  70.51761  79.58609 410.3919   100  a 
# dplyr_between(vec, targets) 127.03341 148.65899 177.43284 156.37937 170.22009 431.5442   100   b


identical({x = findInterval(vec, targets)
          targets[(x[1]+1):x[2]]}, test)
#[1] TRUE
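findInterval() works here because targets is sorted: for each query value it does a binary search and returns the number of elements of the sorted vector that are less than or equal to that value. A small sketch of those semantics on the toy example:

```r
# For each query value, findInterval() returns how many elements of the
# sorted vector are <= that value (located by binary search).
targets_small <- c(1, 5, 10, 12)
findInterval(c(9, 11), targets_small)
# [1] 2 3
# so the values strictly inside (9, 11) are targets_small[(2 + 1):3], i.e. 10
```

Note the (x[1]+1):x[2] indexing makes the lower bound strict and the upper bound inclusive, mirroring the boundary behavior discussed above.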
