Please consider the following:
A custom function CustomFun takes several numeric arguments. The argument name is stored in the column resp and corresponds to the function argument name; the argument value is stored in the column val.
The data.frame holds information on several patients (id), hence the data needs to be grouped by id.
Problem:
How can we apply a custom function to a grouped data.frame or data.table, when that function takes its arguments from columns of that same data structure?
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(data.table)
#>
#> Attaching package: 'data.table'
#> The following objects are masked from 'package:dplyr':
#>
#> between, first, last
# The data
df.x <- data.frame(id = rep(c(1:2), each = 5),
                   resp = c("val.a", "val.b", "val.c", "val.d", "val.e"),
                   val = c(10, 15, NA, NA, NA,
                           1, 5, NA, NA, NA))
df.x
#> id resp val
#> 1 1 val.a 10
#> 2 1 val.b 15
#> 3 1 val.c NA
#> 4 1 val.d NA
#> 5 1 val.e NA
#> 6 2 val.a 1
#> 7 2 val.b 5
#> 8 2 val.c NA
#> 9 2 val.d NA
#> 10 2 val.e NA
# A simple function (minimal reproducible example)
CustomFun <- function(a, b) {
  a + b
}
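For reference, calling the function directly on the two non-NA values of each id reproduces the numbers expected in the res column below (a quick check, not part of the original question):

```r
CustomFun <- function(a, b) {
  a + b
}

CustomFun(a = 10, b = 15)  # val.a / val.b for id 1
#> [1] 25
CustomFun(a = 1, b = 5)    # val.a / val.b for id 2
#> [1] 6
```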
Desired output:
# Desired output
df.x %>% mutate(res = c(25, 25, NA, NA, NA, 6, 6, NA, NA, NA))
#> id resp val res
#> 1 1 val.a 10 25
#> 2 1 val.b 15 25
#> 3 1 val.c NA NA
#> 4 1 val.d NA NA
#> 5 1 val.e NA NA
#> 6 2 val.a 1 6
#> 7 2 val.b 5 6
#> 8 2 val.c NA NA
#> 9 2 val.d NA NA
#> 10 2 val.e NA NA
Own approach:
This approach works when there are no groups (a single id). Getting a non-NA result in the rows other than val.a and val.b would not be a problem, as those rows could be filtered out in a second step.
# Approach without the need of grouping: one id only, problem: NA also assigned to val in df.z[3:5, ]
# dplyr
df.z <- df.x %>% slice(1:5)
df.z
#> id resp val
#> 1 1 val.a 10
#> 2 1 val.b 15
#> 3 1 val.c NA
#> 4 1 val.d NA
#> 5 1 val.e NA
df.z %>% mutate(test = CustomFun(a = df.z %>% filter(resp == "val.a") %>% pull(val),
                                 b = df.z %>% filter(resp == "val.b") %>% pull(val)))
#> id resp val test
#> 1 1 val.a 10 25
#> 2 1 val.b 15 25
#> 3 1 val.c NA 25
#> 4 1 val.d NA 25
#> 5 1 val.e NA 25
# data.table
setDT(df.z)[, .(test = CustomFun(a = setDT(df.z)[resp == "val.a", val],
                                 b = setDT(df.z)[resp == "val.b", val])),
            by = .(id, val, resp)]
#> id val resp test
#> 1: 1 10 val.a 25
#> 2: 1 15 val.b 25
#> 3: 1 NA val.c 25
#> 4: 1 NA val.d 25
#> 5: 1 NA val.e 25
# NOT working for groups =====================================
# data.frame
df.x %>%
  group_by(id) %>%
  mutate(test = CustomFun(a = df.x %>% filter(resp == "val.a") %>% pull(val),
                          b = df.x %>% filter(resp == "val.b") %>% pull(val)))
#> Error in mutate_impl(.data, dots): Column `test` must be length 5 (the group size) or one, not 2
# data.table
setDT(df.x)[, .(test = CustomFun(a = setDT(df.x)[resp == "val.a", val],
                                 b = setDT(df.x)[resp == "val.b", val])),
            by = .(id, val, resp)]
#> id val resp test
#> 1: 1 10 val.a 25
#> 2: 1 10 val.a 6
#> 3: 1 15 val.b 25
#> 4: 1 15 val.b 6
#> 5: 1 NA val.c 25
#> 6: 1 NA val.c 6
#> 7: 1 NA val.d 25
#> 8: 1 NA val.d 6
#> 9: 1 NA val.e 25
#> 10: 1 NA val.e 6
#> 11: 2 1 val.a 25
#> 12: 2 1 val.a 6
#> 13: 2 5 val.b 25
#> 14: 2 5 val.b 6
#> 15: 2 NA val.c 25
#> 16: 2 NA val.c 6
#> 17: 2 NA val.d 25
#> 18: 2 NA val.d 6
#> 19: 2 NA val.e 25
#> 20: 2 NA val.e 6
Created on 2018-11-13 by the reprex package (v0.2.1)
Thanks a lot!
There were two different issues: you added grouping variables in data.table that were not needed, and you subset the data incorrectly in both versions.
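The incorrect subsetting is easy to see in isolation: filtering the full df.x inside a grouped mutate() ignores the grouping, so the extracted vector spans both ids and has length 2, which is neither the group size (5) nor 1, hence the error. A diagnostic sketch (not part of the original answer):

```r
library(dplyr)

df.x <- data.frame(id = rep(1:2, each = 5),
                   resp = c("val.a", "val.b", "val.c", "val.d", "val.e"),
                   val = c(10, 15, NA, NA, NA, 1, 5, NA, NA, NA))

# Subsetting the full data frame returns one value per id, not one per group:
a <- df.x %>% filter(resp == "val.a") %>% pull(val)
a
#> [1] 10  1
length(a)
#> [1] 2
```

Inside a grouped mutate() or a data.table by = id expression, val[resp == "val.a"] instead refers only to the current group's rows, so it yields a single value per id.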
Adjustment for data.table:
setDT(df.x)[!is.na(val), test := CustomFun(a = val[resp == "val.a"],
                                           b = val[resp == "val.b"]),
            by = id]
There was no need to group by resp and val, only by id.
For dplyr, you could do:
df.x %>%
  group_by(id) %>%
  mutate(test = if_else(!is.na(val),
                        CustomFun(a = val[resp == "val.a"],
                                  b = val[resp == "val.b"]),
                        NA_real_))
Output in both cases:
id resp val test
1: 1 val.a 10 25
2: 1 val.b 15 25
3: 1 val.c NA NA
4: 1 val.d NA NA
5: 1 val.e NA NA
6: 2 val.a 1 6
7: 2 val.b 5 6
8: 2 val.c NA NA
9: 2 val.d NA NA
10: 2 val.e NA NA
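An equivalent way to write the per-group lookup uses match(), which would also return NA (rather than a zero-length vector) if some group happened to lack a 'val.a' or 'val.b' row; a variation on the answer above, not from the original post:

```r
library(dplyr)

CustomFun <- function(a, b) a + b

df.x <- data.frame(id = rep(1:2, each = 5),
                   resp = c("val.a", "val.b", "val.c", "val.d", "val.e"),
                   val = c(10, 15, NA, NA, NA, 1, 5, NA, NA, NA))

out <- df.x %>%
  group_by(id) %>%
  # match() returns the position of the first matching resp within the group
  mutate(test = if_else(is.na(val), NA_real_,
                        CustomFun(a = val[match("val.a", resp)],
                                  b = val[match("val.b", resp)])))
out$test
#> [1] 25 25 NA NA NA  6  6 NA NA NA
```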
We could subset the values by group (assuming that there is only a single 'val.a' and 'val.b' per each 'id') and add:
library(dplyr)
df.x %>%
  group_by(id) %>%
  # NA^(is.na(val)) is 1 where val is present and NA where it is missing,
  # so the product keeps the group sum for observed rows and returns NA otherwise
  mutate(res = (val[resp == 'val.a'] + val[resp == 'val.b']) * NA^(is.na(val)))
# A tibble: 10 x 4
# Groups: id [2]
# id resp val res
# <int> <fct> <dbl> <dbl>
# 1 1 val.a 10 25
# 2 1 val.b 15 25
# 3 1 val.c NA NA
# 4 1 val.d NA NA
# 5 1 val.e NA NA
# 6 2 val.a 1 6
# 7 2 val.b 5 6
# 8 2 val.c NA NA
# 9 2 val.d NA NA
#10 2 val.e NA NA
Or another option is to filter, do a summarise by group and then join with the original dataset:
df.x %>%
  filter(resp %in% c('val.a', 'val.b')) %>%
  group_by(id) %>%
  summarise(res = sum(val)) %>%
  right_join(df.x, by = "id") %>%
  mutate(res = replace(res, is.na(val), NA))