
Why does an "id variable" in tidymodels/recipes play a predictor role?

This is the same problem as in "Predict with step_naomit and retain ID using tidymodels", but even though that question has an accepted answer, the OP's last comment points out that the "id variable" is being used as a predictor, as can be seen by looking at model$fit$variable.importance.

I have a dataset with an "id variable" that I want to keep. I thought I could achieve this through the recipe() specification.

library(tidymodels)

# label is an identifier variable I want to keep even though it's not
# a predictor
df <- tibble(label = 1:50, 
             x = rnorm(50, 0, 5), 
             f = factor(sample(c('a', 'b', 'c'), 50, replace = TRUE)),
             y = factor(sample(c('Y', 'N'), 50, replace = TRUE)) )

df_split <- initial_split(df, prop = 0.70)

# Make up any recipe: just note I specify 'label' as "id variable"
rec <- recipe(training(df_split)) %>% 
  update_role(label, new_role = "id variable") %>% 
  update_role(y, new_role = "outcome") %>% 
  update_role(x, new_role = "predictor") %>% 
  update_role(f, new_role = "predictor") %>% 
  step_corr(all_numeric(), -all_outcomes()) %>%
  step_dummy(all_predictors(),-all_numeric()) %>% 
  step_meanimpute(all_numeric(), -all_outcomes())

train_juiced <- prep(rec, training(df_split)) %>% juice()

logit_fit <- logistic_reg(mode = "classification") %>%
  set_engine(engine = "glm") %>% 
  fit(y ~ ., data = train_juiced)

# Why is label a variable in the model ?
logit_fit[['fit']][['coefficients']]
#> (Intercept)       label           x         f_b         f_c 
#>  1.03664140 -0.01405316  0.22357266 -1.80701531 -1.66285399

Created on 2020-01-27 by the reprex package (v0.3.0)

But even though I did specify that label is an id variable, it was used as a predictor. So maybe I can put just the specific terms I want in the formula, and add label as an id variable separately.

rec <- recipe(training(df_split), y ~ x + f) %>% 
  update_role(label, new_role = "id variable") %>% 
  step_corr(all_numeric(), -all_outcomes()) %>%
  step_dummy(all_predictors(),-all_numeric()) %>% 
  step_meanimpute(all_numeric(), -all_outcomes())
#> Error in .f(.x[[i]], ...): object 'label' not found

Created on 2020-01-27 by the reprex package (v0.3.0)

I can try just not mentioning label:

rec <- recipe(training(df_split), y ~ x + f) %>% 
  step_corr(all_numeric(), -all_outcomes()) %>%
  step_dummy(all_predictors(),-all_numeric()) %>% 
  step_meanimpute(all_numeric(), -all_outcomes())


train_juiced <- prep(rec, training(df_split)) %>% juice()

logit_fit <- logistic_reg(mode = "classification") %>%
  set_engine(engine = "glm") %>% 
  fit(y ~ ., data = train_juiced)

# Why is label a variable in the model ?
logit_fit[['fit']][['coefficients']]
#> (Intercept)           x         f_b         f_c 
#> -0.98950228  0.03734093  0.98945339  1.27014824

train_juiced
#> # A tibble: 35 x 4
#>          x y       f_b   f_c
#>      <dbl> <fct> <dbl> <dbl>
#>  1 -0.928  Y         1     0
#>  2  4.54   N         0     0
#>  3 -1.14   N         1     0
#>  4 -5.19   N         1     0
#>  5 -4.79   N         0     0
#>  6 -6.00   N         0     0
#>  7  3.83   N         0     1
#>  8 -8.66   Y         1     0
#>  9 -0.0849 Y         1     0
#> 10 -3.57   Y         0     1
#> # ... with 25 more rows

Created on 2020-01-27 by the reprex package (v0.3.0)

OK, so the model works, but I have lost label.
How can I keep it?

The main issue/conceptual problem you are running into is that once you juice() the recipe, it is just data, i.e. literally a data frame. When you use it to fit a model, the model has no way of knowing that some variables had special roles.

library(tidymodels)

# label is an identifier variable to keep even though it's not a predictor
df <- tibble(label = 1:50, 
             x = rnorm(50, 0, 5), 
             f = factor(sample(c('a', 'b', 'c'), 50, replace = TRUE)),
             y = factor(sample(c('Y', 'N'), 50, replace = TRUE)) )

df_split <- initial_split(df, prop = 0.70)

rec <- recipe(y ~ ., training(df_split)) %>% 
  update_role(label, new_role = "id variable") %>% 
  step_corr(all_numeric(), -all_outcomes()) %>%
  step_dummy(all_predictors(),-all_numeric()) %>% 
  step_meanimpute(all_numeric(), -all_outcomes()) %>%
  prep()

train_juiced <- juice(rec)
train_juiced
#> # A tibble: 35 x 5
#>    label     x y       f_b   f_c
#>    <int> <dbl> <fct> <dbl> <dbl>
#>  1     1  1.80 N         1     0
#>  2     3  1.45 N         0     0
#>  3     5 -5.00 N         0     0
#>  4     6 -4.15 N         1     0
#>  5     7  1.37 Y         0     1
#>  6     8  1.62 Y         0     1
#>  7    10 -1.77 Y         1     0
#>  8    11 -3.15 N         0     1
#>  9    12 -2.02 Y         0     1
#> 10    13  2.65 Y         0     1
#> # … with 25 more rows

Notice that train_juiced is literally just a regular tibble. If you train a model on this tibble using fit(), the model knows nothing about the recipe that was used to transform the data.
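As a quick check (a small sketch, not part of the original answer): the role information lives in the recipe object itself, not in the juiced data, and summary() on the prepped recipe lists each variable with its recorded role.

# summary() on a prepped recipe returns a tibble with one row per
# variable, giving its type, role ("predictor", "outcome", "id variable"),
# and source; label shows up here with its "id variable" role intact
summary(rec)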

The tidymodels framework does have a way to train a model using the role information in the recipe. Probably the easiest way to do that is to use workflows.

logit_spec <- logistic_reg(mode = "classification") %>%
  set_engine(engine = "glm") 

wf <- workflow() %>%
  add_model(logit_spec) %>%
  add_recipe(rec)

logit_fit <- fit(wf, training(df_split))

# No more label in the model
logit_fit
#> ══ Workflow [trained] ══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════
#> Preprocessor: Recipe
#> Model: logistic_reg()
#> 
#> ── Preprocessor ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
#> 3 Recipe Steps
#> 
#> ● step_corr()
#> ● step_dummy()
#> ● step_meanimpute()
#> 
#> ── Model ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
#> 
#> Call:  stats::glm(formula = formula, family = stats::binomial, data = data)
#> 
#> Coefficients:
#> (Intercept)            x          f_b          f_c  
#>     0.42331     -0.04234     -0.04991      0.64728  
#> 
#> Degrees of Freedom: 34 Total (i.e. Null);  31 Residual
#> Null Deviance:       45 
#> Residual Deviance: 44.41     AIC: 52.41

Created on 2020-02-15 by the reprex package (v0.3.0)

No more label in the model!
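To carry the retained id through to predictions, here is a minimal sketch (typical usage, not part of the original answer): predict() on a trained workflow applies the recipe to new_data itself, so you can pass the raw test set and bind label back onto the predictions.

# predict() on a trained workflow preprocesses new_data with the recipe,
# so the raw test set can be passed in directly; the id column is then
# bound back alongside the class predictions
test_data <- testing(df_split)

test_data %>%
  select(label, y) %>%
  bind_cols(predict(logit_fit, new_data = test_data))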
