From what I understand, standardized coefficients can be used as indices of effect size (with the possibility of applying rules of thumb such as Cohen's (1988)). I also understand that standardized coefficients are expressed in terms of standard deviations, which makes them fairly close to a Cohen's d.
I also understand that one way of obtaining standardized coefficients is to standardize the data beforehand. Another is to use the std.coef function from the MuMIn package.
These two methods are equivalent when using a linear predictor:
library(tidyverse)
library(MuMIn) # For standardized coefs
df <- iris %>%
select(Sepal.Length, Sepal.Width) %>%
scale() %>%
as.data.frame() %>%
mutate(Species = iris$Species)
fit <- lm(Sepal.Length ~ Sepal.Width, data=df)
round(coef(fit), 2)
round(MuMIn::std.coef(fit, partial.sd = TRUE), 2)
In both cases, the coefficient is -0.12. I interpret it as follows: for each increase of 1 standard deviation in Sepal.Width, Sepal.Length decreases by 0.12 of its own SD.
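As a quick sanity check (nothing here beyond base R): with a single standardized numeric predictor, the standardized slope is simply the Pearson correlation between the two variables, which you can verify directly on the raw data.

```r
# With one predictor, the standardized slope equals the Pearson correlation
round(cor(iris$Sepal.Width, iris$Sepal.Length), 2)  # -0.12
```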
And yet, these two methods give different results with a categorical predictor:
fit <- lm(Sepal.Length ~ Species, data=df)
round(coef(fit), 2)
round(MuMIn::std.coef(fit, partial.sd = TRUE), 2)
These give, for the effect of versicolor compared to setosa (the reference level, i.e. the intercept), 1.12 and 0.46 respectively.
Which one should I trust in order to say "the difference between versicolor and setosa is ... of Sepal.Length's SD"? Thanks a lot!
You didn't standardize the implicit dummy variables associated with Species, so those coefficients are not standardized. You could do so as follows:
dummies <- scale(contrasts(df$Species)[df$Species,])
fit <- lm(Sepal.Length ~ dummies, data = df)
round(coef(fit), 2)
# (Intercept) dummiesversicolor dummiesvirginica
# 0.00 0.53 0.90
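The gap between the 1.12 you obtained and the 0.53 above is just the standard deviation of the 0/1 versicolor indicator: multiplying the unscaled-dummy coefficient by that SD recovers the scaled-dummy one. A quick check, taking the 1.12 reported in the question as given:

```r
# SD of the 0/1 versicolor indicator (~0.47, since 1/3 of the rows are versicolor)
sd_dummy <- sd(as.numeric(iris$Species == "versicolor"))
round(1.12 * sd_dummy, 2)  # 0.53, matching the scaled-dummy coefficient
```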
This agrees with the results of MuMIn::std.coef if you set the partial.sd argument to FALSE.
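For completeness, here is that call on the original Species fit (rebuilding df as in the question; since Sepal.Length in df is already standardized, its SD is 1 and the rescaling only involves the SDs of the dummies):

```r
library(MuMIn)

# Rebuild df as in the question: standardized numeric columns plus Species
df <- data.frame(scale(iris[, c("Sepal.Length", "Sepal.Width")]),
                 Species = iris$Species)

fit <- lm(Sepal.Length ~ Species, data = df)
round(MuMIn::std.coef(fit, partial.sd = FALSE), 2)
# The Species slope estimates should match the 0.53 and 0.90
# obtained above with explicitly scaled dummies
```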