
How to check p-values in REML (or a mixed model) using lmer() in R?

This is my data, and I'd like to do a REML analysis to see the variance components of the random factors.

Plant<- rep(c("P1","P2","P3","P4"), each=9)
Leaves<- rep(rep(c("L1","L2","L3"), each=3),4)
Rep<-rep(c(1,2,3),12)
Ca<- c(3.280, 3.090, 3.185, 3.520, 3.600, 3.560, 2.880, 2.800, 2.840, 2.460, 2.440,
       2.450, 1.870, 1.800, 1.835, 2.190, 2.100, 2.145, 2.770, 2.660, 2.715, 3.740,
       3.440, 3.590, 2.550, 2.700, 2.625, 3.780, 3.870, 3.825, 4.070, 4.200, 4.135,
       3.310, 3.400, 3.355)

tomato<- data.frame(Plant,Leaves,Rep,Ca)

and this is my code:

library(lme4)

# random intercepts for Plant and for Leaves nested within Plant
fit <- lmer(Ca ~ (1|Plant) + (1|Plant:Leaves), REML=TRUE, data=tomato)
summary(fit)

I assumed that leaves are nested within plants, so I coded this as (1|Plant:Leaves).
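For reference, my understanding is that lme4's nesting shorthand (1|Plant/Leaves) expands to exactly these two terms, so both ways of writing the model should give the same fit (a quick sanity check, assuming the tomato data frame above):

# (1|Plant/Leaves) expands to (1|Plant) + (1|Plant:Leaves)
fit_nested <- lmer(Ca ~ (1|Plant/Leaves), REML=TRUE, data=tomato)
VarCorr(fit_nested)   # should match the variance components of fit above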

[screenshot of the summary() output]

This result indicates that Plant and Leaves account for most of the variability in the data, while the replicates contribute little, doesn't it?

Then, I want to know whether plant and leaves (nested within plant) are significant or not. Where can I find the p-values? Or can I add more code to check them?

Or, with REML, can we only check the variance components?
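So far the variance components themselves are the only thing I know how to pull out numerically (a minimal sketch, assuming the fit object from the first model above):

vc <- as.data.frame(VarCorr(fit))        # columns grp, vcov, sdcor
vc$proportion <- vc$vcov / sum(vc$vcov)  # each term's share of the total variance
vc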

If so, if I instead treat plant as a fixed factor, as below,

# Plant as a fixed factor, Leaves within Plant still random
fit2 <- lmer(Ca ~ Plant + (1|Plant:Leaves), REML=TRUE, data=tomato)
summary(fit2)

[screenshot of the summary() output]

At least in this mixed model, a p-value for plant (the fixed factor) should be reported, shouldn't it? But I still can't find it.

Could you guide me on how to analyze the REML results?

This is an older question, but I wanted to clarify in case others come across it. It is usually not standard to test the significance of random effects directly, and it is worth asking why you are interested in significance tests for the random effects in the first place. An example from Ben Bolker's useful FAQ on mixed models shows one way to assess them, using likelihood ratio tests to compare models with different random-effects structures:

library(lme4)
m2 <- lmer(Reaction~Days+(1|Subject)+(0+Days|Subject),sleepstudy,REML=FALSE)
m1 <- update(m2,.~Days+(1|Subject))
m0 <- lm(Reaction~Days,sleepstudy)
anova(m2,m1,m0) ## two sequential tests

This produces a summary of whether the models with the added random-effects terms differ significantly, based on the chi-square change at each step:

## Data: sleepstudy
## Models:
## m0: Reaction ~ Days
## m1: Reaction ~ Days + (1 | Subject)
## m2: Reaction ~ Days + (1 | Subject) + (0 + Days | Subject)
##    npar    AIC    BIC  logLik deviance   Chisq Df Pr(>Chisq)    
## m0    3 1906.3 1915.9 -950.15   1900.3                          
## m1    4 1802.1 1814.8 -897.04   1794.1 106.214  1  < 2.2e-16 ***
## m2    5 1762.0 1778.0 -876.00   1752.0  42.075  1  8.782e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

You can see here that the null model (m0), a model adding random intercepts for Subject (m1), and a final model adding random slopes for Days by Subject (m2) are compared sequentially with the test. In this way, we can to some extent conclude that the m2 model provides the best balance of parsimony and fit. However, it is very important to have both theory-driven and data-driven reasons for this decision.
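The same idea can be carried over to the tomato data in the question; the sketch below is one reasonable way to set it up rather than the definitive analysis. It compares models with and without each random term via likelihood ratio tests, and uses the lmerTest package (a wrapper around lme4, not loaded in the question) to get Satterthwaite-approximation p-values for Plant when it is treated as a fixed factor:

library(lme4)
library(lmerTest)  # adds denominator-df approximations for fixed-effect tests

## Likelihood ratio tests for the random terms (fit with ML for the comparison)
full  <- lmer(Ca ~ (1|Plant) + (1|Plant:Leaves), REML=FALSE, data=tomato)
noLvs <- lmer(Ca ~ (1|Plant), REML=FALSE, data=tomato)
noAll <- lm(Ca ~ 1, data=tomato)
anova(full, noLvs, noAll)  # sequential chi-square tests, as in the sleepstudy example

## Fixed-effect p-value for Plant (Satterthwaite df by default in lmerTest)
fixed_fit <- lmerTest::lmer(Ca ~ Plant + (1|Plant:Leaves), REML=TRUE, data=tomato)
anova(fixed_fit)    # F test for Plant
summary(fixed_fit)  # t tests with p-values for each coefficient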
