[R-lang] Lmer interactions in factorial designs

T. Florian Jaeger tiflo at csli.stanford.edu
Sat Jul 25 14:48:09 PDT 2009


Dear Jakke,

My answers are inserted below.

Imagine a 2 x 3 factorial design, with Factor A having two levels (A1 and
> A2) and Factor B having three levels (B1, B2, B3). The dependent variable
> is
> reaction time (logRT).
>
> If I'm interested in the main effects of A and B, I run the following:
>
> lmer(logRT~A+B+(1|Subject)+(1|Item), data)
>
> This would give me something along these lines:
>
> Fixed effects:
>             Estimate Std. Error t value
> (Intercept)  6.286297   0.023018  273.11
> A2           0.007858   0.004204    1.87
> B2          -0.017007   0.003689   -4.61
> B3          -0.012179   0.003700   -3.29
>
> If I understand correctly, the model here is evaluating A2 against A1, B2
> against B1, and B3 against B1. This leads me to my first question: Is there
> any way to find out if the main effect of B is significant?


Do I understand correctly that you want an omnibus test assessing whether B
contributes significant information to the model? Is that what you mean by
the "main effect" of B? If so, you need to do a model comparison of this model
against a model without B. Fit both models with maximum likelihood rather than
REML (i.e., REML=FALSE; see Baayen et al., 2008, JML for elaboration). Though I
think if you create the two models and use anova() to compare them, anova()
refits them with ML and does the right thing anyway.

So

l     <- lmer(logRT ~ A + B + (1|Subject) + (1|Item), data, REML=FALSE)
l.woB <- lmer(logRT ~ A     + (1|Subject) + (1|Item), data, REML=FALSE)

anova(l, l.woB)

should do the job. Note that I would also at least test whether random
slopes for A and B are needed.
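In case it helps, here is a self-contained sketch of both comparisons (the data
set is simulated and all names, effect sizes, and sample sizes are made up for
illustration):

```r
library(lme4)

## Simulated stand-in for the 2 x 3 design (all values hypothetical)
set.seed(1)
d <- expand.grid(A = factor(c("A1", "A2")),
                 B = factor(c("B1", "B2", "B3")),
                 Subject = factor(1:20),
                 Item = factor(1:6))
d$logRT <- 6.3 + 0.01 * (d$A == "A2") - 0.015 * (d$B != "B1") +
  rnorm(nrow(d), 0, 0.05)

## Both models fit with ML (REML = FALSE) so the likelihood-ratio test is valid
l     <- lmer(logRT ~ A + B + (1 | Subject) + (1 | Item), d, REML = FALSE)
l.woB <- lmer(logRT ~ A     + (1 | Subject) + (1 | Item), d, REML = FALSE)
anova(l, l.woB)          # omnibus test for B (2 df, one per contrast)

## And the random-slope check: do by-subject slopes for A and B improve fit?
l.slopes <- lmer(logRT ~ A + B + (1 + A + B | Subject) + (1 | Item),
                 d, REML = FALSE)
anova(l, l.slopes)
```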

Moving on with the same example, assume that I'm also interested in the
> interaction between A and B. Specifically, I want to find out whether the
> effect of A differs at the three levels of B. I run the following model:
>
> lmer(logRT~A*B+(1|Subject)+(1|Item), data)
>
> which would give me something like this:
>
> Fixed effects:
>                  Estimate Std. Error t value
> (Intercept)      6.286656   0.023133  271.76
> A2               0.007149   0.006009    1.19
> B2              -0.013616   0.005211   -2.61
> B3              -0.016637   0.005225   -3.18
> A2:B2           -0.006842   0.007377   -0.93
> A2:B3            0.008973   0.007395    1.21
>
> These are really hard tables to interpret. I believe we are now seeing the
> difference between A1 and A2 at B1 (0.007149). Furthermore, the last two
> lines tell us that at B2 the difference needs to be adjusted by -0.006842,
> and at B3 it needs to be adjusted by 0.008973, and that these adjustments
> are non-significant. This model doesn't provide information about the main
> effects.


Be cautious with the interpretation of A and B's contrast coefficients since
there may be collinearity in the model (especially when you include the
interaction of A and B). Have you checked the fixed effect correlations? I
recommend reading Baayen et al. (2008) and maybe browsing through Baayen's
book. Also, check out Victor Kuperman's and my slides for WOMM (
http://hlplab.wordpress.com/2009-pre-cuny-workshop-on-ordinary-and-multilevel-models-womm/).
These slides cover what you need to do about collinearity (as well as what
that is to begin with ;).
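To make that check concrete: the fixed-effect correlations are printed at the
bottom of summary(), and can also be computed directly from the estimated
covariance matrix of the coefficients. A sketch on made-up data (factor names
and values are hypothetical):

```r
library(lme4)

## Made-up balanced 2 x 3 data, treatment-coded (the R default)
set.seed(1)
d <- expand.grid(A = factor(c("A1", "A2")),
                 B = factor(c("B1", "B2", "B3")),
                 Subject = factor(1:20))
d$logRT <- 6.3 + rnorm(nrow(d), 0, 0.05)

l <- lmer(logRT ~ A * B + (1 | Subject), data = d)

## Correlations of the fixed-effect estimates: entries near +/-1 signal
## collinearity between the contrast coefficients (under treatment coding,
## typically between the simple effects and their interaction terms)
round(cov2cor(as.matrix(vcov(l))), 2)
```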



> If I wanted to report these, would I refer back to the first model?


You report everything from the last model. If you use treatment coding of
factors (the R default), then what you called main effects are actually not
main effects; they are simple effects. To get main effects as in ANOVA, you
should sum-code (contr.sum()) the factors. There are some commented
R scripts on coding on our lab wiki (
http://wiki.bcs.rochester.edu:2525/HlpLab/StatsCourses/HLPMiniCourse).
Conveniently, contrast coding will also deal with collinearity between main
effects and interactions if you have balanced data.
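For example, sum coding can be set per factor before fitting. A minimal sketch
(the factor names come from your example, but the data frame itself is
hypothetical):

```r
## Hypothetical data frame with the two factors from the design
d <- data.frame(A = factor(rep(c("A1", "A2"), each = 3)),
                B = factor(rep(c("B1", "B2", "B3"), 2)))

## Sum coding: each contrast is a deviation from the grand mean, so
## lower-order terms are interpretable as ANOVA-style main effects
contrasts(d$A) <- contr.sum(2)
contrasts(d$B) <- contr.sum(3)

contrasts(d$B)                    # the two sum-coded contrast columns for B
head(model.matrix(~ A * B, d))    # the predictors lmer would actually use
```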


> And my third question: when we do ANOVAs, we're told to first see if the
> interaction between A and B is significant, and only then look at the
> interaction contrasts. Lmer in the above table gives you (some of) the
> contrasts, but doesn't evaluate the interaction as a whole. Do we still
> need
> to worry about the interaction as a whole, and if yes, how would we
> evaluate
> it?


If you want to follow ANOVA logic, do model comparison: start with the full
model and then do stepwise removal. For a balanced data set, this procedure
basically brings you back to ANOVA-land ;) -- while still taking advantage
of mixed models (relaxed assumptions, etc.). So, start with a full model:

1) l <- lmer(logRT ~ A*B + (1+A*B|Subject) + (1+A*B|Item), data)
2) follow the procedure outlined on our lab blog to figure out which random
effects you need:
http://hlplab.wordpress.com/2009/05/14/random-effect-should-i-stay-or-should-i-go/
3) take the resulting model and compare it against a model without the
interaction, using anova(l, l.woInteraction).
4) *if removal of the interaction is not significant*, you could further
compare the model against a model with only A (see above).
5) Interpret coefficients in the full model or in the reduced model (I would
do the former unless I don't have much data or cannot reduce collinearity,
but you may prefer the latter).
6) If you find any of the scripts of references given above useful,
cite/refer to them, so that others can find them ;)
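Steps 3 and 4 above can be sketched as follows (simulated data; all names and
effect sizes are made up, and I use intercept-only random effects purely for
brevity -- in a real analysis you would start from the maximal random-effects
structure in step 1 and prune it in step 2):

```r
library(lme4)

## Simulated stand-in for the 2 x 3 design (all values made up)
set.seed(2)
d <- expand.grid(A = factor(c("A1", "A2")),
                 B = factor(c("B1", "B2", "B3")),
                 Subject = factor(1:24),
                 Item = factor(1:4))
d$logRT <- 6.3 + 0.01 * (d$A == "A2") - 0.015 * (d$B != "B1") +
  rnorm(nrow(d), 0, 0.05)

## Sum-code so the interaction terms are orthogonal to the main effects
contrasts(d$A) <- contr.sum(2)
contrasts(d$B) <- contr.sum(3)

## Step 3: full model vs. model without the interaction (ML fits)
l       <- lmer(logRT ~ A * B + (1 | Subject) + (1 | Item), d, REML = FALSE)
l.woInt <- lmer(logRT ~ A + B + (1 | Subject) + (1 | Item), d, REML = FALSE)
anova(l, l.woInt)        # 2-df test of the A:B interaction

## Step 4: if the interaction can be dropped, test B in the reduced model
l.woB <- lmer(logRT ~ A + (1 | Subject) + (1 | Item), d, REML = FALSE)
anova(l.woInt, l.woB)
```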

HTH,
Florian



>
>
> Many thanks in advance!
>
> Jakke
>
>
> _______________________________________________
> R-lang mailing list
> R-lang at ling.ucsd.edu
> http://pidgin.ucsd.edu/mailman/listinfo/r-lang
>