[R-lang] Lmer interactions in factorial designs
T. Florian Jaeger
tiflo at csli.stanford.edu
Fri Jul 31 13:16:41 PDT 2009
On Thu, Jul 30, 2009 at 5:59 AM, Jakke Tamminen <j.tamminen at psych.york.ac.uk> wrote:
> My thanks to Andy, James, and Florian for their responses to my question.
> The replies were, as always, prompt, helpful, and lucid. I have a couple of
> quick further questions about model comparison: I think all three replies
> included suggestions of doing likelihood ratio tests to assess the
> significance of a single fixed factor in the model. How reliable is this? As
> far as I can recall, Baayen in his book and in the JML paper only uses this
> to evaluate random factors, and the paper by Bolker et al that Andy cited
> recommends against it in the case of fixed factors. Are there good
> alternatives?
>
It's still being debated what the best approach is there, but I think the
likelihood ratio test is a valid alternative for now, especially for simple
models.
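For concreteness, here is a minimal sketch of such a likelihood ratio test for
a single fixed factor (the data frame "dat" and its columns are hypothetical,
and the random effects are kept to intercepts only to keep the example short):

library(lme4)

## Nested models should be fit with ML (REML = FALSE) when the
## likelihood ratio test targets a fixed effect.
m1 <- lmer(logRT ~ A + B + (1 | Subject) + (1 | Item),
           data = dat, REML = FALSE)
m0 <- lmer(logRT ~ A + (1 | Subject) + (1 | Item),
           data = dat, REML = FALSE)

## anova() on the two nested fits reports the chi-square test for B.
anova(m1, m0)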
>
> Finally, a quick follow up question regarding Florian's six-step procedure,
> reproduced below. In step 5 you suggest I interpret the coefficients in the
> full _or_ the reduced model. So is it acceptable to look at the coefficients
> of a factor or an interaction even if the factor or interaction does not
> "survive" a likelihood ratio test, i.e. does not significantly contribute to
> the fit of the model?
>
I would usually leave non-significant predictors in the model *if they are
theoretically motivated* (which should be why you put them in there to begin
with ;)). There are many different traditions and approaches, but I feel that,
if you have enough data to avoid overfitting or other problems, you should
leave even clearly non-significant predictors in the model (a very lenient
removal threshold like p > .7 is often given).
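To make this concrete: even if a term does not survive the likelihood ratio
test, its estimate can still be read off the full fit. A minimal sketch (data
frame and column names are hypothetical; random slopes omitted for brevity):

library(lme4)

## "l" plays the role of the full model from step 1 below.
l <- lmer(logRT ~ A * B + (1 | Subject) + (1 | Item), data = dat)

summary(l)   # full coefficient table, including the A:B interaction;
             # the "Correlation of Fixed Effects" block is a rough
             # collinearity check
fixef(l)     # just the fixed-effect estimates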
HTH,
Florian
> I hope that makes sense, thank you again for all the help!
>
> Jakke
>
>
> 1) l <- lmer(logRT~A*B+(1+A*B|Subject)+(1+A*B|Item), data)
> 2) follow the procedure outlined on our lab blog to figure out which random
> effects you need:
> http://hlplab.wordpress.com/2009/05/14/random-effect-should-i-stay-or-should-i-go/
> 3) take the resulting model and compare it against a model without the
> interaction, using anova(l, l.woInteraction) (a minimal worked sketch
> follows after this list).
> 4) *if removal of the interaction is not significant*, you could further
> compare the model against a model with only A (see above).
> 5) Interpret coefficients in the full model or in the reduced model (I
> would do the former unless I don't have much data or cannot reduce
> collinearity, but you may prefer the latter).
> 6) If you find any of the scripts or references given above useful,
> cite/refer to them, so that others can find them ;)
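For reference, a minimal worked sketch of steps 1, 3, and 4 above (the data
frame "dat" and its columns are hypothetical; random intercepts only are used
for brevity, whereas step 2 is where you would decide how much of the full
random-effect structure to keep):

library(lme4)

## Step 1 (simplified): full factorial model, fit with ML for the
## model comparisons below.
l <- lmer(logRT ~ A * B + (1 | Subject) + (1 | Item),
          data = dat, REML = FALSE)

## Step 3: drop the interaction and test it with a likelihood ratio test.
l.woInteraction <- lmer(logRT ~ A + B + (1 | Subject) + (1 | Item),
                        data = dat, REML = FALSE)
anova(l, l.woInteraction)

## Step 4: only if the interaction was not needed, test B the same way.
l.Aonly <- lmer(logRT ~ A + (1 | Subject) + (1 | Item),
                data = dat, REML = FALSE)
anova(l.woInteraction, l.Aonly)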
>
>
> _______________________________________________
> R-lang mailing list
> R-lang at ling.ucsd.edu
> http://pidgin.ucsd.edu/mailman/listinfo/r-lang
>
>