[R-lang] trouble with mixed-model

Francisco Torreira ftorrei2 at uiuc.edu
Tue Aug 7 00:17:03 PDT 2007


Dear Roger,

Thanks for your comment. Yes, I understand that the p-values do not
refer to the probability of a Type I error. I think I will use the CIs
as they are for the moment. Regarding the TukeyHSD procedure, I can
only tell you that it is more powerful than the Bonferroni correction
when you plan to make many or all possible pairwise comparisons of
means. However, I don't think it can be applied to the means obtained
from the MCMC simulation. I would also like to learn more about this
topic.
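One way to get Tukey-style all-pairwise comparisons directly from the
mixed model (rather than from the MCMC means) might be multcomp's
glht(), which accepts lmer fits. A rough sketch, where the response
variable 'y' and the data frame 'dat' are placeholders I am assuming,
and 'type' and 'spk' are the predictor and grouping factor from the
earlier messages:

```r
## Sketch: all-pairwise comparisons of the levels of 'type' in a
## mixed model, with a single-step (Tukey-like) multiplicity
## adjustment.  'y' and 'dat' are assumed names, not from the thread.
library(lme4)
library(multcomp)

m <- lmer(y ~ type + (1 | spk), data = dat)

## Adjusted tests for every pair of 'type' levels:
summary(glht(m, linfct = mcp(type = "Tukey")))

## Simultaneous confidence intervals for the same contrasts:
confint(glht(m, linfct = mcp(type = "Tukey")))
```

These intervals are based on the estimated fixed effects and their
covariance, so they are classical rather than MCMC-based.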

Thanks,
Francisco

On 8/6/07, Roger Levy <rlevy at ucsd.edu> wrote:
> Francisco Torreira wrote:
> > Dear Roger and Florian,
> >
> > Thanks so much for your comments. A model with random slopes but no
> > random intercepts (e.g. (0+type|spk)) also seems to lead to
> > singularity. As I said in my previous message, this also happens for
> > a model with both a random intercept and slope (e.g. (1+type|spk)). I
> > understand Roger's suggestion to merge levels 'e' and 'i'. However, if
> > I am fitting the model, it's precisely to compare the level means :-)
> >
> > I have therefore fitted a model with a random intercept and calculated
> > the CI for the level means using Baayen's pvals.fnc().
> > I suppose that the CIs obtained this way are not equivalent to those
> > obtained with post-hoc comparison procedures (e.g. TukeyHSD). Does
> > anyone have an idea how to do this with a mixed model?
>
> Dear Francisco,
>
> I don't really know much about the Tukey HSD procedure (can you suggest
> a reference?), but the Bonferroni correction, for example, could be
> applied to the t-test-based p-values returned by pvals.fnc().  The
> MCMC-based intervals are Bayesian credible intervals and
> thus represent the model's posterior beliefs about the likely parameter
> values, not degree of unlikeliness of seeing the data under the null
> hypothesis.  As such, they don't seem very philosophically compatible
> with something like the Bonferroni correction, which at heart asks the
> question "if my null hypothesis is really true, how many times would I
> expect to get at least one false positive if I conduct multiple tests?".
> On the other hand, in the simulated-data cases of the Baayen et al.
> paper, the MCMC-based p-values are generally pretty close to the
> p-values you'd want for a classical hypothesis test, so there's nothing
> to stop you in practice from applying the ordinary Bonferroni correction
> to the MCMC-derived p-values.
>
> Hope that is useful. I'd be curious to hear what other people have to
> say about this.
>
> Roger
>
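
For reference, Roger's suggestion of applying an ordinary Bonferroni
correction to the t-based p-values from pvals.fnc() could look roughly
like this. This is only a sketch: 'm' is an assumed lmer fit, and the
column name follows the languageR output:

```r
## Sketch: Bonferroni-correcting the t-based p-values that
## pvals.fnc() (languageR) reports for the fixed effects.
## 'm' is an assumed lmer fit.
library(languageR)

pv <- pvals.fnc(m)$fixed
p.raw <- as.numeric(pv[, "Pr(>|t|)"])

## Bonferroni multiplies each p-value by the number of tests
## (capped at 1); base R's p.adjust() does exactly this.
p.adjust(p.raw, method = "bonferroni")
```

The same p.adjust() call would apply equally well to the pMCMC column,
in the pragmatic spirit Roger describes.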


-- 
Francisco Torreira
PhD Candidate in Hispanic Linguistics
University of Illinois at Urbana-Champaign

https://netfiles.uiuc.edu/ftorrei2/www/index.html
tel: (+1) 217 - 778 8510

