[R-lang] analysis of raw RTs w/ glmer and inverse gaussian
João Veríssimo
jl.verissimo@gmail.com
Tue Feb 14 19:35:54 PST 2017
Dear all,
I've been trying to analyse response times following Lo &
Andrews' (2015) proposal here:
https://doi.org/10.3389/fpsyg.2015.01171
Specifically, they propose that raw RTs can be analysed without any
transformation, by using a GLMM that assumes a Gamma or inverse Gaussian
distribution.
For example:
glmer(rt ~ prime * form * group + scale(trial) + (1 | subject) + (1 | item),
      data = mydata, family = inverse.gaussian(link = "identity"))
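They also mention a Gamma alternative, which I take it would just swap
the family (a sketch with the same model structure):

glmer(rt ~ prime * form * group + scale(trial) + (1 | subject) + (1 | item),
      data = mydata, family = Gamma(link = "identity"))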
Has anyone tried this? Are there any disadvantages or issues that I
should be aware of? I have noted a few things that concern me (relative
to an lmer model on log RTs):
1. If using treatment contrasts, I get different z-values for
interactions, depending on the reference level.
For example, these are the 3-way interactions when using one of the
levels of "form" as the reference:
                             Estimate Std. Error z value Pr(>|z|)
primeType2:formInf:groupL2    -39.094     15.266   -2.56  0.01044 *
primeType3:formInf:groupL2    -37.020     15.495   -2.39  0.01689 *
And here are the same interactions when using the other level of "form"
as the reference:
                                Estimate Std. Error z value Pr(>|z|)
primeType2:formFinite:groupL2    39.0939    17.5759   2.224   0.0261 *
primeType3:formFinite:groupL2    37.0203    18.0101   2.056   0.0398 *
The estimates are exactly the same apart from the flipped sign (as
expected), but the SEs are larger in the second case.
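For concreteness, the second fit comes from releveling and refitting (a
sketch; it assumes "Inf" and "Finite" are the two levels of "form", as
in the coefficient names above, and that m is the fitted model from the
glmer call at the top):

# Make "Inf" the reference level (so "formFinite" terms appear) and refit
mydata$form <- relevel(mydata$form, ref = "Inf")
m2 <- update(m, data = mydata)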
2. The estimate for the Intercept is not close to the mean RT for the
reference condition (e.g., the mean of raw RTs for that condition is 891
ms, whereas the Intercept is 1020 ms). I imagine this has something to
do with assuming a skewed distribution, but the difference seems quite
large.
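This is how I am comparing the two numbers (a sketch; the level names
are placeholders for my actual reference conditions):

# Observed mean raw RT in the reference cell (placeholder level names)
with(subset(mydata, prime == "unrelated" & form == "Finite" & group == "L1"),
     mean(rt))                 # ~891 ms
# Fixed-effect intercept; with the identity link this is already in ms
fixef(m)["(Intercept)"]        # ~1020 ms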
3. Convergence is much more difficult, especially for the inverse
Gaussian, and required a number of modifications: a) removal of random
slopes, b) scaling of trial number rather than just centering it, c) use
of the bobyqa optimizer, and d) increasing the maximum number of
iterations to 20000 (see the call sketched below).
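Concretely, the fitting call now looks roughly like this (a sketch of
settings c) and d)):

library(lme4)
m <- glmer(rt ~ prime * form * group + scale(trial) +
             (1 | subject) + (1 | item),
           data = mydata, family = inverse.gaussian(link = "identity"),
           control = glmerControl(optimizer = "bobyqa",
                                  optCtrl = list(maxfun = 20000)))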
I'd be grateful for any opinions on this type of model, as well as for
explanations of these behaviours.
Thank you!
João