[R-lang] Random slopes in LME
Zhenguang Cai
s0782345@sms.ed.ac.uk
Sun Feb 13 10:17:00 PST 2011
Hi all,
I have a question concerning random slopes in mixed effects modeling.
So I ran a structural priming experiment with a 4-level variable (prime
type: A, B, C and D). The dependent variable is response construction
(DO dative vs. PO dative). The following is a summary of the results.
Prime       A     B     C     D
DOs        85    24    38    59
POs        82   144   128   109
% of DOs  .51   .14   .23   .35
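(Just to make the numbers concrete, the proportions follow directly from
the counts; here is a quick check in R using nothing beyond the table above:)

# Counts from the table above
DOs <- c(A = 85, B = 24, C = 38, D = 59)
POs <- c(A = 82, B = 144, C = 128, D = 109)

# Proportion of DO responses for each prime type
round(DOs / (DOs + POs), 2)
#    A    B    C    D
# 0.51 0.14 0.23 0.35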
I am interested in whether the different prime types induced different
amounts of priming, e.g., whether A led to more DO responses than B, C or D.
Initially, I ran LME analyses with random intercepts only. For instance,
I did the following to see whether there was a main effect of prime type.
fit.0 = lmer(Response ~ 1 + (1 | Subject) + (1 | Item), family = binomial)
fit.p = lmer(Response ~ Prime + (1 | Subject) + (1 | Item), family = binomial)
anova(fit.0, fit.p)
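For completeness, here is roughly what those calls look like with the
package loaded and an explicit data argument ("primedat" is just a
placeholder name for my data frame, which has columns Response, Prime,
Subject and Item; newer versions of lme4 fit binomial models with glmer
rather than lmer):

library(lme4)

# "primedat" is a placeholder for the data frame with columns
# Response, Prime, Subject and Item
fit.0 <- glmer(Response ~ 1 + (1 | Subject) + (1 | Item),
               data = primedat, family = binomial)
fit.p <- glmer(Response ~ Prime + (1 | Subject) + (1 | Item),
               data = primedat, family = binomial)

# Likelihood-ratio test for the main effect of Prime
anova(fit.0, fit.p)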
Then, I did pairwise comparisons by changing the reference level for
Prime, e.g.,
fit.p = lmer(Response ~ relevel(Prime, ref = "B") + (1 | Subject) + (1 | Item),
             family = binomial)
It seems that all the levels differed from each other. In particular,
the comparison between C and D gave Estimate = -1.02, SE = .32, z =
-3.21, p < .01.
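(If I'm not mistaken, the multcomp package gives all pairwise comparisons
in one go, with p-values adjusted for multiple testing; a sketch I have
not run on these data:)

library(multcomp)

# All pairwise (Tukey-style) contrasts among the four prime types
summary(glht(fit.p, linfct = mcp(Prime = "Tukey")))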
But it seems I have to consider whether the slope for Prime differs
across subjects or items (at least this is a requirement from JML). So
the following is how I tested whether a random slope should be included
in the model; I wonder whether I did the correct thing. I first
determined whether a by-subject random slope should be included by
comparing the following two models.
fit.p = lmer(Response ~ Prime + (1 | Subject) + (1 | Item), family = binomial)
fit.ps = lmer(Response ~ Prime + (Prime + 1 | Subject) + (1 | Item),
              family = binomial)
anova(fit.p, fit.ps)
I did the same thing for the by-item random slope.
fit.p = lmer(Response ~ Prime + (1 | Subject) + (1 | Item), family = binomial)
fit.pi = lmer(Response ~ Prime + (1 | Subject) + (Prime + 1 | Item),
              family = binomial)
anova(fit.p, fit.pi)
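With the placeholder data frame from above, the slope models would look
like this in current lme4. As far as I understand, (Prime + 1 | Subject)
is the same as (1 + Prime | Subject): a random intercept plus three random
slope terms per subject, i.e. a full 4 x 4 variance-covariance matrix (10
parameters instead of 1), which is why the likelihood-ratio test gains 9
degrees of freedom.

# By-subject random slope for Prime
fit.ps <- glmer(Response ~ Prime + (1 + Prime | Subject) + (1 | Item),
                data = primedat, family = binomial)
anova(fit.p, fit.ps)

# By-item random slope for Prime
fit.pi <- glmer(Response ~ Prime + (1 | Subject) + (1 + Prime | Item),
                data = primedat, family = binomial)
anova(fit.p, fit.pi)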
Adding the by-subject random slope significantly improved the model fit,
so I included it in the final model (i.e., fit.ps). But the pairwise
comparisons now return something different from my initial analyses
(without the random slope). That is, the comparison between C and D is
now only marginally significant (Estimate = -.85, SE = .47, z = -1.79,
p = .07). It is a bit strange, because the 9-point difference between B
and C turned out to be significant, but the 12-point difference between
C and D did not.
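To see where the change comes from, one thing I can do is put the
fixed-effect estimates and standard errors from the two models side by
side (fixef() and vcov() are the standard lme4 extractors); a rough sketch:

# Estimates and SEs: random-intercepts-only model vs. random-slope model
cbind(est.int = fixef(fit.p),  se.int = sqrt(diag(vcov(fit.p))),
      est.slp = fixef(fit.ps), se.slp = sqrt(diag(vcov(fit.ps))))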
Or did I do anything wrong in the analyses?
Thanks,
Garry