[R-lang] Re: Random slopes in LME

Levy, Roger rlevy@ucsd.edu
Sat Feb 19 13:02:10 PST 2011


Hi Gary,

On Feb 19, 2011, at 12:48 PM, Zhenguang Cai wrote:

> Hi Roger,
> 
> Thanks a lot for the comments. The following is a summary of the full 
> model. I did not include a by-item random slope, as it turned out not to 
> be significant. As you can see below, there was a highly significant 
> difference between b and c, but only a marginally significant difference 
> between c and d.
> 
> 
>> summary (fit.cs.c)
> Generalized linear mixed model fit by the Laplace approximation
> Formula: Response ~ relevel(Condition, ref = "c") + (Condition + 1 | 
> Subject) +      (1 | Item)
>   AIC   BIC logLik deviance
> 584.2 651.9 -277.1    554.2
> Random effects:
> Groups  Name        Variance Std.Dev. Corr
> Subject (Intercept) 2.62870  1.62133
>         Conditionb  5.67510  2.38225   0.534
>         Conditionc  1.53212  1.23779   0.032  0.862
>         Conditiond  0.82153  0.90638   0.987  0.389 -0.132
> Item    (Intercept) 0.81090  0.90050
> Number of obs: 672, groups: Subject, 28; Item, 24
> 
> Fixed effects:
>                               Estimate Std. Error z value Pr(>|z|)
> (Intercept)                      2.1952     0.5120   4.287 1.81e-05 ***
> relevel(Condition, ref = "c")a  -2.2338     0.4086  -5.467 4.58e-08 ***
> relevel(Condition, ref = "c")b   2.2251     0.5560   4.002 6.29e-05 ***
> relevel(Condition, ref = "c")d  -0.8504     0.4741  -1.794   0.0729 .
> ---
> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> 
> Correlation of Fixed Effects:
>                    (Intr) relvl(Cndtn,rf="c") rlvl(Cndtn,rf="c")b
> relvl(Cndtn,rf="c") -0.631
> rlvl(Cndtn,rf="c")b  0.269 -0.027
> rlvl(Cndtn,rf="c")d -0.331  0.678               0.138

I think it'll be easier to interpret what's going on if you apply relevel within the random effects part too, i.e. 

 Response ~ relevel(Condition, ref = "c") + (relevel(Condition, ref = "c") | Subject) + (1 | Item)
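
For instance (a sketch, not your exact call: it assumes the lme4 package and a hypothetical data frame `priming` with columns Response, Condition, Subject, and Item; note that in current versions of lme4, binomial models are fit with glmer() rather than lmer(..., family = binomial)):

```r
library(lme4)

## `priming` is a hypothetical data frame standing in for your actual data,
## with columns Response, Condition, Subject, and Item.
fit.cs.c2 <- glmer(
  Response ~ relevel(Condition, ref = "c") +
    (relevel(Condition, ref = "c") | Subject) + (1 | Item),
  data = priming, family = binomial
)
## The random-effects rows in the summary now correspond to the same
## releveled contrasts as the fixed effects, so they line up directly.
summary(fit.cs.c2)
```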



> My further question is: Can you determine a main effect by model 
> comparison when random slopes are included? For instance, with the 
> following models:
> 
> fit.0 = lmer (y ~ 1 + (1|subject)+(1|item))
> fit.x = lmer (y ~ x + (1|subject)+(1|item))
> fit.xs = lmer (y ~ x + (1+x|subject)+(1|item))
> 
> If I want to determine whether x has a significant effect, which of the 
> following should I do?
> 
> anova (fit.0, fit.x)

This works on the assumption that the intercepts-only random-effects structure is correct (which is often false).

> anova (fit.0, fit.xs)

This doesn't test only for a main effect of x: the two models differ in both the fixed effect of x and the by-subject random slope for x, so the likelihood-ratio test conflates the two. You'd probably want something like

> fit.0xs = lmer(y ~ 1 + (1+x|subject) + (1|item))
> anova(fit.0xs, fit.xs)

which one might describe as an "omnibus test" for the effect of x.
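
Spelled out a bit more (a sketch assuming lme4 and a hypothetical data frame `d` with columns y, x, subject, and item; for likelihood-ratio tests of fixed effects the models should be fit by maximum likelihood, i.e. REML = FALSE, though recent versions of anova() for merMod objects will refit with ML for you):

```r
library(lme4)

## Both models carry the full random-slope structure; they differ only in
## the fixed effect of x, so the likelihood-ratio test targets x itself.
fit.0xs <- lmer(y ~ 1 + (1 + x | subject) + (1 | item), data = d, REML = FALSE)
fit.xs  <- lmer(y ~ x + (1 + x | subject) + (1 | item), data = d, REML = FALSE)
anova(fit.0xs, fit.xs)  # chi-square LRT for the fixed effect of x
```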

Best

Roger

> 
> thanks,
> 
> garry
> 
> 
> On 2011-2-19 20:27, Levy, Roger wrote:
>> Hi Gary,
>> 
>> On Feb 13, 2011, at 10:17 AM, Zhenguang Cai wrote:
>> 
>>> Hi all,
>>> 
>>> I have a question concerning random slopes in mixed effects modeling.
>>> So I ran a structural priming experiment with a 4-level variable (prime
>>> type, A, B, C and D). The dependent variable is response construction
>>> (DO dative vs. PO dative).  The following is a summary of the experiment
>>> results.
>>> 
>>> Prime		A	B	C	D
>>> DOs		85	24	38	59
>>> POs		82	144	128	109
>>> % of DOs	.51	.14	.23	.35
>>> 
>>> I am interested in whether different prime types induced different
>>> priming, e.g., whether A led to more DO responses than B, C or D.
>>> Initially, I ran LME analyses with random intercepts only. For instance,
>>> I did the following to see whether there was a main effect of prime type.
>>> 
>>> fit.0 = lmer(Response~1+(1|Subject)+(1|Item), family=binomial)
>>> fit.p = lmer(Response~Prime+(1|Subject)+(1|Item), family=binomial)
>>> anova (fit.0, fit.p)
>>> 
>>> Then, I did pairwise comparison by changing the reference level for
>>> Prime, e.g.,
>>> 
>>> fit.p = lmer(Response~relevel(Prime,ref="B")+(1|Subject)+(1|Item),
>>> family=binomial)
>>> 
>>> It seems that all the levels differed from each other. In particular,
>>> the comparison between C and D results in Estimate = -1.02, SE = .32, Z
>>> = -3.21, p<  .01.
>>> 
>>> But it seems I have to consider whether the slope for Prime differs
>>> across subjects or item (at least this is a requirement from JML).
>> 
>> Let me emphasize something here: this requirement from JML is very sensible. Failing to include random slopes in an analysis of this type can lead to badly anti-conservative results when you are interested in recovering reliable fixed effects but the true generative process underlying your data has substantial variation across subjects and/or items in sensitivity to the different conditions.
>> 
>> 
>>> So
>>> the following is the way I considered whether a random slope should be
>>> included in a model. I wonder whether I did the correct thing. I first
>>> determined whether subject random slope should be included by comparing
>>> the following two models.
>>> 
>>> fit.p = lmer(Response~Prime+(1|Subject)+(1|Item), family=binomial)
>>> fit.ps = lmer(Response~Prime+(Prime+1|Subject)+(1|Item), family=binomial)
>>> anova (fit.p, fit.ps)
>>> 
>>> I did the same thing about item random slopes.
>>> 
>>> fit.p = lmer(Response~Prime+(1|Subject)+(1|Item), family=binomial)
>>> fit.pi = lmer(Response~Prime+(1|Subject)+(Prime+1|Item), family=binomial)
>>> anova (fit.p, fit.pi)
>>> 
>>> The subject random slope had a significant effect, so I included it in
>>> the final model (e.g., fit.ps). But pairwise comparison returned
>>> something that is different from my initial analyses (when random slope
>>> was not considered). That is, the comparison between C and D became only
>>> marginally significant (Estimate = -.85, SE = .47, z = -1.79, p = .07).
>>> It is a bit strange because the 9% difference between B and C turned out
>>> to be significant, but the 12% difference between C and D was not.
>>> 
>>> Or did I do anything wrong in the analyses?
>> 
>> Well, I don't think you've given quite enough information to know.  A couple of points to take into account:
>> 
>> 1) in logit space, 14%-23% is actually a slightly larger effect than 23%-35%:
>> 
>>> logit<- function(x) {
>> +   log(x/(1-x))
>> + }
>>> logit(c(0.14,0.23,0.35))
>> [1] -1.8152900 -1.2083112 -0.6190392
>> 
>> 2) significance levels ultimately reflect signal-to-noise ratio, not just effect size.  For example, larger variance across subjects in the B/C contrast than in the C/D contrast might lead to the pattern you describe.
>> 
>> Perhaps you might include the full output from your lmer() call with random by-subject (and perhaps by-item) slopes and setting C as the reference level?  That would help with the interpretation.
>> 
>> Best
>> 
>> Roger
>> 
>> 

--

Roger Levy                      Email: rlevy@ucsd.edu
Assistant Professor             Phone: 858-534-7219
Department of Linguistics       Fax:   858-534-4789
UC San Diego                    Web:   http://idiom.ucsd.edu/~rlevy

More information about the ling-r-lang-L mailing list