[R-lang] Re: Investigating random slope variance

Levy, Roger rlevy@ucsd.edu
Tue Apr 8 09:58:53 PDT 2014


On Apr 8, 2014, at 3:41 AM, Titus von der Malsburg <malsburg@posteo.de> wrote:

> 
> On 2014-04-07 Mon 23:16, Levy, Roger <rlevy@ucsd.edu> wrote:
>> On Apr 7, 2014, at 8:09 AM, Titus von der Malsburg <malsburg@posteo.de> wrote:
>>> 
>>>   http://users.ox.ac.uk/~sjoh3968/R/effect_of_shrinkage2.png
>> 
>> [...] Though I’m still surprised that there’s so much more shrinkage
>> in the y direction than in the x direction, despite the fact that the
>> random slope standard deviation is so much smaller than that of the
>> random intercept.

Correction: I just had this backwards in my head.  There’s more shrinkage in the y direction precisely *because* the random-slope standard deviation is so much smaller than the random-intercept standard deviation.
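For a concrete sense of why the smaller SD shrinks more: in the balanced one-way case the conditional mode (BLUP) multiplies each unit's raw deviation by tau^2 / (tau^2 + sigma^2 / n).  A base-R sketch with made-up numbers:

```r
## Shrinkage factor for a balanced one-way layout:
##   tau   = random-effect SD, sigma = residual SD, n = obs per unit.
## The numeric values below are purely illustrative.
shrink_factor <- function(tau, sigma, n) tau^2 / (tau^2 + sigma^2 / n)

sigma <- 0.4   # hypothetical residual SD
n     <- 24    # hypothetical observations per item

## Larger SD (like the random intercept): factor near 1, little shrinkage.
shrink_factor(tau = 0.30, sigma = sigma, n = n)  # ~0.93
## Smaller SD (like the random slope): factor well below 1, heavy shrinkage.
shrink_factor(tau = 0.05, sigma = sigma, n = n)  # ~0.27
```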

> Could this be due to the manipulation being between-subject?

Not obvious to me that this would have an effect, but I think the puzzle has solved itself.

>>> To remind you of the original question: I wanted to know which items are
>>> read significantly faster or slower in the manipulated condition.  Based
>>> on the BLUPs, these are items 25 and 36.  
>> 
>> So, hold on: are you interested in (i) for which items can you
>> conclude with (1-p)% confidence that the total effect of the
>> manipulation is significantly non-zero in a particular direction
>> [item-average slope + item-specific slope], or (ii) for which items
>> can you conclude that their idiosyncratic sensitivity to the
>> manipulation, above and beyond the population-average sensitivity, is
>> significantly non-zero in a particular direction [only item-specific
>> slope]?
> 
> Since there is no significant main effect of the manipulation, (i) and
> (ii) are the same.  Think of the experiment as a corpus study with an
> experimental manipulation.  This manipulation speeds up some items and
> slows down others.  Overall there is no slow-down or speed-up.

Ah, this is a good point, I’d forgotten that you’d taken the fixed effect out of your mixed-effects model!

But the item-average effect of the manipulation is still in your fixed-effects model.  You might want to reparameterize that as

  log(tt) ~ item / cond + (1 | subj)

for full comparability.
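To see what the nested term buys you, here is a small base-R illustration (toy factors, not your actual data): item/cond expands to item + item:cond, i.e. one intercept and one condition effect per item, which is the fixed-effects analogue of a by-item random slope.

```r
## Base-R illustration of the nested fixed-effects term item/cond.
## Toy design: 3 items crossed with 2 conditions.
d <- expand.grid(item = factor(1:3), cond = factor(c("a", "b")))
colnames(model.matrix(~ item/cond, d))
## "(Intercept)" "item2" "item3" "item1:condb" "item2:condb" "item3:condb"
```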

>> Your concern about the correlation coefficient in the random-effects
>> covariance matrix seems reasonable to me.  I don’t know how the new
>> lme4’s ranef() function extracts confidence intervals, but if it does
>> so conditional on the point estimate of the random-effects covariance
>> matrix, then the standard caveat applies that this kind of approach
>> fails to take into account uncertainty in that covariance matrix.  For
>> this reason you might want to consider Bayesian inferential
>> methods.  My textbook-in-progress has some examples of how you can set
>> these up in JAGS, I think Shravan has pedagogical materials now that
>> show how to do this in Stan, and there are R packages that may be
>> useful for this more directly (e.g., MCMCglmm).  That way you can
>> inspect the posterior on the correlation parameter.
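(As a concrete starting point, here is a minimal MCMCglmm sketch with a by-item random slope for the manipulation.  The variable names log_tt, cond, item, subj are hypothetical stand-ins for your data, and the prior and iteration settings are illustrative, not recommendations.)

```r
## Hedged sketch: a model whose posterior lets you inspect the
## item-level slope-intercept correlation directly.
library(MCMCglmm)

prior <- list(
  R = list(V = 1, nu = 0.002),               # residual variance
  G = list(G1 = list(V = diag(2), nu = 2),   # 2x2 item covariance matrix
           G2 = list(V = 1, nu = 0.002))     # subject intercept variance
)

m <- MCMCglmm(log_tt ~ cond,
              random = ~ us(1 + cond):item + subj,
              data   = d, prior = prior,
              nitt = 13000, burnin = 3000, thin = 10)

## m$VCV holds posterior samples of the (co)variance components; the
## item-level correlation is the covariance column divided by the square
## root of the product of the two variance columns (exact column names
## depend on your factor coding).
```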
> 
> That sounds like the way to go.  (Well, actually I should collect more
> data but that's unfortunately not easily possible in this project.)  So
> it's finally time to learn JAGS or Stan.  At least I can call it
> "work" now instead of procrastination.
> 
> Thank you all for your responses.  I learned something.

Glad to hear this!

Best

Roger



