[R-lang] mixed logit models, coding the effects and understanding the parameters

T. Florian Jaeger tiflo at csli.stanford.edu
Wed Apr 8 16:57:23 PDT 2009


Hey Maria,

I suspect you removed some outliers and now no longer have a balanced data
set? As David was saying, if your data set is not balanced, sum (a.k.a.
contrast) coding does *not* center your categorical predictors. (You can
center them yourself if you want to.) That is probably the reason for the
mismatch. If so, then the intercept-only model should give you the expected
estimate *unless* the random effects do not actually sum to zero. That does
happen (and then they essentially contain part of what you would expect to
be in the intercept). It's a good idea to check the distribution of the
random effects anyway.
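To make the centering point concrete, here is a small base-R sketch; the 872/302 counts come from Maria's message quoted below, while the four-element predictor vector is made up for illustration:

```r
## Sum coding alone does not center an unbalanced predictor.
x <- c(-0.5, -0.5, -0.5, 0.5)  # made-up unbalanced sum-coded predictor
mean(x)                        # -0.25, not 0
xc <- x - mean(x)              # centering it by hand
mean(xc)                       # 0 (up to floating point)

## The raw log-odds Maria expects the intercept to reflect:
log(872 / 302)                 # about 1.06

## After fitting with lme4, also eyeball e.g. ranef(m)$subject: if its
## values are not roughly centered on 0, part of what you would expect
## to be the intercept is sitting in the random effects.
```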

Unrelated to your problem, have you tried including random slopes for the
two main effects? Seems like a good idea given your data.
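A sketch of what adding those slopes could look like, using the variable names from the glmer formula quoted below; only the formula object is built here, since actually fitting it would require lme4 and the verbdiff data:

```r
## Formula with by-subject and by-item random slopes for both main
## effects (sketch only; the fit is commented out because it needs
## lme4 and the original data).
f <- poresp ~ primec * nounrepc +
  (1 + primec + nounrepc | subject) +
  (1 + primec + nounrepc | item)
## m <- lme4::glmer(f, data = verbdiff, family = binomial)
```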

Finally, just out of curiosity, are you looking at whether repeated nouns
between prime and target affect priming? You may find Neal Snider's work
interesting in that case. He has looked at how overall prime-target
similarity affects the strength of priming. (he found an effect, but his
study is more general than noun identity; btw, I recall that he once told me
that noun repetition alone did not reach significance).

Florian

On Wed, Apr 8, 2009 at 6:54 PM, David Reitter <reitter at cmu.edu> wrote:

> Hi Maria,
>
> good to hear from you. Just briefly for lack of time:
>
> On Apr 7, 2009, at 5:28 AM, Maria Carminati wrote:
>
>> Generalized linear mixed model fit using Laplace
>> Formula: poresp ~ primec * nounrepc + (1 | subject) + (1 | item)
>>  Data: verbdiff
>>
>
>> THERE WERE OVERALL 872 SUCCESSES AND 302 FAILURES IN THE EXPT, SO ODDS
>> SHOULD BE 872/302=2.88 or (in probability space) .74/.26 = 2.85;
>> THIS SHOULD GIVE A LOG OF ODDS OF APPROX 1.05, BUT THE INTERCEPT
>> PREDICTED BY THE MODEL  IS MUCH HIGHER (1.66)
>>
>
> You have a random intercept for subjects (and one for items) fitted
> there...
> I would fit a fixed effects model and check that first.  I'm not sure if,
> given the groups defined for your random terms, all data points are weighted
> equally (as they are in your max likelihood probability above).
> (Finally, by coding your binary factors as -0.5/0.5, you don't necessarily
> center the means at 0 - unless your design is balanced, which I rather
> suspect it isn't.  If their means aren't 0, you wouldn't expect the fitted
> intercept to work out the way you're suggesting.)
>
> Also, what happens if you take the non-significant terms out?
>
> > primec:nounrepc  -0.2138     0.3224  -0.663    0.507
>
> Pity this one didn't work.  Were these low-frequency nouns?  Unless your
> design controlled their frequency, you could try adding terms for the noun
> log-frequency (from a corpus)...
>
>
> Best
> - David
>
> --
> Dr. David Reitter
> Department of Psychology
> Carnegie Mellon University
> http://www.david-reitter.com
>
>
> _______________________________________________
> R-lang mailing list
> R-lang at ling.ucsd.edu
> http://pidgin.ucsd.edu/mailman/listinfo/r-lang
>
>
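David's "fit a fixed effects model and check that first" can be sketched with base R's glm(). Since the verbdiff data are not part of this thread, the data frame below is a simulated stand-in that only reproduces the overall 872/302 success/failure split (so its slope coefficients mean nothing; only the intercept check is of interest):

```r
## Simulated stand-in data: 872 successes, 302 failures, predictors
## assigned at random (hypothetical, not Maria's actual verbdiff).
set.seed(1)
n <- 872 + 302
verbdiff_sim <- data.frame(
  poresp   = rep(c(1, 0), times = c(872, 302)),
  primec   = sample(c(-0.5, 0.5), n, replace = TRUE),
  nounrepc = sample(c(-0.5, 0.5), n, replace = TRUE)
)

## Fixed-effects-only check: no random terms, just glm().
m0 <- glm(poresp ~ primec * nounrepc, data = verbdiff_sim, family = binomial)

## With null predictors, the intercept should land near the raw
## log-odds log(872/302), i.e. about 1.06.
coef(m0)[["(Intercept)"]]
```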