Hi everyone,

I have been using weighted empirical logit linear regression (Barr, 2008) to analyze data from a number of agreement error production experiments. (As a side note, I have run into lots of problems trying to use logit mixed models for these data, because errors are extremely rare: certain conditions produce essentially no errors, and the remaining conditions rarely exceed 15% error rates. If anyone has a better solution than the empirical logit, please let me know!)
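For concreteness, this is roughly how I compute the transform and the weights (a minimal sketch; d, errors, and n are placeholder names for my per-cell error counts and trial counts):

# Empirical logit and its approximate variance (Barr, 2008)
d$emp.logit <- log((d$errors + 0.5) / (d$n - d$errors + 0.5))
v <- 1 / (d$errors + 0.5) + 1 / (d$n - d$errors + 0.5)
# lmer() treats weights as precision (inverse-variance) weights, hence the reciprocal
d$wts <- 1 / v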
That said, I am running what is essentially a meta-analysis. I have data from 5 experiments and 104 different items (some of which appear in multiple experiments, while others appear in only a single experiment). My model has two continuous predictors and two random effects (experiment and item):
lmer(emp.logit ~ IV1 + IV2 + (1|item) + (1|exp), data = d, weights = wts)

When I run the model, my estimates, standard errors, and t-values all appear reasonable (i.e., comparable to other single-random-effect models I have run with this technique on similar data). There is no collinearity or anything else to suggest that something is wrong. But when I use pvals.fnc() to compute CIs and p-values for the estimates, I find that the experiment random effect has a std. dev. of 0.0000 (5.0e-11, to be exact), and this seems to inflate the CI of the intercept estimate (t = 17, but it is only marginally significant by the MCMC p-values). If I run the same model excluding the experiment random effect, the estimates do not change, and the CIs and p-values for the intercept appear normal. Strangely (or maybe not), the two models have exactly the same log likelihoods.
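To make the side-by-side comparison concrete, this is essentially what I ran (names are simplified placeholders, continuing the sketch above):

library(lme4)
library(languageR)

# Full model vs. the same model without the experiment random effect
m.full  <- lmer(emp.logit ~ IV1 + IV2 + (1|item) + (1|exp), data = d, weights = wts)
m.noexp <- lmer(emp.logit ~ IV1 + IV2 + (1|item), data = d, weights = wts)

logLik(m.full)    # identical to logLik(m.noexp) in my data
logLik(m.noexp)

# MCMC-based CIs and p-values from languageR: the intercept CI is much
# wider for m.full, even though the exp std. dev. comes out at ~5e-11
pvals.fnc(m.full)$fixed
pvals.fnc(m.noexp)$fixed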
Is this just an extreme example of a random effect not being necessary?

And, on the more conceptual end of things, why would a near-zero std. dev. of a random effect inflate CIs under MCMC sampling?

Thanks in advance,
Maureen Gillespie
Northeastern University