[R-lang] lmer: unexpected weights and p-values

ozge gurcanli gurcanli@cogsci.jhu.edu
Fri May 28 12:13:28 PDT 2010


Dear R-lang-ers

I have been using the lmer function (from the lme4 package) recently (thanks to guidance from this list) to analyze my data. It works well for one of my data sets: the results are very similar to what I get from the bayesglm function. However, in my other data set I get very unexpected estimates and p-values, which are totally different from the bayesglm results. I was wondering whether you could help me with this problem.

Let me summarize my data. I am looking at the linguistic responses given to a set of movies that involve spatial relations. In particular, I look at the NP type: distinct NPs (two separate NPs) vs. a conjoint NP. I ask whether the choice of NP type changes as a function of two scene properties, A and B. It is not a full 2x2 factorial design; the distribution of movies in a stimulus set is given below.

        A1    A2
B1       6     8
B2      10    (empty)

This is how I code the variables:

Response variable NP: 1 vs. 0
Fixed factor A: A1 = 1, A2 = 0
Fixed factor B: B1 = 1, B2 = 0
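
In R, this coding amounts to 0/1 indicator columns; a minimal sketch (the rows here are invented purely for illustration, not my actual data):

```r
## Hypothetical rows illustrating the coding scheme:
## NP: 1 vs. 0; A: A1 = 1, A2 = 0; B: B1 = 1, B2 = 0
d <- data.frame(
  subject = c(1, 1, 2),
  NP      = c(1, 0, 1),
  A       = c(1, 1, 0),
  B       = c(0, 0, 1)
)
str(d)  # all three indicator columns are numeric 0/1
```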

This is what the data looks like:

  subject NP A B
1       9  0 1 0
2      12  0 1 0
3       7  0 1 1
4       7  1 0 0
5       5  1 1 0
6       1  1 1 0

This is the command I use:

lmer(NP ~ A + B + (1|subject), family = binomial, data = rg3)
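
(One possibility worth checking, given the near-zero subject variance and the huge standard errors in the output below, is quasi-complete separation: if the B1 items almost always get the same response, B predicts the outcome perfectly and the coefficient diverges. A self-contained toy sketch, with data invented to mimic the three-cell design rather than my actual data, reproduces the symptom even in plain glm():)

```r
## Toy data mimicking the design: three non-empty cells of roughly equal size;
## here every B == 1 trial gets NP == 1, so B separates the response completely.
set.seed(42)
d <- data.frame(
  A = rep(c(1, 1, 0), times = 96),
  B = rep(c(1, 0, 0), times = 96)
)
d$NP <- ifelse(d$B == 1, 1, rbinom(nrow(d), size = 1, prob = 0.5))

## glm() warns that fitted probabilities of 0 or 1 occurred, and the
## coefficient and standard error for B blow up, much like the lmer fit.
fit <- glm(NP ~ A + B, family = binomial, data = d)
coef(summary(fit))["B", ]
```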

This is what I get. The p-values are 1:

Generalized linear mixed model fit by the Laplace approximation
Formula: NP ~ A + B + (1 | subject)
    Data: rg3
    AIC   BIC logLik deviance
  140.7 155.4 -66.35    132.7
Random effects:
  Groups  Name        Variance   Std.Dev.
  subject (Intercept) 1.8494e-20 1.3599e-10
Number of obs: 288, groups: subject, 12

Fixed effects:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)  1.252e-01  4.357e+03 2.9e-05    1.000
A           -1.473e-07  4.357e+03   0.000    1.000
B            2.044e+01  3.445e+03   0.006    0.995

Correlation of Fixed Effects:
  (Intr) A
A -1.000
B -0.791  0.791


And this is what I get from bayesglm, which does a good job of predicting the actual distribution:

Call:
bayesglm(formula = NP ~ A + B, family = binomial, data = rg3)

Deviance Residuals:
      Min        1Q    Median        3Q       Max
-1.23404   0.06540   0.06540   0.08486   1.12181

Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)   0.6539     1.6882    0.387 0.698521
A            -0.5217     1.6776   -0.311 0.755840
B             5.4927     1.5151    3.625 0.000289 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

     Null deviance: 249.64  on 287  degrees of freedom
Residual deviance: 133.74  on 285  degrees of freedom
AIC: 139.74

Number of Fisher Scoring iterations: 17




Also, the log-likelihoods of the two models are almost identical:

lmer: 'log Lik.' -66.36; bayesglm: 'log Lik.' -66.35
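
(For 0/1 responses the saturated log-likelihood is zero, so the deviance printed in the lmer output is just -2 × the log-likelihood; the reported value can be checked directly:)

```r
## Recover the lmer log-likelihood from its reported deviance:
## for Bernoulli models, deviance = -2 * logLik.
deviance_lmer <- 132.7        # "deviance" line in the lmer output above
loglik_lmer <- -deviance_lmer / 2
loglik_lmer
#> [1] -66.35
```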



One thing that comes to mind is the possibility of individual differences. However, I have ruled this possibility out: I checked the individual responses one by one, and participants behave very similarly.

The way the lmer model behaves above makes me think there is a bug. Do you know how to correct this problem? Or do you think I should change the way I code the variables?

Thanks in advance

Oezge G.



