[R-lang] Re: dealing with failure to converge in an lmer

Levy, Roger rlevy@ucsd.edu
Wed Nov 28 23:35:04 PST 2012


On Nov 28, 2012, at 11:31 PM PST, Levy, Roger wrote:

> Hi Holger,
> 
> This is a very belated response to your query, but I have been thinking about this very good question off and on since you posted it.  Given that your proximate goal here is to demonstrate the *absence* of an effect, the relevant question is one of statistical power: are there parameter settings under which a random-intercepts model is less powerful than a random-slopes model?  (Note that this is different from the question of whether either model is anti-conservative.)  I can think of one rather diabolical set of circumstances under which the random-intercepts model is indeed less powerful than the random-slopes model.  This can happen when you are doing an omnibus test for the effect of a factor with more than two levels.  Suppose that the within-subjects factor x has levels a, b, and c, with the following by-subjects random-slope variances and fixed effects (I use dummy coding here):
> 
>   level  variance  fixed effect
>   a      small     beta1
>   b      small     beta2
>   c      large     beta2
> 
> where beta2 != beta1.  Here, the random-slope model can in principle be more powerful than the random-intercept model, because it can pick up the heteroscedasticity of the by-subject variances and thus discriminate the overall difference between a and b.  (The difference between a and c will be hard to distinguish from chance, given the large by-subject variance in c.)  The random-intercept model can't do this, because it has to attribute the same across-subjects variance to all conditions (along, of course, with a correlation of 1 among conditions), which can wipe out its ability to detect the difference between a and b.
> 
> The following code illustrates an example of this type of anti-conservativity:

Clarification: I didn't mean to write "anti-conservativity" here -- rather, a difference in power.

> 
> library(lme4)
> library(mvtnorm)
> 
> ## 24 subjects; three-level within-subjects factor x; 2 measurements per cell
> n.subj <- 24
> dat <- expand.grid(subj=factor(paste("S",1:n.subj)),
>                    x=factor(c("a","b","c")),
>                    measurement=1:2)
> 
> ## by-subject covariance: variance 1 in conditions a and b, 100 in c
> Sigma <- diag(3)
> Sigma[3,3] <- 100
> 
> f <- function(seed) {
>   set.seed(seed)
>   b <- rmvnorm(n.subj,c(0,0,0),Sigma)  # one random effect per subject & condition
>   ## fixed effect of +1 in condition a only, plus random effect and trial noise
>   y <- with(dat,ifelse(x=="a",1,0) +
>             b[cbind(as.numeric(subj),as.numeric(x))] + rnorm(nrow(dat)))
>   ## likelihood-ratio test for x under random intercepts only
>   m <- lmer(y~x+(1|subj),dat)
>   m0 <- lmer(y~1+(1|subj),dat)
>   p.ri <- anova(m0,m)[["Pr(>Chisq)"]][2]
>   ## the same comparison under by-subject random slopes
>   m.rs <- lmer(y~x+(x|subj),dat)
>   m.rs0 <- lmer(y~1+(x|subj),dat)
>   p.rs <- anova(m.rs0,m.rs)[["Pr(>Chisq)"]][2]
>   c(ri=p.ri,rs=p.rs)
> }
> 
> ## 100 simulated datasets; proportion of p < .05 for each model type
> system.time(res <- sapply(1:100,f))
> apply(res,1,function(x) mean(x<0.05))
> 
> The results I get are that the random-intercept model detects the effect of x 37% of the time, while the random-slope model detects it 76% of the time.  (There is one singular-convergence warning, but it doesn't affect the overall picture.)
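> 
> (As a side note, one can see the heteroscedasticity that the random-slope model picks up by inspecting a single simulated dataset.  The following is just a sketch reusing the objects defined above; under dummy coding with a as the baseline, the implied per-condition by-subject variances can be recovered from the random-effect covariance matrix:
> 
> set.seed(1)
> b <- rmvnorm(n.subj,c(0,0,0),Sigma)
> y <- with(dat,ifelse(x=="a",1,0) +
>           b[cbind(as.numeric(subj),as.numeric(x))] + rnorm(nrow(dat)))
> m.rs <- lmer(y~x+(x|subj),dat)
> vc <- VarCorr(m.rs)$subj   # covariance of (Intercept), xb, xc
> ## implied by-subject variance in each condition:
> c(a = vc[1,1],
>   b = vc[1,1] + vc[2,2] + 2*vc[1,2],
>   c = vc[1,1] + vc[3,3] + 2*vc[1,3])
> 
> The estimate for condition c should come out far larger than for a and b.)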
> 
> Now, all this being said, this is not a demonstration that these circumstances could hold in a situation where the random-slope model is also unlikely to converge.  Additionally, this is a rather extreme set of circumstances, one that you could probably identify visually in a factorially designed experiment (look for big differences in by-subject and/or by-item variability across conditions; a quick check is sketched below).  Therefore, in practice it seems to me that your argument is probably fairly reasonable.
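> 
> (A quick way to eyeball that -- sketched here on the simulated data above rather than on real data -- is to compare the spread of by-subject condition means across conditions:
> 
> cell.means <- tapply(y, list(dat$subj, dat$x), mean)  # subjects x conditions
> apply(cell.means, 2, sd)   # across-subject SD per condition
> 
> One condition with a much larger SD than the others is the warning sign.)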
> 
> Best
> 
> Roger
> 
> On Oct 22, 2012, at 2:24 AM PDT, Holger Mitterer wrote:
> 
>> Dear list,
>> 
>> I am analyzing a data set with two multi-level factors (4 and 5 levels)
>> varying over subjects and (partly) over items. As the data set is not
>> completely balanced and the dependent variable is yes/no,
>> I am using an lmer with a logistic link function.
>> 
>> I ran into the problem of non-convergence for the maximal model
>> if I include the interaction in the random effects
>> (which is not that surprising, because the number of random slopes
>> to be estimated for a 4x5 design is quite large; see the sketch just below).
>> 
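>> (For concreteness, the kind of maximal specification I mean looks
>> roughly like this -- model.max is a placeholder name, written in the
>> same style as the models below, with family = "binomial" for the
>> logistic link:
>> 
>> model.max = lmer(response ~ factor1*factor2
>>                  + (factor1*factor2|subject) + (factor1*factor2|item),
>>                  family = "binomial", ...)
>> 
>> and this is the model that fails to converge.)
>> 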
>> So I tried the following to get at least a glimpse at the possibility
>> of a significant interaction: I ran two models (one with and one
>> without the interaction) and only random intercepts for subject and item:
>> 
>> model1 = lmer(response ~ factor1*factor2 + (1|subject) + (1|item), ...)
>> model2 = lmer(response ~ factor1+factor2 + (1|subject) + (1|item), ...)
>> anova(model1,model2)
>> 
>> The model comparison is far from significant. As this is an anti-conservative
>> test, I should be in a safe position to assume that there is no interaction and
>> go on to test the "main" effects of these factors. 
>> 
>> Would the users of the list agree?
>> 
>> Kind regards,
>> Holger
>> 
>> - - - - - - - - - - - - - - - - - - - - - - - - 
>> Holger Mitterer, Ph.D.
>> Max-Planck-Institut für Psycholinguistik
>> Postbus 310
>> 6500 AH Nijmegen
>> The Netherlands
>> Phone: (+31) (0)24 - 3521375
>> Fax: (+31) (0)24 - 3521213
>> - - - - - - - - - - - - - - - - - - - - - - - -