[R-lang] comparing regression coefficients

James S. Adelman J.S.Adelman at warwick.ac.uk
Fri Sep 7 06:09:14 PDT 2007


On Fri, Sep 07, 2007 at 08:58:32PM +0800, Lngmyers wrote:
> 
> All this talk about p-values for LME and MCMC reminds me of an old 
> question. To compare the sizes of the coefficients within a single 
> ordinary linear regression model, we can standardize them (by 
> multiplying each by sd(x)/sd(y)) and look at the difference in their 
> sizes. But we're not allowed to test whether this difference is 
> statistically significant. I don't know enough math to know why not.
> 
> Why couldn't we test the null hypothesis by resampling? Compute the 
> standardized regression coefficients for each new sample, and count how 
> many samples show a difference at least as large as the difference for 
> the actual data.
> 
> Is there any literature on this? Any a priori objections?
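
One way to make the resampling idea concrete is to bootstrap the
difference of the standardized coefficients and check whether a
percentile interval excludes zero.  A rough sketch, with simulated data
(everything below is illustrative, not your actual variables):

    ## Illustrative only: simulated data with a known difference in slopes.
    set.seed(1)
    n <- 200
    x <- rnorm(n)
    y <- rnorm(n)
    z <- 0.5 * x + 0.3 * y + rnorm(n)
    d <- data.frame(z, x, y)

    ## Difference of standardized coefficients, beta_x - beta_y.
    std_diff <- function(dat) {
      b <- coef(lm(z ~ x + y, data = dat))
      unname(b["x"] * sd(dat$x) / sd(dat$z) - b["y"] * sd(dat$y) / sd(dat$z))
    }

    obs  <- std_diff(d)                  # observed standardized difference
    boot <- replicate(2000,
                      std_diff(d[sample(nrow(d), replace = TRUE), ]))
    quantile(boot, c(0.025, 0.975))      # does the interval cover zero?

One caveat: resampling cases from the observed data gives draws centred
near the observed difference, not draws under H0, so counting samples
with a difference at least as large as the observed one would not behave
like a p-value; the interval above, or the model comparison below, gets
at the question more directly.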

If I've understood your question correctly, you are asking about a linear
regression model with response, say z, and two predictors x and y:

    z = a + m*x + n*y + error

and you wish to know whether H0: m = n.  If so,
anova(lm(z ~ x + y), lm(z ~ I(x + y))) should be valid under the usual
conditions (a nested-model F test; see the sketch below).
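
A minimal, self-contained sketch of that comparison (simulated data,
illustrative names):

    ## The restricted model forces the two slopes to be equal.
    set.seed(1)
    x <- rnorm(200); y <- rnorm(200)
    z <- 0.5 * x + 0.3 * y + rnorm(200)

    full       <- lm(z ~ x + y)       # separate slopes m and n
    restricted <- lm(z ~ I(x + y))    # common slope, i.e. m = n imposed
    anova(restricted, full)           # F test of H0: m = n

Note that this tests equality of the raw slopes; if the standardized
comparison is what you are after, you would rescale x and y first
(e.g. with scale()).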
-- 
James S. Adelman,
Department of Psychology,    
University of Warwick,   
COVENTRY,         ()  ascii ribbon campaign - against html e-mail  
CV4 7AL.          /\  www.asciiribbon.org   - against proprietary attachments

