[R-lang] Re: Grammaticality judgments

Ambridge, Ben Ben.Ambridge@liverpool.ac.uk
Sat Oct 16 05:03:17 PDT 2010


Certainly - here's an example:

Pretty much all theories of the acquisition of the English past tense predict that the more phonologically similar a novel verb is to a class of existing irregulars, the greater the acceptability of a novel irregular past-tense form (e.g., spling --> splung). I collected acceptability judgments on a 5-point scale and analysed the data with lmer, including verb and participant as random effects. The prediction was confirmed, replicating the findings of similar production tasks (e.g., Albright and Hayes, 2003). So whatever the in-principle problems, it seems clear that the Likert-scale rating task was measuring something sensible.
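In lme4 terms, that analysis looks roughly like the sketch below. The data frame and column names here are hypothetical (and the data simulated) - they just stand in for a set with one row per participant-by-verb judgment:

```r
library(lme4)

# Hypothetical stand-in for the real judgment data:
# one row per participant-by-verb trial.
set.seed(1)
judgments <- expand.grid(ppt = factor(1:20), verb = factor(1:24))
judgments$similarity <- rnorm(nrow(judgments))  # hypothetical similarity score
judgments$rating <- pmin(pmax(
  round(3 + judgments$similarity + rnorm(nrow(judgments))), 1), 5)

# Graded 1-5 ratings regressed on similarity, with random
# intercepts for verb and participant, as described above.
m <- lmer(rating ~ similarity + (1 | verb) + (1 | ppt), data = judgments)
summary(m)
```

A positive fixed effect of similarity on rating is what the prediction amounts to here.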

Perhaps that's a bad example, as it's arguably not exactly "grammaticality" being measured. Here's another: in several studies, I found that the higher the frequency of the verb, the lower the acceptability of overgeneralization errors (e.g., *The funny joke laughed/giggled/chortled the man). Again, this replicates the findings of production studies (e.g., Brooks, Tomasello, Dodson & Lewis, 1999).

I know many linguists think that grammaticality is a binary phenomenon, but it seems wrong to me to start out from that assumption. It's an empirical question: give participants the opportunity to provide graded judgments and see whether they take it, or just use the ends of the scale. Data from studies that take this approach suggest that grammaticality is a graded phenomenon. To maintain the binary-phenomenon view, one would have to argue that all these findings are spurious, caused by subjects rating something other than grammaticality.
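That empirical check is cheap to run. A minimal sketch, with a toy ratings vector standing in for the real judgment data: if grammaticality were binary, responses should pile up at the scale endpoints.

```r
# Toy 5-point ratings; substitute the real vector of judgments.
ratings <- c(1, 2, 2, 3, 3, 3, 4, 5, 5, 1)

table(ratings)              # counts per scale point
mean(ratings %in% c(1, 5))  # proportion of endpoint responses
```

A high endpoint proportion with empty intermediate points would favour the binary view; heavy use of 2-4 would not.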

-----Original Message-----
From: Daniel Ezra Johnson [mailto:danielezrajohnson@gmail.com] 
Sent: 16 October 2010 12:41
To: Ambridge, Ben
Cc: r-lang@ling.ucsd.edu; kylebgorman@gmail.com; Bill Haddican
Subject: Re: [R-lang] Grammaticality judgments

I'm glad that this discussion is continuing and becoming more fundamental.
You say your results are consistent with theoretical predictions,
but that the theory doesn't provide an account of what you've measured?
Perhaps a brief example of a result-theory pair would be helpful.
I'm coming from a naive perspective, somewhere between those who think the idea of
gradient grammaticality is self-evident and those who think it's quite silly.

Dan

> Whilst I accept all the previously-raised shortcomings of this method in
> principle, in practice, if a graded judgment task produces a pattern of
> judgments that is (a) entirely consistent with the predictions of relevant
> linguistic theories and (b) corroborated by findings from other paradigms
> (e.g., elicited production, spontaneous speech), I feel that we can be
> confident that the task is measuring something useful, even if we don't know
> precisely what that is. All my papers analyse the graded-judgment data using
> ANOVA or regression (lmer) and yielded a pattern of results that made sense
> in terms of the theories under investigation and the data obtained using
> other paradigms (and - from a pragmatic perspective - no reviewer or editor
> has ever objected to this analysis). Of course, a magnitude estimation task
> is preferable where this is possible, but my studies mainly focus on
> children, for whom a simpler task is required.
>
>
>
> I've also written a book chapter on the paradigm that I hope some may find
> interesting and/or useful. It - and the papers mentioned above - can be
> downloaded from
> http://pcwww.liv.ac.uk/~ambridge/Downloadable%20Publications.htm
>
>
>
> Ben Ambridge
>
> University of Liverpool
