<OT> New Posting: ROA-883
roa at ruccs.rutgers.edu
Sun Nov 19 21:50:10 PST 2006
ROA 883-1106
Biases and Stages in Phonological Acquisition
Anne-Michelle Tessier <amtessier at ualberta.ca>
Direct link: http://roa.rutgers.edu/view.php3?roa=883
Abstract:
This dissertation presents Error-Selective Learning (ESL), an
error-driven model of phonological acquisition in Optimality
Theory which is both restrictive and gradual. Together these
two properties provide a model that can derive many attested
intermediate stages in phonological development, and yet
also explain how learners eventually converge on the target
grammar.
Error-Selective Learning is restrictive because its ranking
algorithm is a version of Biased Constraint Demotion (BCD:
Prince and Tesar, 2004). BCD learners store their errors
in a table called the Support, and use ranking biases to
build the most restrictive ranking compatible with their
Support. The version of BCD adopted here has three such
biases: (i) one for high-ranking Markedness constraints
(Smolensky 1996); (ii) one for high-ranking OO-Faith
constraints (McCarthy 1998; Hayes 2004); and (iii) one for
ranking specific IO-Faith constraints above general ones
(Smith 2000; Hayes 2004).
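(To make the ranking procedure concrete, here is a minimal
Python sketch of a BCD-style pass over a Support whose rows
map each constraint to 'W' (prefers the winner), 'L'
(prefers the loser), or 'e' (neither). The encoding, the
four constraint types, and the tie-breaking are assumptions
of the sketch; in particular, full BCD selects a minimal set
of Faith constraints when no Markedness constraint is
rankable, which this sketch does not attempt.)

    # Illustrative sketch only: biased constraint demotion over a
    # Support of W/L/e rows.
    MARKEDNESS, OO_FAITH, IO_FAITH_SPECIFIC, IO_FAITH_GENERAL = 0, 1, 2, 3

    def bcd(constraints, con_type, support):
        """Return a stratified hierarchy (list of strata, highest first)."""
        remaining = list(constraints)
        rows = list(support)
        hierarchy = []
        while remaining:
            # A constraint is rankable if it prefers no loser in any
            # not-yet-explained row of the Support.
            rankable = [c for c in remaining
                        if all(row.get(c, 'e') != 'L' for row in rows)]
            if not rankable:
                raise ValueError("Support is inconsistent: no ranking exists")
            # Bias: install the most restrictive rankable type available
            # (Markedness, then OO-Faith, then specific IO-Faith before
            # general IO-Faith).
            best = min(con_type[c] for c in rankable)
            stratum = [c for c in rankable if con_type[c] == best]
            hierarchy.append(stratum)
            for c in stratum:
                remaining.remove(c)
            # Rows in which an installed constraint prefers the winner
            # are now explained and drop out.
            rows = [row for row in rows
                    if all(row.get(c, 'e') != 'W' for c in stratum)]
        return hierarchy

For example, a Support holding the single error
{'NoCoda': 'W', 'Max-IO': 'L'} yields the restrictive
ranking [['NoCoda'], ['Max-IO']], with Markedness on top.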
Error-Selective Learning is gradual because it uses a novel
mechanism for introducing errors into the Support. As errors
are made, they are not immediately used to learn new rankings
but are instead stored temporarily in an Error Cache. Learning
via BCD is triggered only once some constraint has caused
too many errors to be ignored. Once learning is triggered,
the learner chooses one best error in the Cache to add to
the Support -- an error that will cause minimal changes
to the current grammar.
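(Again as illustration only, the gradual loop might be
sketched as below, reusing bcd() from the sketch above. The
THRESHOLD value, the way an error is blamed on a constraint,
the cost() stand-in for "minimal changes to the current
grammar", and the flushing of the Cache after learning are
all assumptions of the sketch, not the dissertation's
definitions.)

    # Illustrative sketch only: an Error Cache in front of the Support.
    from collections import defaultdict

    THRESHOLD = 5  # hypothetical: how many errors a constraint may
                   # cause before learning is triggered

    def cost(grammar, row):
        # Crude stand-in for "minimal change": prefer errors that
        # demote fewer constraints (fewer 'L' marks in the row).
        return sum(1 for v in row.values() if v == 'L')

    class ErrorSelectiveLearner:
        def __init__(self, constraints, con_type):
            self.constraints = constraints
            self.con_type = con_type
            self.support = []               # permanent store of chosen errors
            self.cache = []                 # Error Cache: errors held back
            self.counts = defaultdict(int)  # errors blamed on each constraint
            self.grammar = bcd(constraints, con_type, self.support)

        def observe_error(self, row, blamed_constraint):
            """Cache the error; learn only once some constraint has
            caused too many errors to be ignored."""
            self.cache.append(row)
            self.counts[blamed_constraint] += 1
            if self.counts[blamed_constraint] >= THRESHOLD:
                self.learn()

        def learn(self):
            # Add the one best error in the Cache to the Support and
            # re-rank with BCD.
            best = min(self.cache, key=lambda r: cost(self.grammar, r))
            self.support.append(best)
            self.grammar = bcd(self.constraints, self.con_type, self.support)
            self.cache.clear()   # assumption: Cache is flushed here
            self.counts.clear()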
The first main chapter synthesizes the existing arguments
for this BCD algorithm, and emphasizes the necessity of
the Support's stored errors. The subsequent chapter presents
Error-Selective Learning, using cross-linguistic examples
of attested intermediate stages that can be accounted for
in this approach. The next chapter compares ESL to a well-known
alternative, the Gradual Learning Algorithm (GLA: Boersma,
1997, 1998; Boersma and Hayes, 2001), and argues that the
GLA is, on the whole, not well suited to learning
restrictively: it does not store its errors, and it cannot
reason from errors to rankings in the way that BCD does.
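(For contrast, one GLA-style update can be sketched in a few
lines: each constraint carries a numeric ranking value, every
error immediately nudges those values by a small plasticity
step, and the error itself is then discarded rather than
stored. Parameter values here are illustrative.)

    # Illustrative sketch of a single GLA update; rows use the same
    # W/L/e encoding as above.
    import random

    def gla_update(values, row, plasticity=0.1):
        """Promote winner-preferring constraints, demote
        loser-preferring ones, then forget the error."""
        for c, pref in row.items():
            if pref == 'W':
                values[c] += plasticity
            elif pref == 'L':
                values[c] -= plasticity

    def gla_ranking(values, noise=2.0):
        """Stochastic evaluation: rank constraints by value plus
        Gaussian noise."""
        return sorted(values,
                      key=lambda c: values[c] + random.gauss(0, noise),
                      reverse=True)

Because the update leaves no record of past errors, there is
nothing like the Support for such a learner to reason over,
which is the crux of the comparison drawn here.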
The final chapter presents an artificial language learning
experiment designed to test for high-ranking OO-Faith in
children's grammars; its results are consistent with the
biases and stages of Error-Selective Learning.
Comments:
Keywords: learnability, phonological acquisition, constraint demotion, learning algorithms, subset principle
Areas: Phonology, Learnability, Language Acquisition
Type: PhD Dissertation