<OT> New Posting: ROA-780

roa at ruccs.rutgers.edu
Mon Oct 24 17:11:28 PDT 2005


ROA 780-1005

Non-convergence in the GLA and variation in the CDA

Joe Pater <pater at linguist.umass.edu>

Direct link: http://roa.rutgers.edu/view.php3?roa=780


Abstract:
Given a linguistic theory with constraints/parameters/rules
of any generality, learners are faced with what Dresher
(1999) calls the credit problem. In terms of Optimality
Theory (OT: Prince and Smolensky 1993/2004), there is usually
more than one constraint that favors an optimal form over
any one of its competitors, and that the learner could rank
above the constraints preferring that competitor.
One of the most attractive properties of the constraint
demotion algorithm (CDA: Tesar and Smolensky 1998) is that
it finds a ranking that correctly deals with the learning
data (if one exists), without directly stipulating a solution
to the credit problem (cf. Dresher 1999). In this paper,
I show that an alternative algorithm for learnability in
OT, the Gradual Learning Algorithm (GLA: Boersma 1997, 1998,
Boersma and Hayes 2001), does not share this property: it
fails to converge when presented with a set of data that
has multiple interacting instantiations of the credit problem.
The advantage of the GLA over the CDA is that it handles
variation; in the second part of the paper, I sketch an
approach to the learning of variation that makes use of
the inconsistency detection properties of the CDA (Tesar 1998).
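
As a rough, hypothetical illustration of the GLA's update rule
(Boersma and Hayes 2001): each constraint carries a numeric ranking
value, evaluation adds noise to those values, and whenever the
grammar's output differs from the learning datum, constraints
preferring the datum are promoted and constraints preferring the
erroneous output are demoted by a small plasticity step. The
constraints and forms below are invented for illustration; this is a
minimal sketch, not the simulations reported in the paper.

    import random

    # Hypothetical constraint set; the ranking values are what the GLA adjusts.
    ranking = {"Max": 100.0, "Dep": 100.0, "NoCoda": 100.0}
    PLASTICITY = 0.1   # size of each promotion/demotion step
    NOISE_SD = 2.0     # evaluation noise of stochastic OT

    def harmony_order(ranking):
        """Sample a total order by adding Gaussian noise to each ranking value."""
        noisy = {c: v + random.gauss(0, NOISE_SD) for c, v in ranking.items()}
        return sorted(noisy, key=noisy.get, reverse=True)

    def optimal(candidates, order):
        """Standard OT evaluation: filter candidates constraint by constraint."""
        best = list(candidates)
        for c in order:
            fewest = min(cand["violations"].get(c, 0) for cand in best)
            best = [cand for cand in best if cand["violations"].get(c, 0) == fewest]
            if len(best) == 1:
                break
        return best[0]

    def gla_update(ranking, winner, loser):
        """Promote constraints preferring the datum (winner); demote those
        preferring the learner's erroneous output (loser)."""
        for c in ranking:
            w = winner["violations"].get(c, 0)
            l = loser["violations"].get(c, 0)
            if w < l:
                ranking[c] += PLASTICITY
            elif w > l:
                ranking[c] -= PLASTICITY

    # One invented learning datum: /pat/ surfaces faithfully as [pat], not [pa].
    candidates = [
        {"form": "pat", "violations": {"NoCoda": 1}},
        {"form": "pa",  "violations": {"Max": 1}},
    ]
    datum = candidates[0]

    for _ in range(1000):
        guess = optimal(candidates, harmony_order(ranking))
        if guess["form"] != datum["form"]:
            gla_update(ranking, datum, guess)

    print(ranking)   # Max should end up well above NoCoda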
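
The CDA side can be sketched in the same hypothetical terms, as
recursive constraint demotion over winner-loser pairs (in the spirit
of Tesar and Smolensky 1998): constraints that prefer no remaining
loser are installed in the current stratum, the pairs they account
for are discarded, and the procedure recurses. If no constraint can
be installed while pairs remain, the data are inconsistent, which is
the detection property the second part of the paper builds on. The
pairs below are again invented.

    def rcd(constraints, pairs):
        """Recursive Constraint Demotion over winner-loser pairs.
        Each pair is (winner_violations, loser_violations), both dicts from
        constraint name to violation count.  Returns a list of strata
        (highest first), or None if the pairs are inconsistent."""
        remaining = list(pairs)
        not_ranked = set(constraints)
        strata = []
        while remaining:
            # A constraint may be placed now only if it prefers no remaining loser.
            placeable = {c for c in not_ranked
                         if all(w.get(c, 0) <= l.get(c, 0) for w, l in remaining)}
            if not placeable:
                return None          # inconsistency detected: no ranking exists
            strata.append(placeable)
            not_ranked -= placeable
            # Discard pairs now accounted for by a placed winner-preferring constraint.
            remaining = [(w, l) for w, l in remaining
                         if not any(w.get(c, 0) < l.get(c, 0) for c in placeable)]
        if not_ranked:
            strata.append(not_ranked)  # constraints with no evidence go to the bottom
        return strata

    # Invented consistent data: Max must outrank NoCoda.
    print(rcd({"Max", "Dep", "NoCoda"},
              [({"NoCoda": 1}, {"Max": 1})]))
    # Invented inconsistent data, as variation would produce under a single
    # total ranking: no ranking resolves both pairs.
    print(rcd({"Max", "NoCoda"},
              [({"NoCoda": 1}, {"Max": 1}), ({"Max": 1}, {"NoCoda": 1})]))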

Comments: 
Keywords: learnability, gradual learning algorithm, constraint demotion algorithm, variation
Areas: Phonology, Learnability
Type: Squib


