<OT> New Posting: ROA-675

roa at ruccs.rutgers.edu
Fri Aug 20 10:26:41 PDT 2004


ROA 675-0804

Probabilistic Learning Algorithms and Optimality Theory

Frank Keller <keller at inf.ed.ac.uk>
Ash Asudeh <asudeh at csli.stanford.edu>

Direct link: http://roa.rutgers.edu/view.php3?roa=675


Abstract:
 This paper provides a critical assessment of the Gradual
Learning Algorithm (GLA) for probabilistic optimality-theoretic
grammars proposed by Boersma and Hayes (2001). After a short
introduction to the problem of grammar learning in OT, we
discuss the limitations of the standard solution to this
problem (the Constraint Demotion Algorithm by Tesar and
Smolensky (1998)), and outline how the GLA attempts to overcome
these limitations. We point out a number of serious shortcomings
with the GLA approach: (a) A methodological problem is that
the GLA has not been tested on unseen data, which is standard
practice in research on computational language learning.
(b) We provide counterexamples, i.e., data sets that the
GLA is not able to learn. Examples of this type actually
occur in experimental data that the GLA should be able to
model. This casts serious doubt on the correctness and convergence
of the GLA. (c) Essential algorithmic properties of the
GLA (correctness and convergence) have not been proven formally.
This makes it very hard to assess the validity of the algorithm.
(d) We argue that by modeling frequency distributions in
the grammar, the GLA conflates the notions of competence
and performance. This leads to serious conceptual problems,
as OT crucially relies on the competence/performance distinction.

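For readers who have not worked with the algorithm under discussion, the following is a minimal sketch (in Python) of the GLA's error-driven update step for stochastic OT as described by Boersma and Hayes (2001). The toy constraint names, candidate labels, and parameter values are illustrative assumptions, not material from the paper or the announcement.

import random

# Real-valued ranking values for each constraint (toy names, assumed for illustration).
ranking = {"Faith": 100.0, "Markedness": 100.0}
NOISE = 2.0        # standard deviation of the evaluation noise
PLASTICITY = 0.1   # size of each ranking adjustment

def evaluate(candidates, violations):
    # Draw a noisy selection point per constraint and order constraints by it.
    points = {c: r + random.gauss(0.0, NOISE) for c, r in ranking.items()}
    order = sorted(points, key=points.get, reverse=True)
    # Standard OT evaluation over the sampled hierarchy.
    best = list(candidates)
    for c in order:
        fewest = min(violations[cand][c] for cand in best)
        best = [cand for cand in best if violations[cand][c] == fewest]
        if len(best) == 1:
            break
    return best[0]

def gla_update(observed, violations):
    # Error-driven step: compare the learner's current output with the observed form.
    learner = evaluate(violations.keys(), violations)
    if learner == observed:
        return  # no error, no adjustment
    for c in ranking:
        if violations[learner][c] < violations[observed][c]:
            ranking[c] -= PLASTICITY   # demote constraints that favoured the wrong winner
        elif violations[learner][c] > violations[observed][c]:
            ranking[c] += PLASTICITY   # promote constraints that favour the observed form

In outline, repeatedly calling gla_update on forms sampled from the training data is how the GLA is intended to match observed output frequencies; the abstract's points (b) and (c) concern whether this update procedure is correct and converges.
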
Comments: In Linguistic Inquiry 33:2, 225-244, 2002.
Keywords: probabilistic learning, gradual learning algorithm, gradience, cumulativity
Areas: Syntax, Phonology, Computation, Learnability
Type: Journal Article
