<OT> New Posting: ROA-929

roa at ruccs.rutgers.edu
Mon Oct 8 07:26:44 PDT 2007


ROA 929-1007

The Benefits of Errors: Learning an OT Grammar with a Structured Candidate Set

Tamas Biro <birot at nytud.hu>

Direct link: http://roa.rutgers.edu/view.php3?roa=929


Abstract:
We compare three recent proposals adding a topology to OT:
McCarthy's Persistent OT, Smolensky's ICS and Biro's SA-OT.
(During the comparison, the idea of simulated annealing
is also presented in a simple way.) To test their learnability,
constraint rankings are learnt from SA-OT's output. The
errors in the output, being more than mere noise and corresponding
to performance errors, follow from the topology (by being
local optima). Thus, the learner has to reconstruct her
competence having access only to the teacher's performance,
which includes errors. In a pilot experiment with a toy
grammar, we employ Recursive Constraint Demotion (RCD) followed
by the Gradual Learning Algorithm (GLA).
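
For readers unfamiliar with the general idea, here is a toy Python sketch of a simulated-annealing walk over a candidate set equipped with a neighbourhood structure. It is not taken from the paper and is not Biro's SA-OT algorithm: the candidate names, the ring-shaped neighbourhood and the violation profiles are invented purely for illustration. Its only point is to show how such a walk can halt in a local optimum, which is the role performance errors play in the abstract above.

import math
import random

# Toy candidate set arranged on a ring; each candidate's neighbours are
# the adjacent candidates. This stands in for the "topology" on the
# candidate set.
CANDIDATES = ["cand_a", "cand_b", "cand_c", "cand_d"]
NEIGHBOURS = {cand: [CANDIDATES[(i - 1) % 4], CANDIDATES[(i + 1) % 4]]
              for i, cand in enumerate(CANDIDATES)}

# Invented violation profiles (constraint -> number of violations).
# Under the ranking C1 >> C2, cand_a is the global optimum and cand_c is
# a local optimum: both of cand_c's neighbours are worse than it.
VIOLATIONS = {
    "cand_a": {"C1": 0, "C2": 2},
    "cand_b": {"C1": 1, "C2": 0},
    "cand_c": {"C1": 0, "C2": 3},
    "cand_d": {"C1": 1, "C2": 1},
}

def energy(candidate, ranking, base=10):
    # Collapse the violation profile into one number by weighting
    # higher-ranked constraints exponentially more; this approximates
    # strict domination well enough for a toy example.
    return sum(VIOLATIONS[candidate][con] * base ** (len(ranking) - i)
               for i, con in enumerate(ranking))

def simulated_annealing(ranking, t_start=5.0, t_end=0.1, cooling=0.9):
    current = random.choice(CANDIDATES)
    t = t_start
    while t > t_end:
        neighbour = random.choice(NEIGHBOURS[current])
        delta = energy(neighbour, ranking) - energy(current, ranking)
        # Downhill moves are always taken; uphill moves are taken with a
        # probability that shrinks as the temperature drops.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = neighbour
        t *= cooling
    return current  # possibly a local optimum, i.e. a "performance error"

if __name__ == "__main__":
    outputs = [simulated_annealing(["C1", "C2"]) for _ in range(1000)]
    for cand in CANDIDATES:
        print(cand, outputs.count(cand))

Run repeatedly, the walk ends in cand_c a sizeable fraction of the time even though cand_a is the globally optimal form, so a learner observing the output stream sees systematic errors rather than mere noise; that is the situation the paper's learning experiment addresses.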


Also available in the ACL Anthology: http://acl.ldc.upenn.edu/W/W07/W07-0611.pdf

Comments: Published in: Proceedings of the Workshop on Cognitive Aspects of Computational Language Acquisition, pages 81–88, Prague, Czech Republic, June 2007. (Copyright: Association for Computational Linguistics.)
Keywords: simulated annealing, Persistent OT, ICS, SA-OT, learning, learnability, RCD, GLA, competence and performance
Areas: Computation,Learnability,Language Acquisition
Type: Conference Proceedings Chapter


