<OT> New Posting: ROA-767
roa at ruccs.rutgers.edu
Fri Aug 5 09:15:56 PDT 2005
ROA 767-0805
A Robbins-Monro type learning algorithm for an entropy maximizing version of stochastic Optimality Theory
Markus Fischer <fischer74 at onlinehome.de>
Direct link: http://roa.rutgers.edu/view.php3?roa=767
Abstract:
The object of the present work is the analysis of the
convergence behaviour of a learning algorithm for grammars
belonging to a special version -- the maximum entropy
version -- of stochastic Optimality Theory. Stochastic
Optimality Theory, like its deterministic predecessor,
Optimality Theory as introduced by Prince and Smolensky,
is a theory of universal grammar in the sense of generative
linguistics.
We give formal definitions of the basic notions of stochastic
Optimality Theory and introduce the learning problem as it
appears in this context. A now-popular version of stochastic
Optimality Theory, which we briefly discuss, is the one
introduced by Boersma. The maximum entropy version of
stochastic Optimality Theory is derived in great generality
from fundamental principles of information theory. The main
part of this work is dedicated to the analysis of a learning
algorithm proposed by Jaeger (2003) for maximum entropy
grammars. We show in which sense and under what conditions
the algorithm converges, and we make explicit the connections
with well-known procedures and classical results.
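
As a concrete illustration (not part of the posted abstract):
a maximum entropy grammar assigns each candidate y a probability
proportional to exp(-sum_i w_i * v_i(y)), where v_i(y) counts the
violations of constraint i and w_i is its weight, and Jaeger's
algorithm adjusts the weights by a stochastic gradient step of
Robbins-Monro type. A minimal Python sketch, with illustrative
function names and an assumed 1/t step-size schedule:

    import math
    import random

    def maxent_probs(weights, violations):
        # p(y) is proportional to exp(-sum_i w_i * v_i(y)),
        # normalized over the finite candidate set.
        scores = [math.exp(-sum(w * v for w, v in zip(weights, vs)))
                  for vs in violations]
        z = sum(scores)
        return [s / z for s in scores]

    def sga_step(weights, violations, observed, eta):
        # One Robbins-Monro update: sample a candidate from the
        # current grammar, then move each weight by eta times the
        # difference between the sampled and the observed
        # candidate's violation counts -- an unbiased estimate of
        # the gradient of the log-likelihood of the observed form.
        probs = maxent_probs(weights, violations)
        sampled = random.choices(range(len(violations)), weights=probs)[0]
        return [w + eta * (violations[sampled][i] - violations[observed][i])
                for i, w in enumerate(weights)]

    # Step sizes eta_t = c/t satisfy the classical Robbins-Monro
    # conditions: sum_t eta_t diverges while sum_t eta_t^2 converges.

Whether and in what sense iterates of this kind converge hinges
on exactly such conditions, which is the sort of question the
dissertation analyzes.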
Keywords: stochastic OT, probabilistic OT, learning algorithm, maximum entropy
Areas: Formal Analysis, Learnability, Computation
Type: Master's Dissertation