<html>
<body>
<div align="center"><font size=4><b>The UCSD Department of Cognitive
Science is pleased to announce a talk by<br><br>
<br>
</font><font size=6>Ian Fasel Ph.D.<br><br>
<br>
</font><h4><b>The University of Texas at Austin<br>
Department of Computer Sciences<br><br>
</b></h4><font face="arial" size=4><b>Wednesday, February 20, 2008 at
12pm<br>
Cognitive Science Building, room 003<br><br>
<br>
</font><font size=5>"How to Build a <i>Robot</i> Baby: Computational
Models of Development and Learning"<br><br>
</font></div>
In this talk, I describe a program of research for understanding learning
and development by building robots and virtual agents that must solve
many of the same real-world problems the brain faces. The ambition of the
research is to provide a detailed understanding of the
information-processing problems confronting the brain, and to develop
general, mathematically grounded techniques by which these problems might
be solved. In the process, we not only get useful, working systems that
advance the state of the art in machine learning and robotics, but also
gain new ways to test specific hypotheses about human learning and
development.<br><br>
The first part of the talk focuses on learning to detect objects in
real time with little or no external supervision. The main contribution
is a new machine learning technique called “Segmental Boltzmann Fields”
(SBFs), a general probabilistic framework for learning visual objects as
well as other kinds of “objects” in different sensory domains, which may
have extent in time instead of (or as well as) space.<br><br>
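The abstract leaves the SBF machinery at this level of generality, so the
sketch below is only a rough, hypothetical illustration of the broader
model family: a Boltzmann (Ising-style) random field over binary
object-versus-background labels, sampled with checkerboard Gibbs updates.
The function name, the coupling parameter, and the toy data are
illustrative assumptions, not details from the talk.<br>
<pre>
import numpy as np

def gibbs_segment(evidence, coupling=1.0, n_sweeps=50, seed=0):
    """Gibbs-sample a binary object/background labeling from a simple
    Boltzmann (Ising-style) random field.  Per-pixel `evidence` is the
    unary term; `coupling` rewards agreement between 4-neighbors.
    Checkerboard updates keep the sampler valid.  Illustrative of the
    model family only, not the exact SBF formulation from the talk."""
    rng = np.random.default_rng(seed)
    h, w = evidence.shape
    spins = np.where(evidence > 0, 1.0, -1.0)      # init from evidence
    color = np.add.outer(np.arange(h), np.arange(w)) % 2
    for _ in range(n_sweeps):
        for c in (0, 1):
            # sum of 4-neighbor spins, zero-padded at the image border
            nb = np.zeros_like(spins)
            nb[1:, :] += spins[:-1, :]
            nb[:-1, :] += spins[1:, :]
            nb[:, 1:] += spins[:, :-1]
            nb[:, :-1] += spins[:, 1:]
            # P(spin = +1 | neighbors) = sigmoid(2 * local field)
            p_on = 1.0 / (1.0 + np.exp(-2.0 * (evidence + coupling * nb)))
            flip = np.where(rng.random((h, w)) > p_on, -1.0, 1.0)
            spins = np.where(color == c, flip, spins)
    return spins > 0

# toy run: a noisy square of positive evidence on a negative background
ev = -0.5 * np.ones((20, 20))
ev[5:15, 5:15] = 0.5
ev += np.random.default_rng(1).normal(0.0, 0.8, ev.shape)
print(gibbs_segment(ev).sum(), "pixels labeled as object")
</pre>
The pairwise coupling is what pools evidence over contiguous regions
rather than judging each pixel independently, and the same kind of field
could in principle be run along a temporal axis for “objects” with extent
in time.<br><br>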
I will then describe an infant robot that, using simple auditory
contingencies as its only cue to whether the visual field probably does
or does not contain a caregiver, autonomously learns an accurate
“person” visual category from only a few minutes' worth of experience.
This suggests that an innate face concept is not necessary to explain
the neonatal preferences for sketch faces reported by Johnson et al.
(1991).<br><br>
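As a schematic of how an auditory contingency could serve as the sole
supervisory signal, the hypothetical pipeline below weakly labels frames
captured near a contingent response as “person present”, everything else
as absent, and trains an ordinary classifier on those noisy labels. The
Frame structure, feature vectors, time window, and logistic-regression
learner are all stand-ins, not the system described in the talk.<br>
<pre>
import numpy as np
from dataclasses import dataclass

@dataclass
class Frame:
    features: np.ndarray   # stand-in for whatever image features are used
    t: float               # capture time in seconds

def weak_labels(frames, response_times, window=2.0):
    """Mark a frame positive if it was captured within `window` seconds
    of a contingent auditory response (caregiver probably in view)."""
    rt = np.asarray(response_times, dtype=float)
    return np.array([int(np.any(window > np.abs(rt - f.t))) for f in frames])

def train_logistic(X, y, lr=0.1, steps=500):
    """Plain batch logistic regression on the noisy contingency labels."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # predicted P(person)
        g = p - y                                  # gradient of the log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# toy usage with random stand-in features
rng = np.random.default_rng(0)
frames = [Frame(rng.normal(size=8), t=float(i)) for i in range(100)]
y = weak_labels(frames, response_times=[3.0, 4.5, 40.0, 41.5])
w, b = train_logistic(np.stack([f.features for f in frames]), y)
print("frames weakly labeled person-present:", int(y.sum()))
</pre>
Because the contingency labels are only probably correct, the learner has
to tolerate label noise; any robust classifier could stand in for the
logistic regression used here.<br><br>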
Next, I will discuss recent work on learning a variety of other
perceptual skills, such as touch sensation and auditory mood detection,
on a number of different robots, including an android covered in
flexible skin sensors. Finally, I will look ahead to new questions in
development highlighted by recent and ongoing work, including learning
in the presence of a benevolent caregiver or teacher.<br>
</body>
</html>