[lingtalks] Ian Fasel Talk, Wed. Feb. 20th at 12pm

Steven Ford sford at cogsci.ucsd.edu
Tue Feb 19 15:01:47 PST 2008


The UCSD Department of Cognitive Science is pleased to announce a talk by


Ian Fasel, Ph.D.



The University of Texas at Austin
Department of Computer Sciences


Wednesday, February 20, 2008 at 12pm
Cognitive Science Building, room 003


"How to Build a Robot Baby: Computational Models of Development and Learning"

In this talk, I describe a program of research for understanding learning 
and development by building robots and virtual agents that must solve many 
of the same real-world problems the brain faces. The ambition of the 
research is to provide a detailed understanding of the information 
processing problems faced by the brain, and to develop general, 
mathematically grounded techniques by which these problems might be solved. 
In the process, not only do we get useful, working systems that advance the 
state of the art in machine learning and robotics, but we also gain new 
ways to test specific hypotheses about human learning and development.

The first part of the talk focuses on learning to detect objects in 
real time with little or no external supervision. The main contribution is 
a new machine learning technique called "Segmental Boltzmann Fields" 
(SBFs), a general probabilistic framework for learning both visual objects 
and other kinds of "objects" in sensory domains that may have extent in 
time instead of (or as well as) space. I will then describe an infant 
robot that, using simple auditory contingencies as the only cue to 
determine when the visual field probably contains or does not contain a 
caregiver, is able to autonomously learn an accurate "person" visual 
category from only a few minutes' worth of experience, suggesting that an 
innate face concept is not necessary to explain the results of Johnson et 
al. (1991), which showed neonatal preferences for sketch faces. Next, I 
will discuss recent work on learning a variety of other perceptual skills, 
such as touch sensation and auditory mood detection, on a number of 
different robots, including an android covered in flexible skin sensors. 
Finally, I will conclude by looking ahead to how we might answer some new 
questions in development, highlighted by recent and ongoing work, including 
learning in the presence of a benevolent caregiver or teacher.
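A minimal sketch of the contingency idea described in the abstract, under 
purely illustrative assumptions (this is not Dr. Fasel's actual system): 
treat "a response followed the robot's own vocalization" as a noisy label 
for "a person is probably in view," and fit an ordinary classifier on image 
features using only those weak labels. Every function name, threshold, and 
feature choice below is a placeholder assumption.

    # Hypothetical sketch: auditory contingency as a weak label for a
    # visual "person" detector. Illustration of the general idea only.
    import numpy as np

    rng = np.random.default_rng(0)

    def contingency_label(robot_vocalized, response_delay, window=1.5):
        """Weak label: assume a caregiver is probably present if the
        robot's own vocalization was followed by a response within a
        short time window."""
        return (robot_vocalized and response_delay is not None
                and response_delay < window)

    def image_features(frame):
        """Placeholder feature extractor (here, just flattened pixels)."""
        return frame.reshape(-1)

    def train_logistic(X, y, lr=0.1, steps=500):
        """Plain logistic regression trained with gradient descent."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(person)
            w -= lr * (X.T @ (p - y)) / len(y)
            b -= lr * np.mean(p - y)
        return w, b

    # Synthetic "experience": frames with a person present have a shifted
    # pixel statistic; the labels come only from the noisy audio
    # contingency, never from ground truth.
    frames, labels = [], []
    for _ in range(400):
        person_present = rng.random() < 0.5
        frame = rng.normal(1.0 if person_present else 0.0, 1.0, size=(8, 8))
        # A response usually, but not always, follows when someone is there.
        responded = rng.random() < (0.9 if person_present else 0.1)
        response_delay = rng.uniform(0.2, 1.0) if responded else None
        frames.append(image_features(frame))
        labels.append(float(contingency_label(True, response_delay)))

    w, b = train_logistic(np.array(frames), np.array(labels))

The design point the sketch tries to capture is that the classifier never 
sees a ground-truth "person" label; the only supervisory signal is the 
noisy auditory contingency.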