Abstract
We are pursuing the hypothesis that visual exploration and learning in young infants are achieved by producing gaze-sample sequences that are sequentially predictable. Our recent analysis of infants' gaze patterns during image free-viewing (Schlesinger & Amso, 2013) supports this idea. In particular, that work demonstrates that infants' gaze samples are more easily learnable than those produced by adults, as well as those produced by three artificial-observer models. In the current study, we extend these findings to a well-studied object-perception task by investigating 3-month-olds' gaze patterns as they view a moving, partially occluded object. We first use infants' gaze data from this task to produce a set of corresponding center-of-gaze (COG) sequences. Next, we generate two simulated sets of COG samples, from image-saliency and random-gaze models, respectively. Finally, we generate learnability estimates for the three sets of COG samples by presenting each as a training set to a simple recurrent network (SRN). There are two key findings. First, as predicted, infants' COG samples from the occluded-object task are learned by a pool of SRNs faster than the samples produced by the yoked artificial-observer models. Second, we also find that resetting activity in the recurrent layer increases the networks' prediction errors, which further implicates the presence of temporal structure in infants' COG sequences. We conclude by relating our findings to the roles of image saliency and prediction learning during the development of object perception.
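The training-and-reset procedure described above can be sketched in miniature. The snippet below is a minimal, hypothetical stand-in, not the study's implementation: it trains an Elman-style SRN (input, context/hidden, and output layers, with one-step truncated backpropagation) to predict the next 2-D COG sample from the current one, using a synthetic smooth trajectory in place of real infant eye-tracking data. It then evaluates prediction error twice, once with the context layer carried forward and once with it zeroed at every step, mirroring the recurrent-layer reset manipulation. All sizes, learning rates, and the trajectory itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D center-of-gaze (COG) trajectory standing in for infant gaze
# samples (the study used real eye-tracking data; this is a toy substitute).
T = 200
t = np.arange(T)
seq = np.stack([0.5 + 0.4 * np.sin(0.30 * t),
                0.5 + 0.4 * np.cos(0.17 * t)], axis=1)   # shape (T, 2)

# Elman-style SRN: 2 inputs -> H hidden (with recurrent context) -> 2 outputs.
H = 12
Wxh = rng.normal(0, 0.3, (2, H))   # input -> hidden weights
Whh = rng.normal(0, 0.3, (H, H))   # context -> hidden (recurrent) weights
Why = rng.normal(0, 0.3, (H, 2))   # hidden -> output weights
bh, by = np.zeros(H), np.zeros(2)
lr = 0.05

def run_epoch(train=True, reset_context=False):
    """One pass over the sequence; returns mean squared prediction error."""
    global Wxh, Whh, Why, bh, by
    h = np.zeros(H)
    total = 0.0
    for i in range(T - 1):
        if reset_context:
            h = np.zeros(H)                      # wipe temporal context
        x, target = seq[i], seq[i + 1]
        h_new = np.tanh(x @ Wxh + h @ Whh + bh)  # hidden activation
        y = h_new @ Why + by                     # predicted next COG
        err = y - target
        total += float(err @ err)
        if train:
            # One-step (truncated) backprop, as in classic SRN training:
            dh = (err @ Why.T) * (1 - h_new ** 2)
            Why -= lr * np.outer(h_new, err)
            by -= lr * err
            Wxh -= lr * np.outer(x, dh)
            Whh -= lr * np.outer(h, dh)
            bh -= lr * dh
        h = h_new                                # context for next step
    return total / (T - 1)

losses = [run_epoch(train=True) for _ in range(150)]
err_persistent = run_epoch(train=False)
err_reset = run_epoch(train=False, reset_context=True)
print(f"training MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
print(f"eval MSE: persistent context {err_persistent:.4f}, "
      f"context reset each step {err_reset:.4f}")
```

On a sequentially predictable trajectory, training error falls as the network learns the sequence, and zeroing the context layer at each step typically raises prediction error, since the network is denied the temporal information the reset manipulation is designed to probe.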
Recommended Citation
Schlesinger, Matthew, Johnson, Scott P., and Amso, Dima. "Prediction-learning in Infants as a Mechanism for Gaze Control during Object Exploration." (May 2014).
Comments
Published in Frontiers in Psychology, Vol. 5 (May 2014) at doi: 10.3389/fpsyg.2014.00441