Projects

All projects are highly collaborative; team members are listed in alphabetical order.

Babies and Machines and Visual Object Learning

Team: Sven Bambach, Jeremy Borjon, Elizabeth Clerkin, David Crandall, Hadar Karmazyn, Braden King, Lauren Slone, Linda B Smith, Umay Suanda, Chen Yu

Project 1: An Egocentric Perspective on Visual Object Learning in Toddlers; Exploring Inter-Observer Differences in First-Person Object Views Using Deep Learning Models
Led by: Sven Bambach and David Crandall
In this project we evaluate how the visual statistics of a toddler's first-person view can facilitate visual object learning. We use raw frames from head cameras worn by toddlers and their parents to train machine learning algorithms to recognize toy objects, and show that toddler views lead to better object learning under various training paradigms. This project led to a conference paper and talk at the IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EPIROB) 2017.
In a companion study, we explore the potential of using deep learning models as tools to study human vision with head-mounted cameras. We train artificial neural network models on data from individual subjects who all explored the same set of objects, and visualize the neural activations of the trained models, demonstrating that subjects who created more diverse object views yielded models that learned more robust object representations. Our paper on this project has been accepted to the Workshop on Mutual Benefits of Cognitive and Computer Vision, held at the IEEE International Conference on Computer Vision (ICCV) 2017.
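The training-and-comparison step in both studies follows a standard supervised fine-tuning recipe. The sketch below is a minimal illustration, not the pipeline used in the papers: it assumes head-camera frames have already been extracted and sorted into one folder per toy, and the directory names, backbone model, and hyperparameters are all hypothetical choices made for the example.

# Hypothetical sketch: fine-tune a pretrained CNN on head-camera frames from one
# viewer condition (e.g., toddler vs. parent views) and report held-out accuracy.
# Paths, model choice, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def train_and_evaluate(train_dir, test_dir, num_toys=24, epochs=5):
    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train_set = datasets.ImageFolder(train_dir, transform=tfm)   # one folder per toy
    test_set = datasets.ImageFolder(test_dir, transform=tfm)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=32)

    model = models.resnet18(pretrained=True)                     # generic pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, num_toys)         # new classifier head for the toy set
    optim = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optim.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optim.step()

    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in test_loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

# Compare conditions on the same held-out frames (directory names are hypothetical).
acc_toddler = train_and_evaluate("frames/toddler_view", "frames/held_out")
acc_parent = train_and_evaluate("frames/parent_view", "frames/held_out")
print(f"toddler-view model: {acc_toddler:.3f}, parent-view model: {acc_parent:.3f}")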



Project 2: What are the relevant data for toddlers' word learning?
Led by: Linda B Smith, Umay Suanda, Chen Yu
This project addresses a heavily debated issue in early language learning: do toddlers learn new words by exploiting a few highly informative moments of hearing an object's name (i.e., moments when the referent is transparent), or by aggregating across many, many moments, including less informative ones? We examined the auditory signal (moments when parents named an object) and the visual signal (the visual properties of the referent and other objects, recorded from head-mounted cameras) from observations of parent-toddler play with novel objects. We then analyzed these signals in relation to how well toddlers remembered which names went with which objects. Micro-level, event-by-event analyses and simulation studies revealed that toddlers' learning is best characterized by a process that aggregates information across many moments. Although this learning process may be slow and error-prone in the short run, it builds a more robust lexicon in the long run.
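A minimal way to see why aggregation can succeed despite individually ambiguous naming moments is a toy cross-situational co-occurrence model. The sketch below is not the simulation reported in the project; it is a small illustration in which a learner accumulates word-object co-occurrence counts over many noisy naming events and, given enough events, recovers the correct pairings even though no single event is unambiguous.

# Toy cross-situational learning simulation (illustrative, not the project's model).
# On each naming event the named object is in view along with random distractors,
# so any single event is ambiguous; aggregating counts across events disambiguates.
import random
from collections import defaultdict

random.seed(0)
objects = [f"toy_{i}" for i in range(8)]
words = {obj: f"word_{i}" for i, obj in enumerate(objects)}  # ground-truth pairings

counts = defaultdict(lambda: defaultdict(int))  # counts[word][object] = co-occurrences

for _ in range(200):                                      # many noisy naming moments
    target = random.choice(objects)
    in_view = {target} | set(random.sample(objects, 3))   # target plus distractors in view
    for obj in in_view:
        counts[words[target]][obj] += 1

# After aggregation, guess each word's referent as its most frequent co-occurring object.
correct = sum(
    max(counts[words[obj]], key=counts[words[obj]].get) == obj
    for obj in objects
)
print(f"{correct}/{len(objects)} word-object pairs recovered")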






Project 3: HOME (Home-like Observational Multisensory Environment)
Led by: Jeremy Borjon, Linda B Smith, Chen Yu
HOME refers to a room with a state-of-the-art home-like environment equipped with cutting-edge sensing and computing technology. This environment allows us to measure and quantify parent-infant interactions in real time across multiple modalities and is specifically designed to encourage spontaneous, natural behavior in a context consistent with the clutter and noise of a typical home. The room is equipped to wirelessly capture head-mounted eye tracking and autonomic physiology, and to track the motion of participants as they interact and behave in the room. This environment is the first of its kind to be used primarily for basic research on the fundamental principles of social interaction. Because HOME is meant to resemble an ordinary home environment, the findings will be readily and directly applicable to the real world.
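Analyzing interactions across these modalities requires aligning streams recorded at different sampling rates onto a common timeline. The sketch below is a generic illustration of that alignment step, using simulated timestamped tables; the sampling rates, column names, and data are assumptions for the example and do not describe HOME's actual software.

# Illustrative alignment of multimodal streams to a common timeline (simulated data).
import numpy as np
import pandas as pd

# Simulated streams with different sampling rates (seconds since session start).
eye = pd.DataFrame({"t": np.arange(0, 10, 1 / 30.0)})        # 30 Hz eye tracking
eye["gaze_x"] = np.random.rand(len(eye))

physio = pd.DataFrame({"t": np.arange(0, 10, 1 / 4.0)})      # 4 Hz autonomic physiology
physio["heart_rate"] = 120 + 5 * np.random.randn(len(physio))

motion = pd.DataFrame({"t": np.arange(0, 10, 1 / 60.0)})     # 60 Hz motion tracking
motion["head_speed"] = np.abs(np.random.randn(len(motion)))

# Resample everything onto the eye-tracking timeline using the nearest preceding sample.
aligned = pd.merge_asof(eye.sort_values("t"), physio.sort_values("t"), on="t")
aligned = pd.merge_asof(aligned, motion.sort_values("t"), on="t")
print(aligned.head())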








Project 4: Toddlers' Manual and Visual Exploration of Objects during Play
Led by: Lauren Slone, Linda B Smith, Chen Yu
This project aims to characterize the nature of toddlers' early visual experiences of objects and their relation to object manipulation and language abilities. Using head-mounted eye tracking, the study objectively measures individual differences in the moment-to-moment variability of visual instances of the same object in infants' first-person views. One finding from this research is that infants who generated more variable visual object images through manual object manipulation at 15 months of age had larger productive vocabularies six months later. This is the first evidence that image-level object variability matters and may be the link connecting infant object manipulation to language development.
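One simple way to quantify "image-level object variability" is to embed each first-person frame of a given object and measure the spread of those embeddings, then relate that spread to later vocabulary size across infants. The sketch below is a hypothetical illustration of such a measure on synthetic data; it is not the metric or the dataset used in the study.

# Hypothetical variability measure: mean distance of an object's frame embeddings
# from their centroid, correlated with later vocabulary size (synthetic data only).
import numpy as np
from scipy.stats import pearsonr

def view_variability(embeddings):
    """embeddings: (n_frames, n_features) array of views of one object by one infant."""
    centroid = embeddings.mean(axis=0)
    return np.linalg.norm(embeddings - centroid, axis=1).mean()

rng = np.random.default_rng(0)
n_infants = 20
variability = np.array([
    view_variability(rng.normal(scale=1 + i / n_infants, size=(100, 64)))
    for i in range(n_infants)
])
vocab_at_21_months = 50 + 30 * variability + rng.normal(scale=5, size=n_infants)  # fake outcome

r, p = pearsonr(variability, vocab_at_21_months)
print(f"r = {r:.2f}, p = {p:.3f}")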

View Papers








Multidimensional Encoding of Brain Connectomes

Team: Cesar Caiafa, Franco Pestilli, Andrew Saykin, Olaf Sporns

In this project we developed new computational methods for encoding brain data to support learning of brain network structure from neuroimaging data. We use multidimensional arrays to compress both brain data and computational models into compact representations that preserve the anatomical relationships in the data and the model. These multidimensional models are lightweight and allow efficient anatomical operations for studying the large-scale networks of the human brain. To date, this project has led to a conference paper and spotlight talk at NIPS (Neural Information Processing Systems; Caiafa, Saykin, Sporns, and Pestilli, NIPS 2017) and a Nature Scientific Reports article (Caiafa and Pestilli, 2017).
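The core idea is to factor the model into a sparse multidimensional core array and a small dictionary, so the full dense model matrix never has to be stored. The sketch below is a schematic, small-scale illustration of that kind of factorization with made-up dimensions and random values; it mirrors the general idea rather than the published code or the exact model.

# Schematic sketch of a factorized connectome encoding (made-up sizes and values).
# The diffusion signal is approximated as Y ~ D * (Phi contracted with fascicle weights),
# where Phi is a sparse core (atoms x voxels x fascicles) and D a dictionary of
# per-gradient signal atoms; the dense model matrix is never formed.
import numpy as np
from scipy.sparse import coo_matrix, random as sparse_random

n_gradients, n_atoms = 90, 360          # diffusion gradient directions, dictionary atoms
n_voxels, n_fascicles = 1000, 500       # white-matter voxels, candidate fascicles

D = np.random.randn(n_gradients, n_atoms)                  # dictionary of signal atoms
# Sparse core, unfolded so columns index (voxel, fascicle) pairs; nonzero only where
# a candidate fascicle actually passes through a voxel.
Phi = sparse_random(n_atoms, n_voxels * n_fascicles, density=1e-4, format="csr")
w = np.random.rand(n_fascicles)                            # fascicle weights

# Sparse operator that sums over fascicles within each voxel, weighted by w.
idx = np.arange(n_voxels * n_fascicles)
W = coo_matrix((np.tile(w, n_voxels), (idx, idx // n_fascicles)),
               shape=(n_voxels * n_fascicles, n_voxels))

Y_pred = D @ (Phi @ W).toarray()                           # predicted signal, gradients x voxels
print(Y_pred.shape)                                        # (90, 1000)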

View Papers

Visual Learning and Symbol Systems: Math and Letter Development

Team: Rob Goldstone, Karin James, Tyler Marghetis, Sophia Vinci-Booher

Project 1: Visual experience during reading and the acquisition of number concepts
Led by: Rob Goldstone, Tyler Marghetis

A fundamental component of numerical understanding is the 'mental number line,' in which numbers are conceived as locations on a spatial path. Around the world, the mental number line takes on different forms - for instance, sometimes going left-to-right, sometimes right-to-left - but the origins of this variability are not yet completely understood. Here, combining big data (a corpus of four million books) with a targeted dataset (a small corpus of children's literature), we are modeling early and lifelong visual experience with written numbers, to see whether low-level visual exposure to written numbers can account for the high-level structure and form of the mental number line.
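A concrete version of this modeling step is simply counting how often each written number appears in a corpus and examining how frequency falls off with magnitude. The sketch below uses a made-up stand-in text rather than the actual book corpora and fits the kind of power-law exposure curve such counts typically follow; the corpus, numbers, and fit are purely illustrative.

# Illustrative exposure model: count written numbers in a text and fit how
# frequency declines with magnitude (the text and counts are hypothetical).
import re
from collections import Counter

import numpy as np

def number_frequencies(text, max_n=100):
    counts = Counter(int(tok) for tok in re.findall(r"\b\d+\b", text))
    return {n: counts.get(n, 0) for n in range(1, max_n + 1)}

# Stand-in for a real corpus of children's books.
corpus = "She had 2 cats and 3 dogs. 2 of the dogs were asleep by 7 o'clock. " * 50

freqs = number_frequencies(corpus, max_n=10)
ns = np.array([n for n, c in freqs.items() if c > 0], dtype=float)
cs = np.array([freqs[int(n)] for n in ns], dtype=float)

# Fit log(frequency) ~ a + b*log(n): a power-law decline in exposure with magnitude.
b, a = np.polyfit(np.log(ns), np.log(cs), 1)
print(f"estimated exponent: {b:.2f}")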

Project 2: Learning algebra and self-generated perceptual variability
Led by: Rob Goldstone, Tyler Marghetis

In a series of lab experiments, we are exposing adults to a new computer-based algebra system. As they explore this new system, they self-generate a stream of sensorimotor information about algebraic notations. Results indicate that the best learners are those who generate highly variable streams of perceptual information, suggesting that self-generated perceptual variability might be a critical component in learning higher-level mathematical skills.
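One simple way to operationalize the variability of a self-generated perceptual stream is the entropy of the distribution of transformations a learner tries. The sketch below computes that measure on hypothetical interaction logs; the action names and the measure itself are illustrative assumptions, not the analysis used in the experiments.

# Illustrative variability measure: Shannon entropy of the transformations a learner
# applies while exploring the algebra system (logs and action names are hypothetical).
import math
from collections import Counter

def action_entropy(actions):
    counts = Counter(actions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

low_variability_log = ["commute", "commute", "commute", "commute", "distribute"]
high_variability_log = ["commute", "distribute", "factor", "cancel", "substitute"]

print(action_entropy(low_variability_log))   # close to 0 bits: little variability
print(action_entropy(high_variability_log))  # close to log2(5) bits: high variability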





Project 3: Brain systems supporting letter writing and letter perception
Led by: Karin James, Sophia Vinci-Booher

Handwriting is a complex visual-motor behavior that leads to changes in visual perception and in the brain systems that support visual perception. Understanding the mechanisms through which handwriting contributes to these developmental changes is the focus of the project. The project integrates functional and diffusion MR imaging to understand the relationship between visual-motor behaviors, such as handwriting, and developmental changes in brain function, brain structure, and perception.

View Papers