Indiana University Research & Creative Activity

Mind/Brain

Volume 30 Number 2
Spring 2008




Chen Yu and Linda Smith
Photo © Tyagan Miller

Learning What Dog (and everything else) Means

by Tracy James

When is a dog a "dog" to a 1-year-old?

Is it when a boy's parent deliberately pats the pooch on the head and says "doggie," maybe twice for good measure? Or is it when the tot, so recently just a babe, realizes the smelly, lively companion is not a cat or a chew toy or a ball, airplane, tree, or his sister--all things he has seen in the past while hearing the word "dog"?

It's amazing to consider how young children learn language. Starting from scratch, and with limited cognitive, perceptual, and social capabilities, young children across different cultures learn their native vocabulary effortlessly.

How do they learn language so easily? This is an especially vexing question for scientists. Not even the most powerful computers in the world, with the most sophisticated software, can come close to learning language merely from hearing it used, as young children do. So the question of when (and how) toddlers figure out what "dog" means is a complicated and important one.

Pint-size Miners

Linda Smith and Chen Yu are exploring answers to that question. Their theory--that the very young are adept at learning groups of words all at once--is groundbreaking. It presents an alternative to the conventional one-word-at-a-time model of learning that scientists have held for decades. It also brings together human learning and machine learning in ways that advance both fields.

Smith is Chancellor's Professor and chair of the Department of Psychological and Brain Sciences at Indiana University Bloomington. To her and her colleague Yu, assistant professor in the Department of Psychological and Brain Sciences, more is better when it comes to how little ones learn language. Taking advantage of the rich and complex data surrounding most of us every day, the human brain may learn, not by mastering one bit at a time, but by data mining across all of its experiences of words and objects.

Data mining, usually computer assisted, involves sorting and analyzing massive amounts of raw data to find relationships, correlations, and ultimately, useful information. Data mining techniques are most often used in business or financial contexts, and more recently, in a wide range of research fields such as biology and chemistry. Smith and Yu are investigating whether the human brain does its own data mining--automatically accumulating large amounts of data minute by minute, hour by hour, day by day. They want to know if this phenomenon contributes to a "system" approach to language learning that helps explain the ease with which 2- and 3-year-olds can learn words.

Toddlers are surrounded by words. Parents, for example, direct 300 to 400 utterances to their children--per hour. Add to this chatty siblings, TV, talking toys, and the busy environments of many children, toted by fearless parents to sporting events, parks, grocery stores, middle school musicals, and beyond. Yu and Smith argue that the more words tots hear, and the better the data for any individual word, the better the children's brains can begin to simultaneously put together and rule out word-object pairings, and learn what's what. They don't just learn some words one by one--they learn lots of words concurrently in short periods of time.

Smith has been focusing on language acquisition since the early 1980s. She says the old theory isn't wrong. The "most amazing thing" she can show someone about child word-learning is the same thing she would have shown 30 years ago, she says. A 2-and-a-half-year-old sees a tractor for the first time and is told, "Look, it's a tractor." She hears the word maybe three times as the tractor plows in a field.

"From that moment on, the child would use the word ‘tractor' right in just about every situation, whether it's a big tractor, little tractor, whether its yellow or green," says Smith. "This phenomenon is remarkable, how children manage to learn a word from one single experience. You name one object, and they seem to know it."

Children begin learning language around the age of 9 months, Smith says, but they do not show the remarkable "one-experience" word-learning until they are at least 2 years old. Smith and Yu suspect that 12- to 18-month-olds could be laying the foundation for their later language-learning prowess by data mining and tapping cross-situational statistics--sophisticated techniques, considering the kids can barely talk. Nonetheless, this theory could explain why most kids are such quick word learners by the age of 3.

"The question Chen and I ask ourselves," says Smith, "is: Are kids exploiting all this data, could they be simultaneously mining across all of this data? The idea is that there's all this structure and data--words and images--and that if children are learning structured data, many of the things we thought were hard about word-learning may prove to be easy. It's very exciting."

Fast-track Learners

Smith and Yu have published studies demonstrating how adults and 12- to 14-month-olds can learn groups of words in a matter of minutes. A $1 million National Institutes of Health grant will fund this research for another five years.

The researchers' recent studies took place in Yu's Computational Cognition and Learning Lab on the first floor of the psychology building at IU Bloomington. In a tiny closet-sized room, study participants sat in front of an eye tracker--a computer calibrated to trace where a person's eyes look on a monitor when images are displayed. Youngsters sat on their mothers' laps. Using another computer just outside of the room, researchers watched exactly where the young participants looked. The data was captured and analyzed by computer programs written by Yu, who earned a doctorate in computer science from the University of Rochester in 2004.

"It's fascinating. All this technology allows you to ask new questions, things you might have thought to ask 30 years ago but everyone was clueless about how to approach it," says Smith.

In one of the studies, published in the journal Cognition, Yu and Smith attempted to teach six words to 28 infants, ages 12 to 14 months, by showing them two objects at a time on a computer monitor while two pre-recorded words were played. No information was given about which word went with which image. After viewing various combinations of words and images, however, the children were surprisingly successful at figuring out which word went with which picture.

In the adult version of the study, adults were taught 18 words in just six minutes. Instead of viewing two images at a time, they were simultaneously shown three or four, while hearing the same number of words. The adults, like the children, learned significantly more than would be expected by chance. Many of the adult subjects indicated they were certain they had learned nothing and were "amazed" by their success. Yu and Smith wrote in the journal Psychological Science, "This suggests that cross-situational learning may go forward non-strategically and automatically, steadily building a reliable lexicon."
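
The bookkeeping behind this kind of cross-situational learning can be sketched in a few lines of code. The sketch below is illustrative only: the nonsense words, objects, trial counts, and the pick-the-most-frequent-partner rule are assumptions for demonstration, not the study's actual stimuli or model.

    import random
    from collections import defaultdict

    # Six hypothetical word-object pairs standing in for the study's
    # stimuli (the real experiment used its own labels and objects).
    PAIRS = {"bosa": "ball", "gasser": "shoe", "manu": "cup",
             "colat": "hat", "kaki": "drum", "regli": "sock"}

    def make_trials(n_trials=30, per_trial=2, seed=1):
        """Each trial shows per_trial objects and plays their words,
        with no hint about which word labels which object."""
        rng = random.Random(seed)
        trials = []
        for _ in range(n_trials):
            words = rng.sample(list(PAIRS), per_trial)
            objects = [PAIRS[w] for w in words]
            rng.shuffle(objects)  # presentation order carries no clue
            trials.append((words, objects))
        return trials

    def learn(trials):
        """Tally word-object co-occurrences across all trials, then map
        each word to the object it appeared alongside most often."""
        counts = defaultdict(int)
        for words, objects in trials:
            for w in words:
                for o in objects:
                    counts[(w, o)] += 1
        return {w: max(PAIRS.values(), key=lambda o: counts[(w, o)])
                for w in PAIRS}

    lexicon = learn(make_trials())
    print(sum(lexicon[w] == PAIRS[w] for w in PAIRS),
          "of", len(PAIRS), "pairings correct")

Across enough trials, each word co-occurs with its own object every time it is heard, but with any other object only occasionally, so the correct pairings come to dominate the tallies--the same statistical signal the study's participants appear to exploit.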

Yu and Smith call this kind of automatic learning "statistical learning." The term refers to the mind's ability to analyze complex information from the environment and extract patterns and regularities without our even being aware of it. To explain this statistical approach, Smith draws parallels with machine learning, using the example of computer translation programs. If a machine is used to translate Russian into English, for example, the computer cannot be taught word-for-word translations because the languages are so different. Instead, many known and complicated translations are fed into the program--say, the texts of the U.S. Constitution and the novel War and Peace, in both English and Russian. The program looks for the underlying connections between the texts' words. The more words, the more opportunities for connections.
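
The same tallying idea scales up to parallel texts. The toy sketch below shows that connection-finding step; the three romanized sentence pairs are invented for illustration, and real translation systems train on millions of aligned sentences with far more sophisticated alignment models.

    from collections import Counter

    # Invented toy parallel corpus; not drawn from the actual texts
    # mentioned in the article.
    parallel = [
        ("the dog runs", "sobaka bezhit"),
        ("the dog sleeps", "sobaka spit"),
        ("the cat sleeps", "koshka spit"),
    ]

    # Count how often each English word shares a sentence pair with
    # each Russian word; frequent partners are candidate translations.
    cooc = Counter()
    for en, ru in parallel:
        for e in en.split():
            for r in ru.split():
                cooc[(e, r)] += 1

    for e in {w for en, _ in parallel for w in en.split()}:
        best = max((r for (w, r) in cooc if w == e),
                   key=lambda r: cooc[(e, r)])
        print(e, "->", best)

With only three sentence pairs, common words such as "the" remain ambiguous--which is exactly Smith's point: more words mean more connections and cleaner estimates.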

With word learning, young children are making connections between all the words they hear and all the objects they see. So, instead of figuring out how youngsters learn "dog," for example, scientists need to figure out how they make sense of all of the words. "We're testing whether this kind of statistical learning works," Yu says.

Likening kids' brains to computers is a rough comparison, however, when it comes to acquiring basic cognitive skills, such as recognizing everyday objects or speaking a native language.

A baby's brain is a much more powerful computational device than a machine--it's well known, for example, that children learn their native language and other languages much more quickly than adults do. Yu and Smith's studies using the eye tracker provide large amounts of data to help researchers look for clues to how youngsters and adults learn differently.

Machine Brains

Yu and Smith are also studying what factors may affect the children's learning, such as what they touch or the guidance they receive from their parents. In another part of the lab, they use a one-of-a-kind system to try to recreate a child's natural environment, but in a sterile enough situation for a computer to make sense of it.

A parent and child sit at a table performing a simple language-learning task involving toys. Both child and parent wear headbands upon which video cameras are mounted, so researchers can record what each participant sees. A third camera mounted above the table captures a bird's-eye view of the activities. Outside the white, curtained-off activity area, computer screens display video from the three cameras, showing dramatically different views. In the past, Smith says, she would have watched a single video of the parent-child interaction and made educated guesses--filtered by her "adult assumptions"--about what the child focused on. The unique system Yu and Smith have devised leaves little to guesswork. Computer programs--written this time by Yu and graduate student Alfredo Pereira--capture, code, and analyze who is looking at and touching what. Just six minutes of activity involving 20 children can generate almost 300,000 frames for coding and analysis.
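
As a rough illustration of what such frame-by-frame coding might produce, consider the sketch below. The object names, data layout, and per-frame codes are hypothetical; the lab's actual software is not described in detail here.

    from collections import Counter

    # Hypothetical per-frame codes from one child's head camera: which
    # object (if any) dominates the view, and which the hands touch.
    frames = [
        {"in_view": "truck", "touching": "truck"},
        {"in_view": "truck", "touching": "truck"},
        {"in_view": "cup", "touching": None},
        {"in_view": "truck", "touching": "truck"},
        {"in_view": None, "touching": None},
    ]

    in_view = Counter(f["in_view"] for f in frames if f["in_view"])
    touching = Counter(f["touching"] for f in frames if f["touching"])

    for obj in in_view:
        # Fraction of frames in which the object fills the view,
        # versus the fraction in which the child is handling it.
        print(obj, in_view[obj] / len(frames), touching[obj] / len(frames))

Aggregated over hundreds of thousands of real frames, tallies like these show where a child's attention actually goes, free of an observer's guesswork.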

"The computer doing this makes it completely clean and unbiased," Smith says, "and the results are surprising, not always what you'd think."

It's all about the hands for the youngsters, Yu says. They tend to have only one object in view at a time, moving their bodies and hands so the toy takes up most of their field of view. If mom is holding two things and asking the child to look at one, the child might be looking at something completely different.

Smith and Yu's findings also contribute to research involving artificial intelligence and machine learning. In fact, one of Yu's passions is to someday develop anthropomorphic machines that learn and use information in human-like ways.

Smith and Yu say advances in human learning and artificial intelligence are interrelated. Advances in machine learning provide useful computational tools to help scientists understand underlying learning mechanisms that children possess. New discoveries about how children learn inform scientists who are building intelligent machines, such as human-like robots. Yu says it is rare for the fields of human development and machine learning to talk back and forth, but at IU Bloomington, such interdisciplinary work is a fundamental component of the Cognitive Science Program and of research done by a sizable portion of the psychology faculty.

Yu finds the interface between human and machine learning exciting. If he and others can identify the key factors of statistical learning and how it can be manipulated, they might be able to make learning languages easier for children and adults, through training DVDs and other means. The learning mechanisms children use to learn words could, in turn, be used to further machine learning.

Smith says the fusion of human development, informatics, and machine learning is inevitable, considering how rapidly the world is changing and how technological advancements are influencing research.

"Chen is dragging cognitive development into the 21st century. He's taking us into the future," she says, as they sit together in Yu's lab. "My developmental psychology colleagues have a lot of expertise in children and learning. We're giving a lot back, but he's taking us to the future. We are definitely not dragging our feet. These are exciting times."

Tracy James is media relations specialist in the IU Office of University Communications and a Bloomington-based freelance writer.