Indiana University Research & Creative Activity


Volume 30 Number 2
Spring 2008

Larry Yaeger
Photo courtesy Larry Yaeger

John Beggs
Photo © Tyagan Miller

What's a Mind Made Of?

by David Bricker

What used to be fodder for philosophers is now squarely within the sights of neuroscientists who believe they are on the verge of answering the brain/mind question.

It has been nearly 250 years since Luigi Galvani, Italian naturalist and physician, first noticed frog legs twitch when subjected to electricity. Galvani subsequently learned that the frog body was lined with electrical circuits controlling the movement of muscles. Galvani could not have known at the time that his own mind was the product of such electrical stimulations. That connection between brain and mind would not be made for another 100 years--by phrenologists, of all people. Galvani is nevertheless credited as having first recognized that nerve cells are unlike any other cell in the body--wires made not of metal but of organic matter, a chimera of life and unlife.

Until recently, it was expected that meticulous study of nerve cells would lead to a conquering of the brain and, shortly thereafter, a conquering of the mind and its most salient property, consciousness. "Of course, the mind is a product of the brain!" Nobel Prize-winner Eric Kandel recently said in Scientific American. "How could it not be?"

Easy for Kandel to say. He is most famous for his work with the not particularly bright sea slug Aplysia. Inasmuch as sea slugs have minds, the conceptual divide between sea slug brains and minds must be awfully small. For Kandel, a sea slug mind could be described as a summary of its memories--whether to turn left or right when presented with a chemical stimulus, for example.

Kandel and other neuroscientists have done a stupendous job of figuring out how nerve cells work. We know how nerve cells exhale and inhale ions to generate waves of electrical current. We know how cells modulate the genes that produce neurotransmitters. We know what most of those neurotransmitters do. We know how brain cells forge new connections with one another. We know that unused connections wither and die.

It is time, many neuroscientists think, to put that knowledge to use in addressing one of the most fundamental issues ever confronted by science: What allows us to transcend the mere circuitry of our brains?

What We Mean by Mind

"We are definitely on the verge of moving most parts of this question from the realm of philosophy to the realm of science. Extending beyond ‘mere' intelligence to the realm of cognition, introspection, and self-awareness is likely to take a bit longer, though," says Larry Yaeger, a professor of informatics at Indiana University Bloomington.

Yaeger's career projects demonstrate his unflinching interest in cognition, both natural and artificial. He helped design the second-generation handwriting recognition software ("the one that worked," he says) for the world's first commercial personal digital assistant--the Apple Newton. He wrote the software that allowed Koko, a gorilla who had been taught American Sign Language, to speak to researchers with a computer-generated voice. He even made a cameo appearance in the movie Terminator 2--James Cameron's way of thanking Yaeger for his contributions to perfecting the movie's portrayal of artificial intelligence. (Not only did he get a little screen time, but the T2 character Miles Dyson was modeled after Yaeger.)

What we mean by "mind," Yaeger explains, is subjective. In the vernacular, mind has a dozen different meanings. Among neuroscientists and cognitive scientists, however, the word tends to take on one of two meanings. For a few, mind is simply memory, a specific brain function both humans and sea slugs can accomplish (the sea slug actually having an advantage because it is not as likely to misplace its car keys). For most specialists, however, mind is something broader: it is synonymous with consciousness (or awareness), the emergent property of brain function that distinguishes higher animals, but it also encompasses memory and all the processes that string together stimulus, synthesis, and response.

"If mind is an emergent property--and most of us agree that's exactly what it is--then it's a matter of figuring out what processes are responsible for it," says John Beggs, a neurophysicist in the Department of Physics at IU Bloomington. "We now know a lot about why the brain acts so mysteriously."

Beggs studies "brains in a dish," which isn't quite as gruesome as it sounds. After he seeds specially prepared Petri dishes with nerve cells, Beggs watches the cells self-organize into living neural networks. He then records the activity of these networks with hundreds of microelectrodes to see how the cells "talk" to each other. Sometimes he observes a cascade or "avalanche" of activity that is just strong enough to keep going, without being so strong that it spreads through the entire network. This suggests that the network is balanced at a "critical point" where it is able to effectively send signals back and forth without exploding into a seizure.

"We are learning some pretty interesting things," says Beggs, who is also a member of IU's Biocomplexity Institute, a research center one might say is dedicated to studying emergent properties. "For one, it appears the brain is built to be probabilistic, not deterministic like computers. What this means is, when you put in the same input in two different instances, you might get two different outcomes, and you won't really be able to predict which one you'll get."

That might seem counterproductive for any sort of machine, mechanical or biological. What if your alarm clock worked that way? Or your car brakes?

But Beggs says this uncertainty "can actually be a good thing. The unpredictability plays a role in generating variability. Just like in evolution, where new organisms are produced through variation, new behaviors can sometimes be generated by neuronal unpredictability."
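The probabilistic behavior Beggs describes can be sketched with a toy stochastic unit. This is an illustration of the concept only, not a model from his lab; the logistic firing rule and the name `stochastic_neuron` are assumptions made for the example.

```python
import math
import random

def stochastic_neuron(drive, rng):
    """A toy probabilistic unit: it fires with a probability that grows
    with the input drive, so the very same input can yield different
    outputs on different trials -- unlike a deterministic circuit."""
    p = 1.0 / (1.0 + math.exp(-drive))   # logistic firing probability
    return rng.random() < p

rng = random.Random(0)
# Present the identical input ten times; the responses vary trial to trial.
responses = [stochastic_neuron(0.0, rng) for _ in range(10)]
```

Run repeatedly with the same input and the unit sometimes fires, sometimes doesn't: the unpredictability that, in Beggs's telling, supplies the raw variability new behaviors are built from.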

Another thing neuroscientists have learned, Beggs says, is that the amount of randomness seems controlled.

"There seems to be this critical point in probabilistic brain function," he says. "If the synapse activity is too ordered, the network can't learn new things. If it's too random, the network can't hold on to the things that it has learned."
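The critical point Beggs describes can be illustrated with a toy branching process, a standard textbook model of activity cascades; this is a sketch of the idea, not his experimental analysis, and the parameter names are made up for the example. Each active unit triggers on average sigma successors: below 1, avalanches fizzle out; above 1, they run away, seizure-like; at exactly 1, the network sits at the critical balance.

```python
import random

def avalanche_size(sigma, rng, max_size=2000):
    """Total activity in one avalanche of a toy branching process.
    Each active unit triggers two potential successors, each with
    probability sigma/2, so the mean branching ratio is sigma.
    Growth is capped at max_size to keep supercritical runs finite."""
    active, size = 1, 1
    while active and size < max_size:
        nxt = sum((rng.random() < sigma / 2) + (rng.random() < sigma / 2)
                  for _ in range(active))
        active = nxt
        size += nxt
    return size

def mean_size(sigma, trials=500, seed=7):
    """Average avalanche size over many trials at a given branching ratio."""
    rng = random.Random(seed)
    return sum(avalanche_size(sigma, rng) for _ in range(trials)) / trials
```

Sweeping sigma shows the three regimes: subcritical networks produce tiny avalanches (too ordered to spread a signal far), supercritical ones mostly hit the cap (the "seizure" regime), and near sigma = 1 avalanches of all sizes appear.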

As if defining what we mean by mind wasn't hard enough, we are faced with a daunting paradox. How does something we think of as incredibly ordered and meaningful--consciousness--arise from a neurological Pachinko machine?

"Even though individual neurons seem to be unreliable, when you put millions of them together, their average responses can be remarkably precise," Beggs says. "It's like a contest at a fair, where everyone tries to guess the weight of a cow. Although no single person manages to correctly guess the weight, the average of all the people's guesses is exactly correct! So, the brain can have it both ways--a little randomness for variability and learning, and amazing precision when crowds of neurons work together. We clearly do not work like digital, binary computers."
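The cow-guessing analogy is the law of large numbers at work: averaging many unreliable responses cancels the noise. A quick sketch, with a made-up noise level and Gaussian noise assumed purely for illustration:

```python
import random

def noisy_neuron(signal, rng, noise=1.0):
    """One unreliable unit: the true signal plus zero-mean Gaussian noise."""
    return signal + rng.gauss(0.0, noise)

def population_estimate(signal, n, seed=0):
    """Average the responses of n independent noisy units. The noise
    tends to cancel, so the population estimate is far more precise
    than any single unit's response."""
    rng = random.Random(seed)
    return sum(noisy_neuron(signal, rng) for _ in range(n)) / n
```

A single unit's guess is typically off by about the full noise level, while the average over thousands of units lands within a hair of the true signal: a little randomness in each neuron, remarkable precision in the crowd.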

In a backwards sort of way, it seems Beggs and other scientists have arrived at the answer to a question they didn't quite ask--not, "What is it that makes us human?" but rather, "Why don't humans act like robots?"

Whole Brains from Scratch

One way to test our understanding of brain function and its many emergent properties is to convert our best theories into models, combine the models, then use computer simulations to see what we get. That's the work Larry Yaeger does.

Both Beggs and Yaeger approach many of the same questions, but in disparate ways. Whereas Beggs' work builds neuron clusters from scratch, Yaeger builds whole brains from scratch. The brain cells Beggs studies are made of organic matter, while the brain cells that computer engineer Yaeger studies are made from 1s and 0s.

Yaeger writes software simulations that model the behavior and evolution of brains. The brains aren't entirely discombobulated. Each one is tied to a polygonal creature--what Yaeger calls an agent--that is dependent on its (digital) environment and can even interact with other agents. Call it artificial intelligence or artificial life. Yaeger's primary simulation, Polyworld, is really both.

Based on a growing body of literature that demonstrates how evolution has set the ground rules for brain development, Yaeger presupposes that natural selection not only brings complexity to brains but optimizes them for certain environmental conditions.

"Let's assume recent models of neurons themselves are approximately correct," Yaeger says. "How do you wire up those neurons to produce intelligent behaviors? Evolution in an ecological context is what makes connections meaningful, so development of the correct wiring for natural intelligence must have relied on evolution, nature's most powerful search engine. It only makes sense, then, to bring natural selection to bear on wiring up simulated neurons to produce artificial intelligence."

Random natural processes like mutation and crossover create variety in species--in brain cell number, in the amount of branching in brain architecture, in the amount of randomness in brain signal, etc. But then natural selection exerts its effect, culling whatever doesn't work.

It is this continual evolutionary process of generating genetic variation and then removing it that can lead to increased complexity in brain size and architecture, Yaeger says.
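This generate-variation-then-cull loop is the skeleton of any evolutionary algorithm. The sketch below is a generic genetic algorithm on bit strings, not Polyworld's actual machinery; the function names, population sizes, and the toy "one-max" fitness task are all assumptions made for the example.

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=60, seed=1):
    """Minimal evolutionary loop: generate variation through crossover
    and mutation, then let selection cull whatever doesn't work."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]              # culling step
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)        # crossover point
            child = a[:cut] + b[cut:]
            child[rng.randrange(genome_len)] ^= 1     # one-bit mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy "environment": fitness is simply the number of 1s in the genome
# (the classic one-max problem), so selection should discover all-ones.
best = evolve(fitness=sum)
```

No genome in the starting population is anywhere near optimal, yet the loop of random variation plus culling reliably finds near-perfect solutions--the same search process that, at vastly larger scale, Yaeger credits with wiring up natural intelligence.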

But complexity is just the beginning of Yaeger's simulations. As the simulations run longer and the agents become more complex, he has grown accustomed to seeing new and unexpected behaviors from his agents.

"Every person who's ever written an ALife program has a story to tell about how they ran a simulation and saw something they just didn't expect," Yaeger says. "For example, evolution led my agents to develop niches I didn't realize existed, or to take on behaviors I didn't know they could perform. Keep in mind, I wrote this program."

Unanticipated results, Yaeger says, ultimately trace back to variations in the behavior of a single agent interacting with its environment or other agents. As agents take on new behaviors, the number of possible new interactions grows exponentially.

So perhaps it is not complexity, per se, that gives rise to the mind, but the interaction of new functions taken on by a burgeoning neurological architecture.

"If I can observe recognizable learning behaviors in these simulated beings, what we learn could very well be translatable to real organisms," Yaeger says. "And in the meantime we can learn things about a sort of general situated intelligence along the way. To paraphrase Langton, our study of ‘mind as it might be' will inform our study of ‘mind as it is.'"

So far, Yaeger's longest-running simulation lasted about two weeks, or 30,000 time steps. Agents live about 275 time steps on average and mature and reproduce after 25 time steps, so two weeks of simulation works out to about 1,200 generations.

"Another thing the simulation taught me is how sensory inputs drive behavior, producing very complex behaviors from fairly simple architectures," Yaeger says. "I definitely said, ‘Wow, I can do something seemingly very complex with a few dozen neurons.'"

With a faithful and functional model of brain activity in hand, Yaeger is keen to see a more complex version produced through evolutionary and developmental simulations. But would such a brain be human?

"This is a difficult question to answer, and we need to be very clear about exactly what elements of a ‘human mind' we are attempting to simulate in the machine," Yaeger says. "On the one hand, I think we are close to having enough computer power to produce fairly sophisticated levels of intelligence, modeled after human and other animal minds, in machines."

"On the other hand," he continues, "such models will not be truly, fully human. I do not think we are close to having the capacity to model a specific or even fully human mind in the computer. Humans are subject to many specific influences that an otherwise viable model of intelligence might necessarily, or willfully, leave out."

Even if such a brain is not intelligent in exactly the way humans are, Yaeger believes that constructing a functioning computerized brain will be an important step forward, both for A.I. and for our understanding of what intelligence and awareness require.

"I believe we are gaining enough of an understanding about how networks of neuron-like units process information to conceivably produce intelligent behaviors and reasoning processes in machines," Yaeger says. "We're not there yet, but I don't see any insurmountable roadblocks."

Yaeger and Beggs may take different approaches to understanding the mind, but each feels his work is informed by the other's. Both agree that modeling the particulars of brain function and building an overarching model of complex cognition are equally necessary as scientists parse the human mind.

We may not like what scientists find. We may well be confounded by it. But one thing seems certain: How we think of ourselves will be forever changed.

David Bricker is a media specialist for Indiana University and a freelance writer in Bloomington.