Indiana University Research & Creative Activity


Volume 30 Number 2
Spring 2008


The Elusive Apple of My 'I'

by Douglas Hofstadter

Douglas Hofstadter is Distinguished Professor of Cognitive Science and Computer Science at Indiana University Bloomington. A mathematician and physicist, Hofstadter won the Pulitzer Prize and American Book Award in 1980 for his first book, Gödel, Escher, Bach: an Eternal Golden Braid. For decades, Hofstadter has been one of the world's most creative and innovative pioneers in the exploration of consciousness. The following excerpts are taken from Chapter 13 in Hofstadter's ninth and newest book, I Am a Strange Loop. (Copyright © 2007. Reprinted by arrangement with Basic Books, a member of the Perseus Books Group. All rights reserved.)

The Patterns that Constitute Experience

By our deepest nature, we humans float in a world of familiar and comfortable but quite impossible-to-define abstract patterns, such as: "fast food" and "clamato juice", "tackiness" and "wackiness", "Christmas bonuses" and "customer service departments", "wild goose chases" and "loose cannons", "crackpots" and "feet of clay", "slam dunks" and "bottom lines", "lip service" and "elbow grease", "dirty tricks" and "doggie bags", "solo recitals" and "sleazeballs", "sour grapes" and "soap operas", "feedback" and "fair play", "goals" and "lies", "dreads" and "dreams", "she" and "he"--and last but not least, "you" and "I".

Although I've put each of the above items in quotation marks, I am not talking about the written words. Nor am I talking about the observable phenomena in the world that these expressions "point to". I am talking about the concepts in my mind and your mind that these terms designate, or, to revert to an earlier term, about the corresponding symbols in our respective brains.

With my hopefully amusing little list (which I pared down from a much longer one), I am trying to get across the flavor of most adults' daily mental reality--the bread-and-butter sorts of symbols that are likely to be awakened from dormancy in one's brain as one goes about one's routines, talking with friends and colleagues, sitting at a traffic light, listening to radio programs, flipping through magazines in a dentist's waiting room, and so on. My list is a random walk through an everyday kind of mental space, drawn up in order to give a feel for the phenomena in which we place the most stock and in which we most profoundly believe (sour grapes and wild goose chases being quite real to most of us), as opposed to the forbidding and inaccessible level of quarks and gluons, or the only slightly more accessible level of genes and ribosomes and transfer RNA--levels of "reality" to which we may pay lip service but which very few of us ever think about or talk about.

And yet, for all its supposed reality, my list is pervaded by vague, blurry, unbelievably elusive abstractions. Can you imagine trying to define any of its items precisely? What on earth is the quality known as "tackiness"? Can you teach it to your kids? And please give me a pattern-recognition algorithm that will infallibly detect sleazeballs!

. . .

A Pearl Necklace I Am Not

To begin with, for each of us, the strange loop of our unique "I"-ness resides inside our own brain. There is thus one such loop lurking inside the cranium of each normal human being.

Actually, I take that back, since, in Chapter 15, I will raise this number rather drastically. Nonetheless, saying that there is just one is a good approximation to start with.

When I refer to "a strange loop inside a brain", do I have in mind a physical structure--some kind of palpable closed curve, perhaps a circuit made out of many neurons strung end-to-end? Could this neural loop be neatly excised in a brain operation and laid out on a table, like a delicate pearl necklace, for all to see? And would the person whose brain had thus been "delooped" thereby become an unconscious zombie?

Needless to say, that's hardly what I have in mind. The strange loop making up an "I" is no more a pinpointable, extractable physical object than an audio feedback loop is a tangible object possessing a mass and a diameter. Such a loop may exist "inside" an auditorium, but the fact that it is physically localized doesn't mean that one can pick it up and heft it, let alone measure such things as its temperature and thickness! An "I" loop, like an audio feedback loop, is an abstraction--but an abstraction that seems immensely real, almost physically palpable, to beings like us.

. . .

The Supposed Selves of Robot Vehicles

I was most impressed when I read about "Stanley", a robot vehicle developed at the Stanford Artificial Intelligence Laboratory that not too long ago drove all by itself across the Nevada desert, relying just on its laser rangefinders, its television camera, and GPS navigation. I could not help asking myself, "How much of an 'I' does Stanley have?"

In an interview shortly after the triumphant desert crossing, one gung-ho industrialist, the director of research and development at Intel (you should keep in mind that Intel manufactured the computer hardware on board Stanley), bluntly proclaimed: "Deep Blue [IBM's chess machine that defeated world champion Garry Kasparov in 1997] was just processing power. It didn't think. Stanley thinks."

Well, with all due respect for the remarkable collective accomplishment that Stanley represents, I can only comment that this remark constitutes shameless, unadulterated, and naïve hype. I see things very differently. If and when Stanley ever acquires the ability to form limitlessly snowballing categories such as those in the list that opened this chapter, then I'll be happy to say that Stanley thinks. At present, though, its ability to cross a desert without self-destructing strikes me as comparable to an ant's following a dense pheromone trail across a vacant lot without perishing. Such autonomy on the part of a robot vehicle is hardly to be sneezed at, but it's a far cry from thinking and a far cry from having an "I".

At one point, Stanley's video camera picked up another robot vehicle ahead of it (this was H1, a rival vehicle from Carnegie-Mellon University) and eventually Stanley pulled around H1 and left it in its dust. (By the way, I am carefully avoiding the pronoun "he" in this text, although it was par for the course in journalistic references to Stanley, and perhaps at the AI Lab as well, given that the vehicle had been given a human name. Unfortunately, such linguistic sloppiness serves as the opening slide down a slippery slope, soon winding up in full anthropomorphism.) One can see this event taking place on the videotape made by that camera, and it is the climax of the whole story. At this crucial moment, did Stanley recognize the other vehicle as being "like me"? Did Stanley think, as it gaily whipped by H1, "There but for the grace of God go I!" or perhaps "Aha, gotcha!"? Come to think of it, why did I write that Stanley "gaily whipped by" H1?

What would it take for a robot vehicle to think such thoughts or have such feelings? Would it suffice for Stanley's rigidly mounted TV camera to be able to turn around on itself and for Stanley thereby to acquire visual imagery of itself? Of course not. That may be one indispensable move in the long process of acquiring an "I", but as we know in the case of chickens and cockroaches, perception of a body part does not a self make.

. . .

A Counterfactual Stanley

What is lacking in Stanley that would endow it with an "I", and what does not seem to be part of the research program for developers of self-driving vehicles, is a deep understanding of its place in the world. By this I do not mean, of course, the vehicle's location on the earth's surface, which is given to it down to the centimeter by GPS; I mean a rich representation of the vehicle's own actions and its relations to other vehicles, a rich representation of its goals and its "hopes". This would require the vehicle to have a full episodic memory of thousands of experiences it had had, as well as an episodic projectory (what it would expect to happen in its "life", and what it would hope, and what it would fear), as well as an episodic subjunctory, detailing its thoughts about near misses it had had, and what would most likely have happened had things gone some other way.

Thus, Stanley the Robot Steamer would have to be able to think to itself such hypothetical future thoughts as, "Gee, I wonder if H1 will deliberately swerve out in front of me and prevent me from passing it, or even knock me off the road into the ditch down there! That's what I'd do if I were H1!" Then, moments later, it would have to be able to entertain counterfactual thoughts such as, "Whew! Am I ever glad that H1 wasn't so clever as I feared--or maybe H1 is just not as competitive as I am!"

An article in Wired magazine described the near-panic in the Stanford development team as the desert challenge was drawing perilously near and they realized something was still very much lacking. It casually stated, "They needed the algorithmic equivalent of self-awareness", and it then proceeded to say that soon they had indeed achieved this goal (it took them all of three months of work!). Once again, when all due hat-tips have been made toward the team's great achievement, one still has to realize that there is nothing going on inside Stanley that merits being labeled by the highly loaded, highly anthropomorphic term "self-awareness".

The feedback loop inside Stanley's computational machinery is good enough to guide it down a long dusty road punctuated by potholes and lined with scraggly saguaros and tumbleweed plants. I salute it! But if one has set one's sights not just on driving but on thinking and consciousness, then Stanley's feedback loop is not strange enough--not anywhere close. Humanity still has a long ways to go before it will collectively have wrought an artificial "I".