Ghost in the Black Box

Matthew Hutson (Matthew_Hutson@brown.edu)

In Blade Runner, Ghost in the Shell, 2001: A Space Odyssey, and other popular science-fiction movies, we encounter beings that challenge our definitions of life and of consciousness. Human characters in these movies interact with other, human-engineered characters, and the division between the two fades. This blurring accentuates the fears, desires, and questions many of us have about the future of our relationships with artificial intelligence and with technology in general.

In Blade Runner, Deckard (Harrison Ford) comes across Replicants, human-engineered cyborgs who look exactly like humans. They can act like humans, talk like humans, kiss like humans, and in a few cases carry on an extended conversation like humans. Rachel (Sean Young), a Replicant, survives an interview consisting of 100 questions asked by Deckard before he figures her out. (For a transcription and discussion of related dialogue, see Dan Parke's essay, "Dire Need for Definition.") Deckard's job as a "blade runner" is to track down Replicants and kill them, because they are seen as dangerous, but he can't bring himself to shoot Rachel. She's too human.

Ghost in the Shell shows more cyborgs that look and act just like humans. Other characters cannot tell one apart from a real human until they put a bullet through the cyborg's head and watch the circuitry fly. The cyborgs, just as in Blade Runner, have even been programmed with appropriated memories to increase the realism. Listening to a trash collector talk about his wife and kids would not set off a robot-detector alarm in my head, but that only shows how thoroughly we might expect cyborgs to permeate society. (For a discussion of the practical barriers standing in the way of a conscious computer, see my essay on quantum consciousness.)

What signs do we look for in our interactions with a possibly conscious being? We are still at a stage where even relative versatility in language processing is impressive, whether in computers or in primates, but computers are hurtling forward in their ability to converse flexibly, and eventually they may simulate (or actuate) higher-level processes such as emotions. Thus we face a theoretical dilemma: how can we tell simulated comprehension, emotion, or consciousness from actual comprehension, emotion, or consciousness?

This raises an interesting possibility: perhaps simulated consciousness and actual consciousness are the same thing. The word "conscious," like any other word, is part of a common language many people share. We use it to describe a shared experience (not a shared experience of consciousness, but a shared experience of what the word "conscious" means). This is just like the word "blue": my individual experience of the color blue may be different from yours, but we use the word "blue" to describe the same things. Thus we do not use "blue" to mean [our respective individual experiences of blue], but rather to mean [a quality that makes those who use the word "blue" describe something as "blue"]. This is circular, I know, but isn't much of language based upon this process? Indeed, isn't much of culture the same type of feedback loop? So perhaps to be conscious means to seem conscious.

Let me put this in real-world terms. The mind is, so to speak, a black box: we input a question, and it outputs a response, but we have no idea what goes on inside. Imagine two real black boxes sitting next to each other, each large enough to hold a human. One contains a human, the other a computer. After carrying on an extensive conversation with each of them (by writing questions on a slip of paper and putting them in a slot, then receiving another slip with a typewritten response), you still cannot figure out which one is the human. Similar experiments, versions of Alan Turing's "imitation game," have already been conducted, and the results have sometimes been ambiguous. Standing outside the two black boxes, what reason do you have to say that one is conscious and the other is not?
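As a minimal sketch of that judging procedure, here is one way it might look in Python. Everything in it is hypothetical illustration rather than a real experiment: answer_as_human, answer_as_machine, and coin_flip_judge are placeholder names standing in for the two boxes and the interrogator, not anything from the films or from actual contests.

    import random

    # Hypothetical stand-ins for the two black boxes. In a real test each
    # box would accept a written question through a slot and return a
    # typewritten answer; here they are placeholders the reader can swap
    # out for anything.
    def answer_as_human(question):
        return "I'd have to think about that."

    def answer_as_machine(question):
        return "I'd have to think about that."

    def imitation_game(questions, judge):
        # Randomly assign the human and the machine to boxes A and B,
        # collect a transcript from each box, and ask the judge to guess
        # which box holds the human. Returns True on a correct guess.
        boxes = [answer_as_human, answer_as_machine]
        random.shuffle(boxes)
        transcripts = {label: [(q, box(q)) for q in questions]
                       for label, box in zip("AB", boxes)}
        guess = judge(transcripts)  # the judge returns "A" or "B"
        truth = "A" if boxes[0] is answer_as_human else "B"
        return guess == truth

    # A judge who cannot tell the boxes apart can only guess, scoring
    # about 50 percent over many rounds.
    coin_flip_judge = lambda transcripts: random.choice("AB")
    rounds = 1000
    correct = sum(imitation_game(["Do you dream?"], coin_flip_judge)
                  for _ in range(rounds))
    print(correct / rounds)  # roughly 0.5

The design choice worth noticing is that the judge sees only transcripts. The protocol deliberately hides everything except behavior, which is exactly the black-box point: if no judging strategy can beat chance, the machine seems conscious by every test the slot in the box allows.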

Computers have been programmed to use qualitative words, and primates have been trained to use sign language. The difficult task is deciding whether an ape is forming abstract relationships in its head or simply repeating behavioral responses for a reward. At one point in 2001, HAL says that he's scared, and we meet the same dilemma. Is HAL frightened, does he merely think he's frightened, or is he untruthfully saying that he's frightened?

The second option above is one not often considered in these matters, but Blade Runner and Ghost in the Shell present characters who are genuinely confused about their identity (or at least simulate genuine confusion, if there is, in fact, a difference). Implanted with false memories, they are never told of their unnatural creation. They do not doubt their own histories until they see direct evidence that their private memories have been forged.

This reminds me of a Storyspace text in which I explained that madness is invisible. Do "insane" people even realize that they're insane? Do autistic people sit around and think to themselves, "Man, this sucks. I hate being autistic!"? If we can't tell whether these cyborgs are conscious, what makes us think that they can tell? This joke illustrates the principle:

A man goes to a psychologist and says, "Doc, I've got a real problem. I can't stop thinking about sex."

The Psychologist says, "Well let's see what we can find out," and pulls out his ink blots. "What is this a picture of?" he asks.

The man turns the picture upside down, then turns it around, and states, "That's a man and a woman on a bed making love."

The Psychologist says, "very interesting," and shows the next picture. "And what is this a picture of?"

"That's a man and a woman on a bed making love."

The psychologist tries again with the third ink blot.

"That's a man and a woman on a bed making love."

The psychologist states, "Well, yes, you do seem to be obsessed with sex."

"Me!?" demands the patient. "You're the one who keeps showing me the dirty pictures!"

There's another issue that plays a role here, particularly in Blade Runner: humans have a habit of anthropomorphizing things. If a computer shows even the slightest sign of character, we can't help but attach a personality to the machine. We see this in cars, too, which don't even approach intelligence. As Lora Schwartz writes in a discussion of sci-fi movies, "When a character acts human and looks human, displays emotion, has fallibilities as well as strengths, the audience empathizes with that person. The audience seems to consider human anyone who reminds them of themselves."

Simon Penny, an artist and engineer, came to speak at Brown a few weeks ago and presented some of the robotic projects he has worked on. While showing a video of people interacting with one of his odd-looking robots, he told us that he had tried to avoid making the robot look like any existing animal, because empathetic interpolation clouds our perceptions of the machine. Despite this effort, people still played with it as if it were alive. I turned to the person next to me and whispered, "Oh, it's so cute!" because it was, well, really cute.

Harrison Ford's blade runner meets Sean Young's Replicant in Blade Runner, and we can tell that he wants to believe she's humanlike, no doubt because she's also really hot. One could argue that this is a misogynist relationship, as Deckard fills in the blanks of her personality while taking her hot body. She's like an advanced, motorized inflatable doll.

Baudrillard, however, would say that this makes no difference, as a simulacrum in the form of a projected personality can be more real than an existing personality. I have experienced something like this firsthand. Rather than feigning consciousness, I feign coolness. Years ago, I was a spazz, and I became a natural attractor of bullies. I usually had a few very close friends and a wide array of stronger, bigger peers who loved to hate me. As I matured, I gained more friends and fewer enemies, but I still had the conditioned reflex of passivity deep inside. I showed more signs of confidence and hipness in my interactions with others, despite my internal self-doubt, and apparently people bought into my act. To make a long story short, I am still often surprised at my ability to fool people into thinking that I'm cool. In conversations with friends on the football team, for example, I must ignore the trained impulse to take a subservient role, and I tell myself, "Just be yourself and act natural. If you show any sign of frail confidence, they'll find out that you're a nerd, and it will be over." A friend asked me one day how being myself and somehow fooling everyone into thinking that I'm cool is any different from actually being cool. Good point.

