People seem to be afraid of “artificial intelligence”1—but is it machine intelligence or machine consciousness that we fear? Because we already have examples of several kinds of intelligence.
For example, the computer program or system called “Watson” can emulate a human brain’s capability of assembling clues and storing information on a variety of levels—word association, conceptual similarity, sensory similarity—to play a mean game of Jeopardy. Watson is remarkably intelligent, but no one is claiming that the machine can think in the sense of being conscious. For another example, the artificial helper Siri in your smartphone is almost able to pass the Turing test2—if you’re willing to believe you’re talking to the proverbial “dumb blonde”—but Siri is not particularly intelligent, nor was she ever meant to be conscious.
Intelligence is a spectrum. It measures an organism’s ability to perceive, interpret, decide, and act. And this process can be graded on a curve.
Consider the amoeba. It can perceive and identify the chemical trail of a potential bacterial food source, follow it, and consume it. The amoeba doesn’t make a decision about whether or not to follow the trail. It doesn’t decide whether or not it’s hungry. The amoeba’s choice of food and the decision to hunt it down are determined solely by chemical receptors built into the organism’s cell membrane.3 The amoeba’s hunting strategy is the most basic form of stimulus-response mechanism.
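For readers who like to see the idea in code, the amoeba’s whole hunting “strategy” can be caricatured as a sense-then-act loop with no deliberation anywhere in it. The sketch below is purely illustrative; the function names and numbers are mine, not a model of real chemotaxis.

```python
# A deliberately crude caricature of a stimulus-response organism (illustrative
# only; the names and numbers are invented, not taken from any biological model).

def receptor_reading(position, chemical_field):
    """Which neighboring position smells more like food? Returns -1, 0, or +1."""
    here = chemical_field(position)
    if chemical_field(position + 1) > here:
        return +1   # stronger signal ahead: move toward it
    if chemical_field(position - 1) > here:
        return -1   # stronger signal behind: move that way
    return 0        # no gradient detected: stay put

def amoeba_step(position, chemical_field):
    """One tick of stimulus-response: sense, then act. No decision is made."""
    return position + receptor_reading(position, chemical_field)

# A food source at x = 10 leaking a simple linear chemical gradient.
field = lambda x: max(0, 10 - abs(10 - x))

position = 0
for _ in range(15):
    position = amoeba_step(position, field)
print(position)   # the "amoeba" arrives at 10 without ever having decided anything
```

The point of the toy is that everything happens at the level of the receptor; nothing in the loop corresponds to hunger, preference, or choice.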
You wouldn’t call an amoeba smart, except in comparison to the bacteria it hunts. Bacteria are opportunists whose survival strategy is that of flotsam: multiply like hell and hope some of your daughter cells land on a food-like substance. If they land on barren ground, they die. Or, if the substance isn’t all that food-like but has some potential for nourishment, hope that maybe some future generation will evolve to digest it. This level of intelligence gives new meaning to the term “passive aggression.”4
With multi-cellular organization came multi-tasking. This new kind of creature developed about 500 million years ago, during the Cambrian explosion, probably by diversification of cell types within colonies of single-celled organisms. As some cells took on specialized roles of perception, such as sensing light and chemicals, while others took over the functions of digestion and reproduction, the organism became more efficient. It also needed an executive function, at first to communicate between these activities and ultimately to coordinate and control them.
An ant can see a leaf with its compound eyes, approach it on six functionally coordinated limbs, and cut it with hinged jaws. Moreover, the ant can evaluate a number of nearby leaves and make a selection for right size, weight, and tastiness. It’s still a question whether an ant can see or sense food and choose not to take it.5 Certainly, the ant has a built-in “hierarchy of needs,” whereby attack by a hostile species or imminent danger of, say, drowning in a raindrop will override its duty to forage. How much free will the ant has to decide “Fight first, forage later” or even, “Kick back and take the day off” is a matter of debate and subject to the human tendency to anthropomorphize other species. But it’s clear that insects can learn, remember, and communicate. Bees can find a field of flowers, remember its location, fly back to the hive, and communicate to other bees the direction and distance to this potential food supply. That’s a pretty sophisticated stimulus-response mechanism!6 These activities and capabilities are shared by many animals, human beings included.
On the spectrum of intelligence that runs from amoebas to humans, dogs are clearly somewhere in the middle, but tending toward the human end of the spectrum. Dogs can coordinate their activities through communication and even form social relationships and bond with one another on the basis of loyalty and affection. Within these groups they develop expectations, engage in disputes about hierarchy, and then may either submit or choose to leave the pack, depending on their predetermined natures and their accustomed status as either alpha or beta individuals. In isolation, a dog can make its own decisions about liking and distaste, trust and distrust, safety and danger. Dogs raise their young through the shared responsibilities of a family subgroup: mothers nurse while fathers hunt. They can choose to alter their territorial behavior, such as by migrating with a herd of prey. And they can develop trusting relationships with other species, such as by becoming domesticated and forming a pseudo-pack with human beings. If an alien spaceship landed on a planet whose highest life-form was the wolf pack, the aliens would have to conclude that they had discovered intelligent life.
But the question of free will still remains. Can an ant or bee decide to subvert the social order and challenge the colony’s queen? Can it decide to leave the hive after a dispute or in order to find a better life? Can the insect override its instinctual—perhaps even hard-wired—drives to forage, fight invaders, or serve its hierarchical position because other members of the colony have abused it or hurt its feelings? Obviously not. But dogs, cattle, and other social animals can make these choices, although perhaps not willingly or eagerly, and usually only under strong compulsion or in response to immediate need. Humans, on the other hand, practically live in this meta-world of individual choices, personal feelings and preferences, and divided allegiances.
Now we come upon the issue of consciousness. Unlike intelligence, which seems to be a spectrum from simple stimulus-response mechanisms to complex, multi-valued reasoning, consciousness would appear to be a step function. An organism either has it or it doesn’t, though once present its awareness may come in varying degrees.
If you obstruct an ant or bee in its pursuit of a leaf or flower, it will persist, repeatedly bump up against you, and try to get around you. If you keep blocking it successfully, however, the insect will eventually lose interest, turn aside, and pursue some other food source. What it will not do is take your obstruction personally, get angry, and plot revenge against you. If you cut off an insect’s limb, it will register visible distress and feel some analog of physical pain, but it won’t face the dejection of a life in reduced circumstances, deprived of the opportunities available to healthy, six-legged insects. If you kill it, the ant’s or bee’s last sensation will be darkness, with nothing of the existential crisis that death evokes in human beings.
If you frustrate or disappoint a dog, it will register anger or despair.7 If it becomes injured or sick, it not only registers pain but also demonstrates a negative emotional state that any human would recognize as depression. If a canine companion dies, the dog exhibits a sense of loss. When faced with sudden danger and perhaps the imminence of death, the dog exhibits a state we would call fear or even terror. The dog has an awareness of itself and the creatures around it. The dog is conscious of being alive and has some elemental notion of health and sickness, life and death, that an ant or bee does not register.
But is this awareness also self-awareness? It’s a commonplace that dolphins, elephants, some apes, and all human beings will recognize themselves in a mirror. If you place a mark on a dolphin or adorn it with a piece of clothing, the creature will go over to a mirror to check out how it looks. Elephants can use paint and brush to draw pictures of other elephants. These animals understand the difference between themselves and others of their kind. A dog, on the other hand, cannot comprehend a mirror. If it sees itself in reflection, it thinks it has encountered a strange new dog. So while a dog has a first level of consciousness compared to an ant or bee, it is not fully self-aware, which is the second level of consciousness possessed by dolphins, elephants, apes, and humans.8
It is this ability to consider oneself apart from all others, to reflect upon one’s own thoughts and desires, to have hopes and fears and also to think about them, to consider one’s actions and their consequences both for oneself and for one’s group, and to ponder the nature of existence that is at the core of human-scale intelligence. A human being is not just intelligent but also knows he or she is intelligent. A human naturally worries about how his or her mind, nature, opportunities, and chances compare with others, and cares about his or her place in the society or hierarchy. A human being understands relative time states like past, present, and future because the person can see him- or herself in conditions and situations that no longer persist but did once, or that have not yet arrived but toward which all current trends point. A human being is constantly self-referential, considering his or her own life and nature, while a dog is merely happy to be alive, and an ant or bee—or an amoeba—has no conception of the difference between life and any alternative.
Any computer program yet written may emulate, simulate, or even exhibit the qualities we associate with mere intelligence: perception, interpretation, decision, and initiation of action. None so far has reached the scale of internal complexity where dog-like awareness arises, let alone the self-awareness that would allow the machine to consider its own actions in the abstract and make choices based on self-perception, feelings of pride or shame, or anything like a moral stance in the universe.9 But I don’t say that this level of awareness can’t happen, and I believe it may arrive sooner than we think.
And if—or when—it does, then we will no longer be dealing with a machine. Then the question of carbon-based versus silicon-based life form will no longer apply. We will be dealing with a fellow traveler who will behold the infinite with a sense of wonder. We will be dealing with a creature much like ourselves.
1. See the last part of my blog post Hooray for Technology from January 4, 2015, discussing the meme that artificial intelligence will be detrimental to humankind.
2. The Turing test involves a human being asking or writing out any set of questions he or she can think of, passing them blindly to an unseen and unknown subject, and evaluating the subject’s answers. If the human cannot tell whether the respondent is another human being or a machine, and the respondent happens to be a machine, then that machine might as well be—by Turing’s definition—intelligent.
It’s a fascinating problem, but within a couple of decades of Turing proposing the test in his 1950 paper, people were writing computer programs like ELIZA and PARRY that could fool a human interlocutor, even though none of the computers of the time had the capacity to actually approach human-scale thinking. None of the machines available today does, either.
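To give a sense of how little machinery those early programs needed, here is a toy responder in the spirit of ELIZA. It is my own illustrative sketch, not Weizenbaum’s actual script, and it obviously isn’t “thinking” about anything: it just matches surface patterns in what you type and echoes them back.

```python
import re
import random

# A toy ELIZA-style responder: canned reflections keyed to surface patterns.
# Written in the spirit of the 1966 original; the rules below are invented.
RULES = [
    (r"\bI am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bI feel (.*)", ["Tell me more about feeling {0}.", "Do you often feel {0}?"]),
    (r"\bbecause (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
]
DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I am worried about thinking machines"))
# e.g. "Why do you say you are worried about thinking machines?"
```

A patient human can carry on a surprisingly long conversation with something like this, which says more about conversation than it does about intelligence.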
3. See Protein Compass Guides Amoebas Toward Their Prey in Science Daily from October 26, 2008. Interestingly, a similar mechanism drives cells of the human immune system to track down bacterial targets.
4. But compared to a virus, the bacterium is a genius. Viruses can’t even breed or evolve until they happen to land on a host with a working genetic mechanism they can hijack. Viruses are pirate flotsam.
5. That’s a question with some people, too.
6. For more on insect intelligence, see Insect Brains and Animal Intelligence in the online resource Teaching Biology.
7. My wife tells the story of her first dog, a little poodle, and a rainy day when she was pressed for time and had to cut short the dog’s daily walk. She may even have yelled at him when he balked at getting back into the car. Upon returning home, he walked straight into her bedroom, jumped up on the bed, and pooped right in the middle of the bedspread. If that wasn’t a calculated act of revenge, I don’t know what else to call it.
8. However, a dog can be made to feel foolish. My aunt was a poodle breeder, groomer, and competitor at prestigious dog shows, including the Westminster Kennel Club. Once, to compete in a Funniest Dog contest, she clipped one of her white poodles in oddly shaped tufts and dyed them red, green, and blue with food coloring. She always insisted that the dog acted depressed because it knew how foolish it looked.
9. Such as viewing humanity as an enemy and, like Skynet, “deciding our fate in a microsecond.”