A recent article from Science Alert1 suggests that animals must have some level of self-awareness in order to project different choices into a future state and make decisions. For example, rats in a maze will pause and consider different possible pathways. Without some sense of self, the animal would not be able to distinguish a projected choice from the memory of an actual experience. This suggests that awareness begins with a sense of the difference between the knower and the known. One might also consider this sense the starting point, the far end of the spectrum, of self-awareness.
Finding that point is useful to writers and thinkers, like me, who ponder the human condition and the issue of creating a mechanical intelligence that has human-like capability.2 Certainly, artificial intelligence is more than simply a test of raw computing power or problem solving. Awareness and reflection—and the self-doubt and second-guessing to which they can lead—would only get in the way of straight-line computational speed and accuracy. But when we look for a mimic of the human mind, we look for that elusive quality of self-awareness.
This is more than the ability of a machine to create verbal responses that a human being cannot distinguish from those of another living intelligence—the Turing test. We already have programs that can do that, and they are not particularly intelligent and certainly not self-aware.3 For a program to become intelligent on the terms we mean by “artificial intelligence,” it would have to be able to distinguish among past experiences, current operations and future projections. It would have to be anchored in time, as human beings are.
We recently learned about a friend’s cat who had lost the use of its eye in a fight and would soon lose the eyeball as well. My immediate reaction was that most animals adapt to these injuries rather well because they are not burdened by the existential angst of considering their maimed and reduced state: “Oh, no! I’ll never see properly again! And, oh, I look so hideous!” If a cat that loses its depth perception because it has lost an eye even thinks of its handicap at all, it would be with a sense of befuddlement that arises only when the loss becomes immediately apparent: “Gee, it used to be easier to gauge the height of that cabinet. But this time I fell short. What gives?” This is akin to the puzzlement of an animal that’s used to going in and out of a “doggie door” that suddenly becomes blocked: “Gee, there used to be an opening here!”
Animals may be able—like the rats in the Science Alert article—to distinguish between present decisions and past experiences, but they don’t live in a conscious web of time. They don’t live in much of a past, other than to know that the cabinet used to be of a more accurate height or the doggie door used to swing open. Perhaps a dog, seeing its owner start to fill the tub, can recall the unpleasant experience of bath time, or seeing the owner put on his or her coat and reach for the leash, can anticipate the joy of a walk. But they don’t delve into and reflect on past circumstances and actions, and so they can have few regrets … although a dog can experience a limited sense of shame when the owner catches it in known bad behavior and will live with that sense of self-doubt or self-loathing for a few minutes—or until something distracts the dog, like a good shake. Similarly, the animal does not have much, if any, sense of the future. The anxiety a dog displays when its owner leaves the house without taking the leash, and the dog with it, is the generalized anxiety of separation from the adopted pack leader, not an internal estimation of the hours or days to be spent alone by the window or the possibility of absolute abandonment.
This opens a new thought, however. How could our dogs, cats, horses, and other pets love us—which I’m sure they do—without a primal sense of self-awareness? The affection they feel is more than just the instinctive following and responding to a known food source upon which the animal has come to depend. The relationship between pet and human includes trust, play routines, demonstrated affection, and emotional involvement—all of which require some sense of self. The animal can distinguish between its situation and that of another independent being. It forms a bond that reflects its own internal emotional state, an awareness of the other’s emotional state, and a sense of the difference between lover and loved one. This is analogous to understanding the difference between the knower and the known.
The IBM computer program “Watson” could compete at the television game Jeopardy! because it could explore the nuances of language, word meaning, historical relationships, causality, and human concepts such as puns and word-play. It had command lines that would drive it forward through its comparative processes. And it had weighting factors that would help it decide when it was detecting a relationship based on logic, historical connection, or word association. It showed incredible skill at answering the riddles on the TV show, and, if one considers the company’s current product offering called “Watson Analytics®,” these same techniques are now being used to mine complex data and answer human-inspired questions without specific programming or resort to a structured query language.
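Such a weighting scheme can be gestured at in a few lines of code. This is only an illustrative sketch; the evidence channels, weights, and candidate answers are invented here, not IBM’s actual parameters:

```python
# Sketch of evidence weighting: each candidate answer accumulates scores
# from several evidence channels, and fixed weights decide how much each
# channel counts toward the final confidence. All values are assumptions.

EVIDENCE_WEIGHTS = {
    "logic": 0.5,
    "historical_connection": 0.3,
    "word_association": 0.2,
}

def score_candidate(evidence: dict[str, float]) -> float:
    """Combine per-channel evidence scores (0.0 to 1.0) into one confidence."""
    return sum(EVIDENCE_WEIGHTS[channel] * s for channel, s in evidence.items())

def best_answer(candidates: dict[str, dict[str, float]]) -> str:
    """Pick the candidate answer with the highest weighted confidence."""
    return max(candidates, key=lambda c: score_candidate(candidates[c]))

answers = {
    "Chicago": {"logic": 0.2, "historical_connection": 0.9, "word_association": 0.4},
    "Toronto": {"logic": 0.3, "historical_connection": 0.4, "word_association": 0.6},
}
print(best_answer(answers))  # prints "Chicago" (0.45 vs. 0.39)
```

The point of the sketch is only that “deciding” here is arithmetic over weights, with no awareness anywhere in the loop.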
But is the Watson machine aware? Does it know that it’s a Jeopardy! champion? And if someone were to tell the program of this fact, would it distinguish between its new status and any other fact in its capacious database? That is, does Watson know what Watson is? Can it know this in the same way that a dog knows it’s different from a chair or another dog or a new human being, and so can place itself in a mentally or emotionally projected relationship of trust or fear, pack dominance or dependence, fight or play, in relation to other animate beings? … Right now, I don’t think so, but at what point might Watson cross over to thinking about itself as a unique identity?
We humans live in a web of time. We have a past, present, and future—and invest a lot of our brain power and emotional stability in examining ourselves in all three temporal domains. We experience exquisite existential questions involving complex tenses represented by the words “might have,” “must have,” “could have,” and “should have” in dealing with the past; “may,” “must,” “can,” and “shall” in the present; and “might,” “must,” “could,” and “should” in the future. We can see ourselves in the past in relation to the deeper past when we employ the pluperfect tense (as in “I had been tried for murder”), and we can anticipate a possible but not certain condition with the future perfect (as in “I will have been tried for murder”). We swim in time as a fish swims in water, and all of this relates to a being we know, understand, and study as ourselves, our own condition, our cherished relationships and advantages, our perceived qualities and shortcomings, our known failings and our expected about-to-fails. We can also extend this awareness to situations and people outside ourselves and imaginatively outside our experience: other people’s strengths and weaknesses, what they did in the past, what they will think and do in the future, and how we did and will relate to them.
Can a computer think like this? Most programs and most processors exist in the now. Ask them to solve for x, and they will respond “x equals three.” Ask again two minutes later, and you get the same result, taking all the steps to arrive at the solution as before. Only if some human programmer has added the capability to assemble a database of previous problems and their solutions, plus a loop in real time that asks if the problem has been encountered in the past, will the program consider its own history. I don’t know if Watson has such a database and loop, but it might save a lot of time—particularly if the database preserved not just identical problems but parsed them into patterns of similar problems and their possibly similar solutions. “Oh, I’ve seen this one before! I know how to do it!”
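The database-plus-loop idea this paragraph imagines is essentially memoization. A minimal sketch, using a toy problem form and invented names rather than anything Watson actually does, might look like this:

```python
# "Database of previous problems plus a loop that asks if the problem has
# been encountered before." The problem form (a*x + b = c) is illustrative.

solution_db: dict[str, float] = {}   # pattern key -> remembered solution

def pattern_of(a: float, b: float, c: float) -> str:
    """Reduce a linear equation a*x + b = c to a reusable lookup key."""
    return f"linear:{a}:{b}:{c}"

def solve(a: float, b: float, c: float) -> float:
    """Solve a*x + b = c, consulting past experience first."""
    key = pattern_of(a, b, c)
    if key in solution_db:           # "Oh, I've seen this one before!"
        return solution_db[key]
    x = (c - b) / a                  # do the work the slow way
    solution_db[key] = x             # remember it for next time
    return x

print(solve(2, 1, 7))  # computed fresh: 3.0
print(solve(2, 1, 7))  # asked again two minutes later: recalled, not recomputed
```

A fuller version would normalize the key so that structurally similar problems map to the same pattern, which is the “parsed them into patterns” refinement the paragraph suggests.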
The next step would be to program the computer to use its excess processing capacity for posing its own problems, possibly leveraging past experience, solving them in advance, and entering the patterns into another database to be consulted in potential future encounters.4 All of this could be done at about as much cost in processing power as operating an elaborate graphical user interface. But would the computer then have a sense of time approaching human-scale awareness? Probably not. We would still be dealing with a Turing-type simulation of awareness.
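That idle-time step can also be sketched, again under the assumption of a toy problem form; the names and the perturbation strategy are invented for illustration:

```python
import random

# Speculative sketch of "excess processing capacity": in spare cycles the
# program invents variations on problems it has already seen, solves them
# in advance, and caches the answers for possible future encounters.

precomputed: dict[tuple[int, int, int], float] = {}

def solve_linear(a: int, b: int, c: int) -> float:
    """Solve a*x + b = c for x."""
    return (c - b) / a

def idle_precompute(seen: list[tuple[int, int, int]], budget: int) -> None:
    """Pose perturbed versions of past problems and cache their solutions."""
    rng = random.Random(0)  # seeded for repeatability in this sketch
    for _ in range(budget):
        a, b, c = rng.choice(seen)
        variant = (a, b + rng.randint(-2, 2), c + rng.randint(-2, 2))
        if variant[0] != 0 and variant not in precomputed:
            precomputed[variant] = solve_linear(*variant)

idle_precompute(seen=[(2, 1, 7), (3, 0, 9)], budget=50)
print(len(precomputed), "answers ready before anyone asks")
```

Note that nothing in this loop requires the program to know it is doing this, which is the paragraph’s point: anticipation as mechanism, not as awareness.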
So, are we humans at the other end of the spectrum of self-awareness? The rat, the cat, and the dog are just beginning to perceive their own states as knower separated from the known. More advanced species go further: dolphins can begin to identify themselves in a mirror; apes can recognize words as relational objects, appreciate social relationships and obligations, and communicate new ideas to other members of their troop; and elephants can draw pictures in two dimensions of objects in the three-dimensional world. So are we humans—the only creatures with which we can communicate in complete sentences for the sheer pleasure of this complex intellectual play—the end state of awareness?
I try to think of a more advanced intelligence—and that’s another part of a science fiction writer’s job. It wouldn’t have just better technology or a faster or more complex recall of data. It might become godlike in terms of how human beings imagine their gods: able to perceive past, present, and future as one continuous and reversible flow, because it stands outside of time; able to know all the answers to all the questions, because it invented both knowledge and questions; able to command space and wield infinite power, because it can apply all the possible mathematical formulas and manipulate their consequences in terms of mass and energy. But is this scope of capability actually better than human awareness? Wouldn’t standing outside of time imply an awareness caught in one permanent, eternal now? Wouldn’t absolute knowledge foreclose all possibility of wonder, desire, and choice? Doesn’t complete control of space, mass, and energy suggest an explosion of energy becoming mass reminiscent of the Big Bang? Gods are not superior to human beings because they are fixed in their potential. If they are at an end point, it is an eternal and unchanging stasis. And what fun is there in that?
No, if the rat in the maze is at the delicious beginning of knowing the difference between “I’ve seen this corner before” and “I wonder what’s around that corner,” then we humans are at the beginning—but nowhere near the end—of discerning, defining, deciding, and determining the shape of the maze for ourselves. And that’s a powerful place to be.
1. See Fiona MacDonald, “Humans aren’t the only animals that are self-aware, new study suggests,” Science Alert, June 18, 2015.
2. See, for example, the story line of ME: A Novel of Self-Discovery and certain motifs of the near future in Coming of Age.
3. See Intelligence or Consciousness? from February 8, 2015.
4. Something like this is part of the SIPRE approach to defensive driving in anticipation of the React step. See SIPRE as a Way of Life from March 13, 2011.