So I’ve been following advances in artificial intelligence (AI) ever since I wrote the first of my two novels about a self-aware computer virus. Current computer science isn’t up to actual self-awareness yet; it’s more at the stage of complex routines that can establish a goal, accumulate new information, hold it in long-term memory, and act on and repeatedly modify that information. Such a routine can simulate a living intelligence well enough to pass a Turing test, but it still might not be self-aware.
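To put that in concrete terms, here is a minimal sketch, in Python, of the kind of loop such a routine runs: hold a goal, take in new observations, keep them in a long-term memory, and act on and revise that memory. The names here (GoalAgent, observe, act) are my own inventions for illustration, not any real system’s interface.

```python
# A minimal, illustrative sketch of a goal-directed routine:
# it holds a goal, accumulates observations into long-term memory,
# and repeatedly acts on and revises that memory. All names invented.

class GoalAgent:
    def __init__(self, goal):
        self.goal = goal      # the fixed objective it pursues
        self.memory = []      # long-term store of what it has learned

    def observe(self, fact):
        """Accumulate new information."""
        self.memory.append(fact)

    def act(self):
        """Act on (and modify) what it knows; no self-awareness required."""
        if not self.memory:
            return "explore"  # nothing known yet, so go gather information
        latest = self.memory[-1]
        # Revise memory in light of the goal, then choose a response.
        self.memory = [m for m in self.memory if m != "stale"]
        return f"pursue {self.goal} given {latest}"

agent = GoalAgent("patrol the perimeter")
agent.observe("gate is open")
print(agent.act())  # pursue patrol the perimeter given gate is open
```

Nothing in that loop perceives itself; it only pursues its goal against whatever it has stored.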
My little dog Sally is not self-aware, either. That is, unlike humans, dolphins, and perhaps elephants, she does not perceive herself, and is not conscious of herself, as separate from the reality she inhabits. The test of this is the mirror: when human beings look in a mirror, they recognize themselves, associate the image with what they think of as “me,” and may touch their face or hair to improve that image. If you put a mirror in a dolphin’s pool, then strap a funny hat on or put a mark on the dolphin’s head, the animal will go to the mirror to check out its “own” image. But if my Sally approaches a mirror—we have several floor-length mirrors around the apartment complex—she either pays no attention to the small dog coming toward her or is eager or shy about meeting that dog. For her, the image is “other,” not herself.
And yet, Sally has a definite personality. I was reminded of that this morning. We had gone out through the back of the complex for her early walk, where she had dutifully completed her business. There were other dogs out there, she knew, and she is shy about meeting any dog bigger than herself. When we got back inside, at the elevator lobby, I discovered that the up and down buttons on that floor were not working: we would have to go outside again to an external stairway and climb to the second-floor lobby, where I knew the buttons would work. And at the outside door, Sally balked. She would not go back out again. She didn’t have any more business to do; there were big dogs out there; this was not the morning routine; and because … why? It took a lot of coaxing, tugging on her leash—to the point of pulling her collar up around her ears, which she hates—and encouraging noises on my part, until she relented, wagged her tail, and came along willingly.
Clearly, something in her perception of the situation had changed. She either overcame her fear of whatever dogs were outside, or she decided that I—her buddy, walk coordinator, and the fellow holding the other end of the leash—knew best about the decision to go back outside, or she remembered the last time we went out that way because the elevator buttons didn’t work—although she could hardly understand the concepts of “button” and “work.” Whatever it was, she had a new thought and, after her initial stubbornness, came with me happily.
I’ve been watching the development of agile robots on four legs—dog surrogates—with extensible necks and jaw-like pincers that can open doors and carry objects. I read recently that some of the Boston Dynamics robots have been assigned to patrol parks and warn any humans they see violating social distancing. That’s pretty sophisticated activity. And the four “paws” of the robot dog are better at moving cross-country than wheels would be. That got me to wondering what it would be like to have an artificially intelligent dog instead of my Sally. It would probably be a demon at defined tasks like patrolling the complex perimeter, herding sheep—if I had any—or fetching my slippers. It would bark less and obey more. And it would never, ever leave a puddle in the bay window. Nor would it ever need to be taken for a walk on a leash several times a day in order to do that business.
But a robot dog would still be a machine: purpose-built, goal-oriented, and just a complex of embedded responses. It might be programmed to wag its tail and lick my hand. It might even be taught to use human speech and say endearing things. And it might—if you kind of unfocused your eyes and willingly suspended your disbelief—serve as a therapeutic presence when I was sad or depressed, and be programmed to detect these states and act accordingly. But it would not be “intelligent” in the way a dog is, with its own quirky personality, its own mind about why it will or won’t go through a door, its own reasons for barking its head off at a noise upstairs, its own silly, toothy grin and hind-leg dancing when I come home, and its own need to put its chin on my knee after we climb into bed. It wouldn’t shed—but it wouldn’t be warm, either.
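For contrast, here is a sketch of what “a complex of embedded responses” amounts to in code: a detected state goes in, a canned behavior comes out, and there is nothing like a quirky mind in between. The states and behaviors are invented for illustration only.

```python
# A sketch of "embedded responses": detected state in, canned behavior out.
# States and behaviors are invented for illustration.

RESPONSES = {
    "owner_home":     "wag tail; say 'Welcome back!'",
    "owner_sad":      "approach; rest chin on knee; speak soothing phrase",
    "noise_upstairs": "orient sensors; log event",   # no barking fit here
    "unknown":        "hold position; await instruction",
}

def respond(detected_state: str) -> str:
    """Map a detected state to its programmed behavior; nothing more."""
    return RESPONSES.get(detected_state, RESPONSES["unknown"])

print(respond("owner_sad"))
```

However long that table grows, it is still a table, not a personality.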
I’m no expert on artificial intelligence, although I can fake it in science fiction. But right now, AI expert systems are perfectly acceptable as automated drivers, button sorters, pattern recognizers, data analysts, and yes, four-legged park rangers. We value them in these roles because they function reliably, give predictable answers and known responses, and they never balk at going through a door because … why? If they did have that endearing quirkiness—the tendency to give inexplicable, self-centered, unexpected responses to unusual situations—and occasionally left a puddle of oil on the floor, we would value them less.
However sophisticated their programming, robot dogs would not be our companions. And much as we liked them, we would not love them.