Sunday, March 17, 2024

Robots

Boston Dynamics robot

I am still interested in artificial intelligence, although there have been notable failures that were recently publicized. Some of the large language models (LLMs) tend to bloviate, hallucinate, and outright make up facts when they can’t confirm a reference. (Compliance with user’s request first, accuracy second.) And some of the art programs can’t get human hands right or, in a more embarrassing story, one program was secretly instructed to offer mixed-race presentations of historical figures like the American Founding Fathers or soldiers of the Third Reich. (Compliance with programming rules first, accuracy second.) But these are easily—or eventually—corrected mistakes. The game is in early innings these days.

I have more hope for business applications, like IBM’s Watson Analytics, which will sift through vast quantities of data—with an attention span and detail focus of which no human being is capable—looking for trends and anomalies. And I recently heard that one law firm has indeed used its LLM to write drafts of legal briefs and contracts—normally the work of junior associates—with such success that the computer output only needed a quick review and editing by a senior associate. That law firm expects to need fewer associates in coming years—which is, overall, going to be bad for beginning lawyers. But I digress …

So far, all of these artificial intelligence faux pas have had minimal effect on human beings, and users are now forewarned to watch out for them. Everything, so far, is on screens and in output files, and you open and use them at your own risk. But what happens when someone begins applying artificial intelligence to robots, machines that can move, act, and make their mistakes in the real world?

It turns out, as I read in a recent issue of Scientific American, that a firm is already doing this. A company called Levatas in Florida is adding artificial intelligence to other companies’ existing robots, adapting them for inspection and security work. The modified machines can recognize and act on human speech—or at least certain words—and decide for themselves what suspicious activity they should investigate. Right now, Levatas’s enhanced robots are only available for corporate use in controlled settings such as factories and warehouses. They are not out on the street or available for private purchase. So, their potential for interaction with human beings is limited.

Good!

Back in 1950, when this was all a science fiction dream, Isaac Asimov wrote I, Robot, a collection of short stories about machines with human-scale bodies coupled with human-scale reasoning. He formulated the Three Laws of Robotics, ingrained in every machine, which were supposed to keep the machines safe and dependable around people:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

That seems like a pretty neat and complete set of rules—although I wonder about them in real life. Morality and evaluation of the consequences of action are more than a simple application of the Ten Commandments. For example, if your robot guard turns on and kills my flesh-and-blood dog, to whom I am emotionally attached, does that cause me psychological harm, in conflict with the First Law? A robot—even one with a fast-acting positronic brain—might labor for milliseconds, or even freeze up, evaluating that one, or the hundred or thousand permutations of consequences that follow from any common action.
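
To see how slippery that evaluation gets, imagine writing the Three Laws down as a priority-ordered check. The sketch below is purely my own thought experiment, not anything Asimov specified or Levatas ships; every name in it is hypothetical, and the predicted_human_harm function, where all the real moral judgment lives, is left as a stub.

```python
# Illustrative only: the Three Laws as a priority-ordered permission check.
# All names here are hypothetical; the hard part is the harm assessment,
# which this sketch deliberately leaves as a stub.

from dataclasses import dataclass


@dataclass
class Action:
    description: str
    ordered_by_human: bool = False   # relevant to the Second Law
    endangers_robot: bool = False    # relevant to the Third Law


def predicted_human_harm(action: Action) -> float:
    """Stub: return a score from 0.0 (harmless) to 1.0 (certain injury).

    Does killing a beloved dog count as psychological harm to its owner?
    That judgment is exactly what this placeholder papers over.
    """
    return 0.0


def permitted(action: Action, harm_threshold: float = 0.0) -> bool:
    # First Law: no action that harms a human being gets through.
    if predicted_human_harm(action) > harm_threshold:
        return False
    # Second Law: obey human orders, but only once the First Law is satisfied.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.endangers_robot:
        return False
    return True
```

The ordering itself is trivial to code; everything that matters, the dog, the owner's grief, the thousand permutations of consequence, hides inside that one stubbed-out function.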

But still, the Three Laws are a place to start. Something like them—and enforced more rigorously than the rule set that gave Google customers a George Washington in blackface—will be needed as soon as large language models and their kin are driving machines that can apply force and pressure in the real world.

But then, what happens when the robot night watchman (“night watchmachine”?) lays hands on an obvious intruder or thief, and the miscreant shouts, “Let go of me! Leave me alone!” Would that conflict with the Second Law?

I think there’s a whole lot of work to be done here. Robot tort lawyers, anyone?
