I wrote last week about artificial intelligence and its applications in business information and communications: that the world would speed up almost immeasurably. There is, of course, a further danger: that humans would in many cases forget how to do these tasks and so become obsolete themselves.
Yes, we certainly believe that whatever instruments we create, we will still be able to command them. And so far, that is the case. But the “singularity” some thinkers are proposing suggests that eventually the machines will be able to create themselves—and what then?
We already have computer-aided software engineering (CASE), in which complex programming is pre-written in various task-oriented modules, segments of code designed for specific purposes. These modules perform routine operations found in all sorts of programs: sorting data, keeping time, establishing input and output formats, and so on. Programmers no longer need to write every line of their code in the same way that I am writing this text, by pushing down individual keys for every word and sentence. Instead, programmers now decide the process steps they want to invoke, and the CASE machine assembles the code. It’s as if I could specify the paragraphs needed to capture my ideas, and the software assembler did the rest. And isn’t this something like how the large language models (LLMs) behind applications like ChatGPT operate?
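For the programmers among my readers, a minimal sketch in Python of that module-assembly idea follows. Everything in it is invented for illustration: the module names, the step list, and the record format stand in for no real CASE product. The "programmer" merely names the process steps; the assembler chains pre-written modules into a working program.

```python
# A toy of the CASE idea: pre-written, task-oriented modules that get
# assembled into a program from a list of named process steps. All the
# names and formats below are invented for illustration only.

from datetime import datetime, timezone

def sort_data(records):
    """Routine operation: sort records by their 'id' field."""
    return sorted(records, key=lambda r: r["id"])

def add_timestamp(records):
    """Routine operation: stamp each record with the current UTC time."""
    now = datetime.now(timezone.utc).isoformat()
    return [{**r, "processed_at": now} for r in records]

def format_output(records):
    """Routine operation: render each record as one comma-separated line."""
    return "\n".join(f'{r["id"]}, {r["name"]}, {r["processed_at"]}' for r in records)

MODULES = {"sort": sort_data, "timestamp": add_timestamp, "format": format_output}

def assemble(step_names):
    """The 'CASE machine': chain the named modules into one program."""
    steps = [MODULES[name] for name in step_names]
    def program(data):
        for step in steps:
            data = step(data)
        return data
    return program

# The programmer decides the process steps; the assembler does the rest.
report = assemble(["sort", "timestamp", "format"])
print(report([{"id": 2, "name": "beta"}, {"id": 1, "name": "alpha"}]))
```

Reorder or swap the step names and the same parts assemble into a different program, which is the whole appeal.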
My concern—and that of many others involved with this “singularity”—is what happens when the machines are able to create themselves. What if they take control of CASE software, with the machines themselves determining the process steps using large language models? What if they can design their own chips, using graphics capability and rolling random numbers to try out new designs in silico before committing them to physical production in a chip foundry? What if they control those foundries using embedded operations software? What if they distribute those chips into networked systems and stand-alone machines through their own supply chains? … Well, what inputs will the humans have then?
Similarly, in the examples I noted last week, what happens when business and communications and even legal processes become fully automated? When the computer in your law office writes your court brief and then, for efficiency’s sake, submits it to a judicial intelligence for evaluation against a competing law firm’s automatic challenge as defendant or plaintiff, what inputs will the humans have? Sure, for a while, it will be human beings who have suffered the initial grievance—murder, rape, injury, breach of contract—and submitted their complaints. But eventually, the finding that Party A has suffered from the actions of Party B will be left up to the machines, which will cite issues raised by their own actions, file suit on their own behalf, and resolve it … all in about fifteen seconds.
When the machines are writing contracts with each other for production, selecting shipping routes and carriers, driving the trains and trucks that deliver the products, stocking the warehouses, and distributing the goods, all against their own predictions of supply and demand for the next quarter, the next year, or even the next ten years, what inputs will the humans have? It will be much faster to let the machines determine where the actual people live and what they need and want, and make decisions for them accordingly, so that all the human population needs to do is express its desires, individually and as convenient, to the big computer in the cloud.
And once humans are content to let the machines do the work, make the decisions, plan the outputs, and make things happen … will the human beings even remember how?
That’s what some of us fear. Not that the machines will do the work, but that human beings will find it so convenient that we will forget how to take care of ourselves. Do you think, when you put in a search request to Google, or ask Siri or Alexa a question, that some person somewhere goes off and looks up the answer? Of course not. The machine interprets your written or spoken words, checks its own interpretation of them against context—and sometimes against the list of possible responses paid for by interested third parties—and produces a result. In such a world, how many of us will still use—or, eventually, be able to use—an encyclopedia, a reference book, or the library’s card catalog to find which book has our answer? For starters, how many of us would want to? But eventually, finding references will be a lost art. And at what point will people even remember that the card catalog is arranged alphabetically—or was it numerically, according to the Dewey decimal system?—and know what letter comes after “K”?
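If you're curious what that lookup amounts to, a deliberately crude sketch follows. Everything in it is my own assumption for illustration: the keyword scoring, the paid-placement boost, and the sample entries. Real search engines and voice assistants are vastly more elaborate, but the shape is the same, and no person is in the loop at any step.

```python
# A crude sketch of the answer lookup described above. The scoring rule,
# the "sponsored" boost, and every name here are invented for
# illustration; real assistants are far more elaborate.

import string

def interpret(query):
    """Reduce the written or spoken query to a bag of key terms."""
    cleaned = query.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def answer(query, organic, sponsored):
    """Pick the best-scoring response, letting paid entries outbid organic ones."""
    terms = interpret(query)
    def score(entry):
        overlap = len(terms & set(entry["keywords"]))
        return overlap + (1 if entry in sponsored else 0)  # paid placement boost
    return max(organic + sponsored, key=score)["text"]

organic = [{"keywords": ["capital", "france"],
            "text": "Paris is the capital of France."}]
sponsored = [{"keywords": ["capital", "france"],
              "text": "Visit France! Book now with AcmeTravel."}]

print(answer("What is the capital of France?", organic, sponsored))
# Prints the sponsored line: it matches just as well and carries the boost.
```

The sponsored entry wins not because it is better, but because it paid for the boost.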
Frank Herbert recognized this problem in the Dune novels. In the prehistory to the series, which begins in the year 10,191 A.G., he envisions a time some ten thousand years earlier when computers and robots had become so common and practical that human beings needed to do almost nothing for themselves. People became dependent and helpless, and the species almost died out. Only a war against the machines, the Butlerian Jihad, ended the process, under the maxim “Thou shalt not make a machine in the likeness of a human mind.” That commandment attained religious force and shaped the succeeding cultures. Only the simplest clockwork mechanisms were then allowed to control machines.
In the Dune stories, the Butlerian Jihad gave rise to the Great Schools period. Humans were taught again how to use their minds and bodies and expand their skills. Complex computations, projections, and planning were performed by the human computers, the Mentats. Physical skills, nerve-muscle training, and psychological perception were the province of the female society of the Bene Gesserit, along with secret controls on human breeding. Scientific discovery and manipulation, often without concern for conventional morals or wisdom, were taken over by the Bene Tleilax. And interstellar navigation was controlled by the Spacing Guild.
My point is not that we should follow any of this as an example. But we should be aware of the effect that generations of human evolution have built into our minds. We have big brains because we had to struggle to survive and prosper in a hostile world. Human beings were never meant to be handed everything we needed without some measure of effort on our part. There never was a Golden Age or Paradise. Without challenge we do not grow—worse, without challenge we wilt and die. Humans are meant to strive, to fight, to look ahead, and to plan our own futures. As one of Herbert’s characters said, echoing Matthew 7:13, “The safe, sure path leads ever downward to destruction.”
That is the singularity that I fear: when machines become so sophisticated, self-replicating, and eventually dominating that they take all the trouble out of human life. It’s not that they will hate us, fight us, and eliminate us with violence, as in the Terminator movies. But instead, they will serve us, coddle us, and smother us with easy living, until we no longer have a purpose upon the Earth.
Go in strength, my friends.