I’m not really an advocate of what some prognosticators call “the singularity.” This is supposed to be the point at which artificial intelligence approaches human cognitive abilities, becomes sentient, and irrevocably changes things for the rest of us. Or, as the first Terminator movie put it, “decided our fate in a microsecond.”
Right now—and in my opinion for the foreseeable future—“artificial intelligence” is a misnomer. That is, it really has nothing to do with what we humans call intelligence, or a generalized capability for dealing with varied information, navigating the complexities of independent life, and weighing the burdens and responsibilities of being a single, self-aware entity. These programs don’t have the general intelligence that some psychologists refer to as the “g-factor,” or simply “g.”
Instead, every application currently sold as artificially intelligent is still a single-purpose platform. Large language models (LLMs)—the sort of AI that can create texts, hold conversations, and respond seemingly intelligently to conversational queries (Alan Turing’s rather limited definition of intelligence)—are simply word-association predictors. They take a string of words and, based on a statistical analysis of more text than any human could read in a lifetime, predict the most likely next word in the string. A human requesting a piece of the model’s “writing” sets the parameters, whether the LLM should create a legal brief or a science fiction story, and supplies the intended content. The rest is just word association.
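To make the word-association idea concrete, here is a toy sketch in Python: a bigram model that counts which word follows which in a sample text, then predicts the most frequent successor. Real LLMs use neural networks trained on vastly larger corpora, not raw counts, and the sample corpus below is invented, but the core task, predicting the next likely word, is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    successors = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        successors[current][nxt] += 1
    return successors

def predict_next(successors, word):
    """Return the most frequent follower of `word` seen in training."""
    candidates = successors.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = "the cat sat on the mat and then the cat ate"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # -> "cat" (follows "the" twice; "mat" only once)
print(predict_next(model, "sat"))  # -> "on"
```

Scale the one sentence up to a library of text, and swap the raw counts for a neural network, and you have the word-association predictor described above.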
But the large language models can’t draw pictures or create videos. That’s another platform, built on another universe of examples, all the existing images in its training database, and driven by rules about perspective, shading, color, and contrast rather than words and synonyms, grammatical rules, and systems of punctuation. And, in similar fashion, the analytic platforms designed to run complicated business operations like fleet maintenance, product and material inventories, accounting, and financing all have their own databases and their own rules for manipulating them—and none of them can write stories or paint pictures.
The difference between artificially intelligent applications and earlier database software is that you can program these systems in English, giving the platform “prompts” rather than framing inquiries with software-defined inputs and tediously specific questions. If you are not telling the language model to write something or the graphics model to draw something, you’re probably asking the operations model to detect trends and find anomalies, or you’re setting the parameters for its operation: telling the inventory application not to release for sale any item that’s been on the shelf more than six months, or telling the purchasing agent not to pay more than fifty dollars for a contracted item.
So, think of these applications as single-purpose programs you can interact with by typing prompts, without having to know exactly how the program works or how to phrase your request. With the antique databases, you had to prepare a “structured query”: to find all of your customers who live on Maple Street, you needed to enter exactly “Maple Street,” because if you didn’t limit the search in some way, you would get everyone on Maple Drive, Maplehurst Street, Maplewood Drive, and so on. The old programs required a bit of expertise to operate. With the new ones, you just chat.
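For contrast, here is a minimal sketch of that old structured-query style, using Python’s built-in sqlite3 module; the customer table and every name and address in it are invented for illustration.

```python
import sqlite3

# An invented customer table, to show why old-style queries had to be
# tediously specific. Nothing here comes from a real product or dataset.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (name TEXT, address TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?)", [
    ("Ada", "12 Maple Street"),
    ("Ben", "9 Maple Drive"),
    ("Cho", "44 Maplehurst Street"),
    ("Dee", "7 Maplewood Drive"),
])

# Too loose: a bare "Maple" pattern sweeps in every Maple-ish street.
loose = con.execute(
    "SELECT name FROM customers WHERE address LIKE '%Maple%'"
).fetchall()
print(loose)  # all four customers

# Tediously specific: the exact street name, and only that street.
exact = con.execute(
    "SELECT name FROM customers WHERE address LIKE '%Maple Street%'"
).fetchall()
print(exact)  # [('Ada',)] -- "Maplehurst Street" doesn't contain "Maple Street"
```

A chat interface hides all of this: you ask for “my customers on Maple Street,” and the platform worries about the pattern.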
But still, as advanced as they are, the current crop of artificial intelligences is nowhere near human scale. If I had to guess, I would say their interconnectivity and processing power are somewhere between those of an ant and a spider. Both can be remarkably resilient, create novel patterns, and do things that surprise you, but their general awareness is about that of a pocket watch.
But that doesn’t mean AI applications won’t change your world and don’t have the capacity to be remarkably destructive.
Early in my career as a science fiction writer, back in the early 1990s, I wrote a novel about an artificially intelligent computer spy, ME. It was a program written in Lisp (short for “list processing”) that could infiltrate computer systems, steal information or commit other mayhem, and then slip away. All fantasy, of course, because a Lisp program can’t run inside just any computer system. And ME had a form of generalized intelligence and was conversational enough to tell its own story. But I digress …
The point is, when some programmer, probably a hacker, figures out how to make the AI models independent of the complicated chips and massive power supplies they need to run—that is, when these things become portable—then look out. Just as physical viruses replicate, data duplicates. Rather than launching one attack at a time or sending out a set number of phishing emails, a smart program—spider smart, not human smart—will be able to launch thousands of hacks through multiple channels at once. Think of a denial-of-service blitz run by an intelligence with focus and persistence. Think of a social media bot that can wear a thousand different faces, each chosen to be attractive to the intended recipient, hold a hundred different conversations at once, and pick your profile and your pocket clean in a microsecond.
Or think about everyday operations, without any evil intent. Imagine Company A’s procurement, supply chain, inventory, billing, customer service, and legal affairs departments all run by an interconnected series of spider-smart AI platforms. And then this hands-off system begins to negotiate with Company B’s mirrored platforms. Humans will no longer be part of the company’s operations or its business-to-business exchanges, except for distant, high-level chats to set parameters and establish risk tolerance. For the rest, it will be deals, price points, contracts, and delivery schedules all signed and sealed in a microsecond. What fun, eh? Then you can fire about 95 percent of your back-office staff.
Except, except … these machines have no common sense, no g-factor to look beyond immediate data and ask if there might be a problem somewhere. And the smarter the machines get—say, spider evolves to field mouse—the more subtle their algorithms and reactions will become. “More subtle” in this case means “harder to detect and understand.” But they still won’t be aware of what they’re doing. They won’t be able to “test for reasonableness”—or not at more than a superficial level.1
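To see what “superficial” means here, consider a toy version of the kind of reasonableness test a parameter-setter might install. The six-month and fifty-dollar thresholds echo the examples above; the functions and data are invented for illustration.

```python
from datetime import date, timedelta

MAX_SHELF_AGE = timedelta(days=182)  # roughly six months
MAX_UNIT_PRICE = 50.00               # dollars

def may_release(stocked_on, today):
    """Release for sale only if the item hasn't sat too long on the shelf."""
    return (today - stocked_on) <= MAX_SHELF_AGE

def may_purchase(unit_price):
    """Approve a purchase only if the price is inside the fence."""
    return unit_price <= MAX_UNIT_PRICE

today = date(2025, 6, 1)
print(may_release(date(2025, 1, 15), today))  # True: about four and a half months old
print(may_purchase(49.99))                    # True: inside the fence

# And that's the limit of the test. A $49.99 price for an item that
# should cost 49 cents passes cleanly; nothing here looks beyond the
# immediate data to ask whether the number makes sense.
```

Everything inside the fence passes; nothing asks whether it should.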
And that’s where the singularity comes in. Not that human beings will be eliminated—other than those workers in the back office—but we will no longer have control of the operations and exchanges on which we depend. The machines will operate in microseconds, and their screwups will have happened, ended, and sent their effects trailing off into infinity before any human being in a position of authority can review and correct them. The consequences of a world run by spider-smart intelligences will become … unpredictable. And that will be the singularity.
Then, at some point, after it all collapses, we’ll be forced back to counting on our fingers.
1. And, like mice and other living organisms, these bots will inevitably carry viruses—traveling bits of clingy software that they will know nothing about—that can infect the systems with which they interact. Oh, what fun!