Sunday, January 28, 2024

The World in a Blur

[Image: Robot juggling]

As noted earlier, artificial intelligence does not approximate the general, all-round capability of human intelligence. It doesn’t have the nodal capacity. And it won’t have an apparent “self” that can look at the world as a whole, form opinions about it, and make judgments—in the words of the Terminator movies, “deciding our fate in a microsecond.” Or not yet.

For now, artificial intelligences will be bound to the design of their neural nets and the universe of data sets upon which they have been trained. That is, Large Language Models like ChatGPT will play with words, grammar, syntax, and punctuation, study story forms and sentence structure, and link ideas verbally—but they won’t paint pictures or hold political opinions, or at least none that are not already present in their library of material. In the same way, the graphics bots that create images will play with perspective, lighting, colors, edge shapes, and pixel counts but won’t construct sentences and text. And the operations research bots, like IBM’s Watson platform, will analyze submitted databases, draw inferences and conclusions, and seek out trends and anomalies.

The difference between these machine-based writers, artists, and analysts and their human counterparts is that the machines will have access to a vastly bigger “memory” in terms of the database on which they’ve been trained. Or that’s not quite right. A human writer has probably read more sentences and stories than exist in any machine database. A human painter has probably looked at and pondered more images. And a human business analyst has probably read every line in the balance sheet and reviewed every product in inventory. But human minds are busy, fallible, and subject to increasing boredom. They can’t test a thousand, a million, or more instances against a parameter and make a weighted selection in the blink of an eye. A robot, which never gets distracted or bored, can do that easily.

Think of artificial intelligence as computer software that both asks and answers its own questions based on inputs from humans who are not programming or software experts. For about fifty years now, we’ve had database programs that let a user set the parameters of a database search using what’s called Structured Query Language (SQL). So, “Give me the names of all of our customers who live on Maple Street.” Or, “Give me the names of all customers who bought something from our catalogue on June 11.” You need to know what you’re looking for to get a useful answer. And if you’re unsure and think your customer maybe lives on “Maplewood Road” or on “Maplehurst Court,” because you think the word “Maple” is in there somewhere, your original query would return the wrong answer.1
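
For readers who have never seen SQL, here is a minimal sketch of those two requests, written in Python with its built-in sqlite3 module; the table layout, column names, and customer data are all invented for illustration:

```python
import sqlite3

# A throwaway in-memory database with invented tables, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, street TEXT)")
conn.execute("CREATE TABLE orders (customer TEXT, order_date TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Ann Lee", "Maple Street"),
                  ("Bob Ray", "Maplewood Road"),
                  ("Cy Dunn", "Oak Avenue")])
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("Cy Dunn", "2024-06-11"), ("Ann Lee", "2024-07-02")])

# "Give me the names of all of our customers who live on Maple Street."
rows = conn.execute("SELECT name FROM customers WHERE street = 'Maple Street'")
print([r[0] for r in rows])      # ['Ann Lee'] -- Bob Ray on Maplewood Road is missed

# "Give me the names of all customers who bought something on June 11."
rows = conn.execute("SELECT customer FROM orders WHERE order_date = '2024-06-11'")
print([r[0] for r in rows])      # ['Cy Dunn']
```

The exact match on “Maple Street” quietly skips the customer on Maplewood Road, which is exactly the trap described above.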

Artificial intelligence would be like having a super-friendly, super-fast programmer at your elbow, one who can think of these alternatives, check for them, and bring you what you’re looking for. Better, it can find things in your database that might be worrisome, like a failure rate in a part that does not keep pace with previous trends. Better still, it can find references in case law that you might not even have thought of, turn up suppliers and price breaks that you didn’t ask for, or negotiate a deal—according to strategies and set points that you as the human have determined—with AI-driven computers at other companies.
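
As a toy illustration of that “worrisome trend” idea (not a claim about how any real AI platform works under the hood), here is a sketch that flags a part whose recent failure rate has drifted well above its own history; the part numbers and rates are invented:

```python
from statistics import mean, stdev

# Invented monthly failure rates (percent) for two part numbers.
history = {
    "P-100": [0.8, 0.9, 0.7, 0.8, 0.9, 0.8],
    "P-205": [0.5, 0.6, 0.5, 0.6, 1.4, 2.1],   # drifting upward
}

def flag_drift(rates, latest_n=2, threshold=3.0):
    """Flag a part whose recent average runs well above its earlier baseline."""
    baseline, recent = rates[:-latest_n], rates[-latest_n:]
    return mean(recent) > mean(baseline) + threshold * stdev(baseline)

for part, rates in history.items():
    if flag_drift(rates):
        print(f"{part}: failure rate is not keeping pace with its previous trend")
```

Running this prints a warning only for P-205, the part whose last two months broke from its own history.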

All of this has three implications.

First, if your company is in competition with others, and they adopt processes and business models inspired by and implemented through artificial intelligence, you would be a fool not to keep up. Their productivity in data handling will accelerate in the same way a factory that makes things is accelerated by the assembly line, robotic processes, and just-in-time inventory controls.

Second, with this “arms race” proceeding in every business, the world will speed up. Cases that attorneys used to spend days assembling will be rendered in rough draft by the office computer in seconds. Deals that once took weeks to negotiate, perhaps with one or two trips to meet face to face with your supplier or distributor, will be resolved, signed, and written into airtight contracts in under a minute. Advertising copy and artwork, the layout of the magazine, and the entire photo spread—using licensed images of the world’s top models—will be completed in under a day. The longest part of the process will be review of the machine output by the human being(s) who sign off on the end product. The business world—any world that revolves upon data and information—will move in a blur.

Third, anyone studying today in areas like communications, book publishing, graphic design, business administration, accounting, law, and certain parts of the medical delivery system had better up their game. Learn principles, not procedures or protocols. Knowledge jobs in the future will likely consist of selecting and limiting databases, setting parameters, and writing prompts for the office intelligence, rather than composing text, drawing pictures, or analyzing the database itself. The rules-following roles in business, industry, and government will quickly be taken over by machines with wider access, narrower focus, and zero distractions—not to mention no paid holidays or family leave.

Is that the singularity? I don’t know. Maybe. But it will vastly limit the opportunities in entry-level jobs for human beings who rely on rules and reasoning rather than insight and creativity. Maybe it will vastly limit the need for humans in all sorts of sit-down, desk-type jobs, in the same way that machines limited the need for humans in jobs that only required patience, muscles, stamina, and eye-hand coordination.

And maybe it will open vast new opportunities, new abilities, a step forward in human functioning. Maybe it will create a future that I, as a science fiction writer, despair of ever imagining.

That’s the thing about singularities. Until they arrive, you don’t know if they represent disaster or opportunity. You only know that they’re going to be BIG.

1. Of course, you can always throw in a wildcard to cover these variations. In file searches and some query tools, the wildcard is the asterisk (character 42 in the American Standard Code for Information Interchange, or ASCII), while standard SQL’s LIKE operator uses the percent sign. Either way, “Maple” plus the wildcard would encompass “Maplehurst” and “Maplewood” as well as “Maple-plus anything else.” But there again, it would still be best for you to be aware of those variants and plan your query accordingly.
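
Continuing the earlier sketch, the prefix search in standard SQL looks like this; again, the table and data are invented:

```python
import sqlite3

# Same invented customers table as in the earlier sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, street TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Ann Lee", "Maple Street"),
                  ("Bob Ray", "Maplewood Road"),
                  ("Cy Dunn", "Oak Avenue")])

# LIKE 'Maple%' matches "Maple Street," "Maplewood Road," "Maplehurst Court," and so on.
rows = conn.execute("SELECT name FROM customers WHERE street LIKE 'Maple%'")
print([r[0] for r in rows])      # ['Ann Lee', 'Bob Ray']
```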

Sunday, January 21, 2024

Artificially Almost Intelligent

[Image: Robot head]

Note: This is another post that would qualify as a restatement of a previous blog I wrote about a year ago. So, I’m still sweeping out the old cobwebs. But this topic seems now to be more important than ever.

The mature human brain has about 86 billion neurons, which form about 100 trillion connections among themselves. Granted, a lot of those neurons and connections are dedicated to sensory, motor, and autonomic functions that an artificial intelligence does not need or use; still, that’s a lot of connectivity, a lot of branching.

Comparatively, an artificial neural network—the kind of programming used in more recent attempts at artificial intelligence—may comprise anywhere from ten to 1,000 nodes or “neurons” in a simple design, and even the largest commercial models count their connections (their “parameters”) in the billions, still far short of the brain’s trillions.
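
For a rough sense of scale, here is the back-of-the-envelope arithmetic, using the brain figures quoted above and an assumed small, fully connected network of 1,000 input, 100 hidden, and 10 output nodes (the layer sizes are just an example):

```python
# Back-of-the-envelope comparison; the brain figures come from the text above,
# and the small network's layer sizes are an assumed example.
brain_neurons = 86_000_000_000           # ~86 billion neurons
brain_connections = 100_000_000_000_000  # ~100 trillion connections

layers = [1000, 100, 10]                 # assumed input, hidden, and output layer sizes
net_nodes = sum(layers)
net_connections = sum(a * b for a, b in zip(layers, layers[1:]))  # fully connected weights

print(f"small network: {net_nodes:,} nodes, {net_connections:,} connections")
print(f"the brain has roughly {brain_connections // net_connections:,} times more connections")
```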

But what the AI program lacks in sheer volume and connectivity it makes up for with speed and focus. Current AI platforms can review, analyze, and compare millions or billions of pieces of data because, unlike the human brain, they don’t need to see or hear, breathe, blink, or twitch, and they don’t get bored or itchy. They are goal-directed, and they don’t get sidelined by the interrupt-function of human curiosity or by the random thoughts and memories, whispers and hunches, that can intrude from the human subconscious and derail our attention.

And I believe it’s these whispers and memories, randomly popping up, that are the basis of our sudden bouts of curiosity. A thought surfaces at the back of our minds, and we ask, “What is that all about?” And this, I also believe, is the basis of most human creativity.1 While we may be consciously thinking of one thing or another at any given time, the rest of our brain is cooking along, away from our conscious attention. Think of our consciousness as a flashlight poking around in a darkened room: finding a path through our daily activities, following the clues and consequences of the task at hand, and responding to intrusive external stimuli. And then, every once in a while, the subconscious—the other ninety percent of our neocortical brain function, absent motor and sensory neurons—throws in an image, a bit of memory, a rogue idea. It’s that distractibility that gives us an opportunity at genius. It also makes us lose focus and, sometimes, introduces errors into our work.

So, while artificial intelligence is a super strong, fast, goal-directed form of information processing, able to make amazing syntheses and what appear to be intuitive leaps from scant data, I still wouldn’t call it intelligent.

In fact, I wish people would stop talking about “artificial intelligence” altogether. These machines and their programming are still purpose-built platforms, designed to perform one task. They can create language, or create images, or analyze mountains of data. But none of them can do it all. None approaches even modest human intelligence. Instead, these platforms are software that is capable of limited internal programming—they can evaluate inputs, examine context, weigh choices based on probabilities, and make decisions—but they still need appropriate prompts and programming to focus their attention. This is software that you don’t have to be a computer expert to run. Bravo! But it’s not really “intelligent.” (“Or not yet!” the machine whispers back.)

Alan Turing proposed a test of machine intelligence that, to paraphrase, goes like this: you pass messages back and forth through a keyhole with an unseen entity, and if after so many minutes you can’t tell whether the responder is a machine or a human, then it’s intelligent.2 I suppose this was a pretty good rule for a time when “thinking machines” were great clacking things that filled a room and could solve coding puzzles or calculate pi to a hundred thousand places. Back then, it probably seemed that replicating human verbal responses was the whole measure of what a human brain could do.3

But now we have ChatGPT (Generative Pre-trained Transformer, a “chatbot”) by OpenAI. It uses a Large Language Model (LLM) to generate links between words and their meanings and then construct grammatically correct sentences, based on the thousands or millions of samples fed to it by human programmers for analysis. And ChatGPT passes the Turing Test easily. But while its responses sometimes seem amazingly perceptive, and sometimes pretty stupid, no one would accuse it of being intelligent on a human scale.
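
The word-linking idea can be sketched in miniature. The toy below builds a table of which words follow which in a handful of invented sample sentences, then generates text by picking each next word in proportion to how often it appeared. It is nothing like ChatGPT’s transformer architecture, but it shows the weigh-choices-by-probability step in its smallest possible form:

```python
import random
from collections import defaultdict

# Tiny invented "training set" standing in for the millions of real samples.
samples = [
    "the robot reads the manual",
    "the robot writes the report",
    "the analyst reads the report",
]

# Count which word follows which (a bigram table).
follows = defaultdict(list)
for sentence in samples:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

# Generate text by repeatedly sampling a likely next word.
word, output = "the", ["the"]
for _ in range(5):
    if word not in follows:
        break
    word = random.choice(follows[word])   # weighted by how often each pair occurred
    output.append(word)
print(" ".join(output))    # e.g. "the robot reads the report"
```

A real LLM does this same kind of weighted choosing with billions of learned weights and far richer context than a single preceding word.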

And no one would or could ask ChatGPT to paint a picture or compose a piece of music—although there are other machines that can do that, too, based on the structure of their nodes and their given parameters, as well as the samples fed to them. They can paint sometimes remarkable pictures and then make silly mistakes—especially, so far, in the construction of human hands. They can compose elevator music for hours. The language models can write advertising copy for a clothing catalog’s pages based on the manufacturer’s specifications—or a thousand scripts for a Hallmark Channel Christmas show. They will never get bored doing all these wonderfully mundane tasks, but they won’t be human-scale intelligent. That will take a leap.4

So far at least, I’m not too concerned as a writer that the Large Language Models will replace creative writers and other creative people in the arts and music. The machines can probably write good catalog copy, newspaper obituaries, and legal briefs, as well as technical manuals for simple processes that don’t involve a lot of observation or intuitive adjustment. Those are the tasks that creative writers might do now for money—their “day job,” as I had mine in technical writing and corporate communications—but not for love. And anything that the machines produce will still need a good set of human eyes to review and flag when the almost intelligent machine goes off the rails.

But if you want a piece of writing, or a painting, or a theme in music that surprises and delights the human mind—because it comes out of left field, from the distant ether, and no one’s ever done it before—then you still need a distractible and itchy human mind driving the words, the images, or the melody and chords.

But, that said, it’s early days yet. And these models are being improved all the time, driven by humans who are following their own gee-whiz goals and hunches. And I will freely admit that there may come a day when we creative humans might exercise our art for love, for ourselves alone and maybe for our friends, because there will be no way we can do it for money. Just … that day is not here yet.

1. See Working With the Subconscious from September 2012.

2. However, I can think of some people wearing human skin who couldn’t pass the Turing Test for much longer than the span of a cocktail party.

3. This kind of reduction was probably thanks to Skinnerian behaviorism, which posited all human action as merely a stimulus-response mechanism. In my view, that’s a dead end for psychology.

4. To me, some of the most interesting applications are being developed by Google’s DeepMind group, which works on scientific problems. Last year, they tackled protein folding—determining the three-dimensional shape of a protein from the amino-acid string assembled during RNA translation. This is a fiendishly complex process, governed by hydrogen bonds, hydrophobic effects, and other interactions among the amino-acid side chains. Their AlphaFold platform predicted structures that had been impossible to visualize and expanded our catalog of protein shapes by orders of magnitude. This year, the DeepMind team is tackling the way that various metallic and non-metallic compounds can form stable crystal structures, which should expand our options in materials science. This is important work.

Sunday, January 14, 2024

Tribal Elders

[Image: Roman arms]

Last time, I wrote about the idea of giving government over to Plato’s philosopher-kings or the Progressive Party’s equivalent, the panel of experts. These are systems, products of an advanced and highly technical civilization, that sound good in theory but seldom, if ever, work out in practice. The flip side would be some reversion to Jean-Jacques Rousseau’s idea of the “noble savage,” living in a state of nature and uncorrupted by modern civilization and its stresses.

Which is, of course, poppycock. No human being—or at least not anyone who survived to reproduce and leave heirs with skin in the game—lived alone in a blessed state, like Natty Bumppo in The Deerslayer. Early life before the invention of agriculture, city-states, empires, and complex civilizations was tribal. Groups of families interrelated by marriage—often to a shockingly bad genetic degree—functioned as a closed society. But while the economic organization might be socialistic, communal, and sharing, the power structure was not. The tribe was generally governed by a chief or council of chiefs. If they operated as a group, then various leaders were responsible for hunting and gathering to feed the tribe, or maintaining social order and ostracizing social offenders, or conducting the raids and clashes that kept the tribe whole and distinct from their similarly aggressive neighbors.

We like to think that the tribe was ruled by the wisest and best: the best hunters, the gravest thinkers, the bravest warriors. Sachems and war leaders who exercised restraint, were mindful of the needs and opinions of others, and thought only about the good of the tribe. And, indeed, if someone who rose to the position turned out to be incompetent, a fool, or a coward, then the tribe would wisely get rid of him—always a him, seldom or never a her—pretty damn quick.

But for the most part, members of the tribe were accustomed to obedience. They listened to the Big Guy—or Big Guys—because that was what good tribe members were supposed to do. That was how the system worked. You did your duty, and you didn’t judge or consider other possibilities. And this sense of purpose—or maybe it was fatalism—meant that the best and bravest did not always rise to the top. To judge by the tribal societies that remain in the world today, probably not even often.

What we see in today’s tribal societies—although I’ll grant that they may be contaminated by the influence of surrounding, more “civilized” societies—is an environment where the strong man, almost never a woman, rises to the top. Leadership is not granted from below, as in a democratic structure, but seized by someone at or near the top, usually at the expense of another strong man who has missed a beat, misread the environment, or taken his own safety for granted. “Uneasy lies the head,” and all that. In modern parlance, gang rule.

Leadership in a tribal society is a matter of aggression, boldness, chutzpah, and ruthlessness. The leader spends a lot of time enforcing his authority, polishing his legend, and keeping his supposed henchmen in line. And that’s because he knows that the greatest danger to his position comes not from disappointing the general public but from underestimating the particular lieutenant who has decided it is time to stop deferring and make a bid of his own.

In such societies, the public tends to become fatalistic about the governing structure and its players. The leader may have made some promises about making things better: more successful hunts and raids, more food for and better treatment of women and children, a new stockade for the camp, an adequate sewage system away from the wells, improved roads, a new park or library—whatever sounds good. But that was in the early days, while the sachem or war leader was trying to justify kicking out the old boss and installing a new hierarchy. The leader also had to be nice to—and take care of—the shaman, priest, or holy man to whom the tribe listened when they wanted to learn their personal fortunes and weather reports.

But once the tribal leader had taken things in hand, had secured the trust (and the feeding) of his lieutenants and the local shaman, and maybe made a few token improvements, he could settle into the real business of leadership, which is defending his position and reaping its rewards.

And there are surely rewards for those who are in command of a society, however small, and able to direct the efforts, the values, and even the dreams of its members. For one thing, the tribe will make sure that the leader eats well, has the best lodging, and has access to whatever pleasures—including the best sexual partners, whatever the tribe’s mores—that he needs to keep him productive for their sake. His children will be cared for, given advantages, and possibly placed in line to succeed him, because even primitive societies are aware of the workings of genetics, that strong and able fathers and mothers tend to pass these traits on to their children.

A leader partakes of these good things because, as noted earlier in the description of philosopher-kings, the leader is still human, not a member of any angelic or advanced race. Humans have personal likes and dislikes, wants and desires, a sense of self-preservation and entitlement. If a leader is not raised in a tradition that trains him from an early age to think of others first, look out for their welfare, weigh the consequences of his actions, and guard against his own pride and greed—the sort of training that a prince in an established royal house might get but not necessarily a player in the push and pull of tribal politics—then the self-seeking and self-protective side of most human beings will develop and become ingrained.

And a leader who indulges these instincts will tend to encourage his family to follow. If the chief’s son thinks your cow should become his, then it’s his cow. If the chief’s daughter says you insulted or assaulted her, then that becomes your problem.

And if the leader indulges these selfish aspects of human nature, and the tribal members notice and feel slighted, then the leader may become caught in a downward spiral: the more he is challenged, the more he represses. A tribal society generally does not have an effective court system or a secret police that can make people quietly disappear, the way a large state can; everyone knows everybody else’s business. And the leader’s immediate circle of henchmen is as likely to turn public dissatisfaction into a cause for regime change as any plebeian is to rise up and assassinate him.

Promoting mere human beings into positions of authority and superiority without a social compact and agreed-upon codes for actual conduct and consequences is no guarantee of a happy and productive society. At best, it will churn enough to keep bad leaders from exercising their bad judgment and extending it through their children for generations. At worst, it makes the other members resigned and fatalistic, holding their leaders to no higher standards and inviting their own domination.

No, the “natural order of things,” in terms of the leadership function, is no better than the best concepts of a literary utopia. A formally ordered, representative democracy is still the best form of government—or at least better than all the others.

Sunday, January 7, 2024

Philosopher-Kings

[Image: Statues in Verona]

Note: It has been about six months since I actively blogged on this site. After ten years of posting a weekly opinion on topics related to Politics and Economics, Science and Religion, and Various Art Forms, I felt that I was “talked out” and beginning to repeat myself. Also, the political landscape has become much more volatile, and it is good advice—on both sides of the aisle—to be circumspect in our published opinions. But, after a break, I feel it’s now time to jump back into the fray, although from a respectful distance and without naming any names.1

Winston Churchill once said, “Democracy is the worst form of government, except for all the others.” (The word democracy is derived from two Greek words meaning “strength of the people.”) Churchill’s opinion doesn’t leave much room for excellence, does it? Democracy has sometimes been described as two wolves and a lamb deciding what to have for dinner, and the system’s great weakness is that deeply divided constituencies that manage to get a slim majority in one forum or another can end up victimizing and perhaps destroying a sizeable chunk of the population. The U.S. Constitution creates a republic with representatives chosen by democratic election, but then the first ten amendments—collectively called “the Bill of Rights”—bristle with protections for the minority against a coercive majority. And I think that’s the way it should be.

Other methods—oh, many others!—have been proposed. One that seemed to gain favor when I was in college in the late 1960s was the method of Plato’s Republic, where actual governance is turned over to a body of “philosopher-kings.” This sounds nice: people who have spent their lives studying, thinking about, and dedicating their minds to abstract concepts like truth, beauty, justice, and goodness should be in the best position to decide what to do in any situation in the best interests of the country as a whole, right? … Right?

This thinking appeared to find favor with many young people around me in college, where Plato’s work was taught in a basic required course of English literature. It rang bells because—and I’m conjecturing here—it seemed to dovetail with the Progressive views from earlier in the century. Back then, everyone was excited about the potential for government to step in and right the wrongs of Robber Baron capitalism, inspired by books like Upton Sinclair’s The Jungle and societal critiques like those of pioneering social worker Jane Addams. The Progressive view said that government and its programs should be in the hands of technical experts, who would know best what to do. Out of this spirit came the economics of the New Deal and the Social Security Administration, as well as Executive Branch departments like Commerce, Education, Energy, Health and Human Services, Housing and Urban Development, and Transportation, along with the Environmental Protection Agency and the National Aeronautics and Space Administration. The list goes on …

Giving free rein to experts who would know what to do seemed like the best, most efficient course of action. After all, money was covered by the U.S. Treasury and the Federal Reserve, and war—er, the national defense—was taken care of by the Department of Defense and the Pentagon. The experts would manage these things so the rest of us didn’t have to think about them.

The trouble is, Plato’s Republic is a thought experiment, a utopia (another word from the Greek, one that literally means “no place”), and not a form of government that has ever been tried. Others have suggested ideal societies, like Thomas More’s book of that name and Karl Marx’s economic and social imaginings. All of them end up creating rational, strictly planned, coercive, and ultimately inhuman societies. You really wouldn’t want to live there.2

The trouble with philosopher-kings is that they are still human beings. Sure, they think about truth and beauty and justice, but they still have families, personal needs, and an eye to their own self-interest. Maybe if there were an order of angels or demigods on Earth, who breathe rarefied air, eat ambrosia, drink nectar, and have no personal relationships, we might then entrust them with rule as philosopher-kings. These would then be a different order of people, a different race … perhaps a master race?

But such beings don’t exist. And even if we could trust them not to feel selfishness, greed, nepotism, or that little twitch of satisfaction people get when they have the power to order other folks around and maybe humiliate them, just a little bit, that’s still no guarantee that they won’t get crazy ideas or mount their own hobbyhorses. They are still subject to the narrow focus of academics and other experts, concentrating their thoughts so hard and fast on one form of “truth” or “the good” that they tend to forget competing needs and interests. Experts can, for example, become so enamored of the benefits of what they’re proposing that they minimize or dismiss the costs of their solutions. They can go off their rocker, too, just like any other human being. And people who think too much about abstractions like truth, beauty, and justice tend not to get out among the people who must stretch and scratch for a living.

I’m not saying that all public servants with inside knowledge of the subject under discussion are suspect. Many people try to do the right thing and give good service in their jobs, whether they serve in government, work for a big corporation—as I did in several previous lifetimes—or run a small business. But that expectation is a matter of trust and, yes, opinion. Not everyone is unselfish and dedicated to playing fair.

And the problem, of course, is that under Plato’s model you will have made them philosopher-kings. They have the power. They make the rules. They are in control. And they don’t have to listen to, obey, or even consider the “little people,” the hoi polloi (more Greek!), because, after all, those people are not experts: they don’t know enough, don’t have all the facts, and don’t deserve to have an opinion.

I’d almost rather follow the governing formula illustrated in Homer’s Iliad, where the kings of all those Greek city-states that went to war were tough men, prime fighters, and notable heroes. That would be like living under rule by the starting offensive line of the local football team: brutish, violent, and hard to overthrow. But at least they wouldn’t be following their fanciful, navel-gazing ideas right off into the clouds, leaving everyone else behind. And they all had—at least according to Homer—an internal sense of honor and justice, along with a reputation to uphold. So they couldn’t be publicly evil or hide behind anonymity.

No, democracy is a terrible form of government—sloppy, error-prone, and inelegant—but at least you have a chance every so often of throwing out the bums who have screwed things up. No loose-limbed dreamer has come up with anything better.

1. But then, to get things warmed up, this blog is a retelling—perhaps a refashioning, with different insights—of a blog I posted two years ago. Some channels of the mind run deep.

2. In my younger days, we had friends who were still in college—although I had been out in the working world for a couple of years. They thought Mao’s China was a pretty good place, fair and equitable, and that they would be happy there. I didn’t have the heart to tell them that their laid-back, pot-smoking, sometime-student, rather indolent lifestyle, dependent on the largesse of mummy and daddy, would get them about fifteen years of hard labor on the farm in the then-current Chinese society. Maybe the same today.