Sunday, February 25, 2024

Murder, Mayhem, and the U.S. Constitution

Constitution with magnifying glass

Two years ago, we watched as the Supreme Court in Dobbs v. Jackson Women’s Health Organization overturned the Roe v. Wade decision from 1973—almost half a century earlier—that had sought to create a national right to elective abortion. Many cheered the earlier decision as a cornerstone of women’s rights. Many—perhaps not as many, myself certainly not among them—cheered the more recent decision as affirming a foundational right to life. In my view, the court merely reasserted the nature of the government under which we have lived since 1789.

First, let me say that I think women should have control of their bodies. I think that, if a woman is not ready to bear a child for whatever reason, she should be able to have the fetus removed. I also think she should make that decision promptly, so that the fetus is not aware of the removal in whatever fashion and does not suffer from it—however many weeks or months into the process of development that point may fall. Certainly, if a developing child can survive outside the womb, then it should be born and preserved. And I would, in an ideal world, want all surviving children to be born into loving and caring situations with parents or guardians who want them. But this is my personal opinion.1

My personal opinion, your personal opinion, anyone’s personal opinion is a matter of choice and action. But it is not necessarily the business of the United States government. These United States are a unique creation, unlike almost any other nation in the world. The U.S. Constitution does not, despite what others may think, create a national government that writes “the law of the land” for all citizens, as national governments do in countries like France and Germany. The federal government was designed, instead, to be the superstructure under which the individual states, fifty of them at last counting, worked together for their common good.

The Preamble, seven Articles, and twenty-seven Amendments establish a union that recognizes the rights of the various states over all matters not specifically mentioned in the founding document. That original document and its amendments do not replace or supersede the constitutions or charters or whatever other means the states use to govern themselves. The Constitution was intended to create limited national functions that individual states could not undertake for themselves, like providing for common defense against foreign enemies, preserving the borders, establishing tariffs, and maintaining relations with foreign governments. The first ten Amendments immediately forbade certain actions that government—originally the national government, and later, through the Fourteenth Amendment, the states as well—could take but should not: infringe on a person’s speech and religion, deny a right of self-defense, impose unfair trial conditions, and so on. The ninth and tenth Amendments then guarantee that the people retain other rights not therein enumerated, and that the states keep powers not therein delegated to the national government nor prohibited to them. Overall, the Constitution is a pretty constrained proposition.

Look high and low in the Constitution, and you don’t find mention of many of the laws that most people take for granted. It doesn’t prohibit you from murdering someone, except in certain circumstances described below. So, the Constitution does not guarantee a universal right to life. It also doesn’t have a rule or regulation about personal assault, or creating a public nuisance, or public drunkenness. It doesn’t establish tort law, or contract law, or regulate acts between consenting adults. It doesn’t even regulate actions regarding children, let alone infants and the unborn, except in instances below. It leaves whole areas of the law to the preference and establishment of the states and their local populations, including the issue of abortion.

So, if you murder your neighbor over a property or noise dispute, you can be tried in state courts under state laws. You will not be tried in federal courts because there is no applicable law.

There is federal law, derived from the 14th Amendment, which establishes that all persons born or naturalized in the United States are citizens of both the U.S. and the state where they live. The first section of this amendment forbids a state from “mak[ing] or enforc[ing] any law which shall abridge the privileges or immunities of citizens of the United States.” So, the states cannot officially sanction a certain religion or outlaw the keeping and bearing of arms.

That section of the 14th Amendment also keeps any state from “depriv[ing] any person of life, liberty, or property, without due process of law,” nor can the state “deny to any person within its jurisdiction the equal protection of the laws.” This is the basis of a person’s “civil rights.” Under this Amendment, someone can be federally cited for denying another person’s civil rights if his or her actions infringed on that person based on race, religion, or some other protected characteristic—but not just because he or she killed them.

However, there are, as noted above, special cases created by subsequent federal statutes that have not yet been challenged in court. You can, for example, be tried in federal courts if you kill an elected or appointed federal official, a federal judge or law enforcement officer, or a member of the officer’s immediate family. You can be tried if the murder was drug-related; involved rape, child molestation, or sexual exploitation of children; was committed during a bank robbery; or was an attempt to influence a federal trial. You can also be tried for a murder for hire, or for murder committed aboard a ship—which, I guess, would be outside territorial waters or outside a state’s jurisdiction, such as not in a harbor or river—or committed using the U.S. mails, such as to send a bomb or poison to your victim. These are all specific federal laws about murder.

But walk up to someone on the street and hit them on the back of the head—that’s a state crime, not federal. And similarly, aborting a child might be a state crime—if so voted on by its citizens—but it does not become a federal crime, not under the Dobbs decision.

1. See also Roe v. Nothing from September 2022.

Sunday, February 18, 2024

According to Their Nature

Bronze angel

In the movie Star Trek II: The Wrath of Khan, Kirk asks Spock how a ship full of cadets will react to an impending crisis. And Spock replies: “Each according to their nature.” That struck me at the time as kind and insightful. I now think it would make a pretty good corollary to the Golden Rule: “Do unto others as you would have them do unto you.” But also: “Treating each one according to their nature.” And I would add: “As you understand it.”

What would this mean in real life? Well, you should expect from and condone the actions of, give to and take from, and treat as fully autonomous each person according to their nature as you understand it. This does not mean that you support, surrender to, or serve their every whim, desire, and action. But you are mindful of their wants and needs in the state and condition that they currently occupy. And you bear in mind also your understanding of their long-term strengths and weaknesses, as well as what certain traditions call a person’s “Buddha nature,” or the essence of their understanding as an enlightened being—or lack of it.

This means that you expect no more of a child than childish understanding, wants, and capabilities. You also expect no more of a proven fool—as you understand him or her to be from past words and actions—than they can give. You expect strength and endurance from the strong. You support and defend the frailty of the weak. You draw out the wisdom of the wise. You give scope to the compassionate person. You hold back your tolerance from a mean-spirited person. And you work to thwart the truly evil or cruel person—again, as demonstrated by his or her past actions—because he or she in turn works to do harm in the world.

Is that too much to ask of a person? Well, maybe. We are not all-knowing gods, after all. But maybe we’re the closest thing to that on this planet.

Sunday, February 11, 2024

The Death of Proof

Black square

I noted three weeks ago that I am not terribly concerned about the power of artificially intelligent platforms to create new and interesting stories, artwork, music, and other … products. Or not yet. And I don’t think they will soon get human-scale intelligence, which involves understanding, reasoning, direction, intention, and agency. But that does not mean I am not concerned.

Right now, these mindless machines can create—at lightning speed and on command—any set of words, any graphic image, and/or any sound or piece of music, all from digitized samples. And while I don’t fear what the machines themselves will want to do, I am concerned about what they will be able to do in the hands of humans who do have intention and agency.

In our documented world, proof of anything beyond our own fallible human memory is a form of information: what somebody wrote, what they said in proximity to a microphone, what they were seen and photographed doing. And increasingly, that information is in digital form (bits and bytes in various file formats) rather than analog recordings (printed words on paper, grooves on discs or magnetic pulses on tape, flecks of silver halide in film stock). If my Facebook friends can publish an antique photograph of farmhands standing around a horse that’s twenty feet high, or a family shepherded by a gigantic figure with the head of a goat and huge dangling hands, all in grainy black-and-white images as if from a century and more ago, then what picture would you be inclined to disbelieve? How about a note with a perfect handwriting match to a person who is making an actionable threat of violence? How about a picture with perfect shading and perspective showing a Supreme Court justice engaged in a sexual act with a six-year-old?

Aside from written text and recorded words and images, the only other proofs we have of personal identity are the parameters of someone’s facial features as fed to recognition software (easily manipulated), the whorls of their fingerprints and x-rays and impressions of their teeth (easily recreated), and the coding of their DNA, either in the twenty or so short tandem repeat segments reported to the FBI’s Combined DNA Index System (CODIS) database or in fragments recreated from a person’s whole genome. Any of these digitized proofs can now be convincingly created and, with the right—or wrong—intention and agency, inserted into the appropriate reference databases. We’ve all seen that movie. And artificial intelligence, if it’s turned to firewall hacking and penetration, can speed up the process of insertion.

My mother used to say, “Believe only half of what you see and nothing of what you hear.” With the power of artificially intelligent platforms, make that “nothing and nothing.”

In the wrong hands—and boy, these days, do we have a bunch of hands pushing their own agendas—the speed and power of computers to make fakes that will subvert our recording and retrieval systems and fool human experts usher in the death of proof. If you didn’t see it happen right in front of you or hear it spoken in your presence, you can’t be sure it happened. Or rather, you can’t be sure it didn’t happen. And if you testify and challenge the digital proofs, who’s going to believe your fallible human memory anyway?

That way lies the end of civil society and the rule of law. That way lies “living proof” of whatever someone who doesn’t like or trust you wants to present as “truth.” That way lies madness.

Sunday, February 4, 2024

Let the Machines Do It

Apple tease

I wrote last week about artificial intelligence and its applications in business information and communications: that the world would speed up almost immeasurably. There is, of course, a further danger: that humans would in many cases forget how to do these tasks and become obsolete themselves.

Yes, we certainly believe that whatever instruments we create, we will still be able to command them. And so far, that is the case. But the “singularity” some thinkers are proposing suggests that eventually the machines will be able to create themselves—and what then?

We already have computer-aided software engineering (CASE), in which complex programming is pre-written in various task-oriented modules, segments of code designed for specific purposes. These modules perform routine operations found in all sorts of programs: sorting data, keeping time, establishing input and output formats, and so on. Programmers no longer need to write every line of their code in the same way that I am writing this text, by pushing down individual keys for every word and sentence. Instead, programmers now decide the process steps they want to invoke, and the CASE tool assembles the code. It’s as if I could specify the paragraphs required to capture my ideas, and the software assembler did the rest. And isn’t this something like how the large language models (LLMs) behind applications like ChatGPT operate?
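
Here is a toy sketch of that module-assembly idea in Python. It is my own illustration, not any particular CASE product: the “programmer” merely names the prewritten steps, and the library routines supply the actual code.

```python
# A toy illustration of module assembly: instead of hand-coding sorting,
# timekeeping, and output formatting, we simply name the prewritten steps
# and let library routines do the work.
import json
import time

def run_pipeline(records, steps):
    """Apply a sequence of named, prewritten steps to the data."""
    for step in steps:
        records = step(records)
    return records

# Prewritten, task-oriented "modules": sorting, timestamping, formatting.
def sort_by_name(recs):
    return sorted(recs, key=lambda r: r["name"])

def add_timestamp(recs):
    return [dict(r, logged=time.strftime("%Y-%m-%d")) for r in recs]

def to_json(recs):
    return json.dumps(recs, indent=2)

if __name__ == "__main__":
    data = [{"name": "Khan"}, {"name": "Kirk"}, {"name": "Spock"}]
    # The "specification" is just the list of steps; the modules supply the code.
    print(run_pipeline(data, [sort_by_name, add_timestamp, to_json]))
```

The point is that the intelligence, such as it is, lies in choosing and ordering the steps, not in typing out the code.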

My concern—and that of many others involved with this “singularity”—is what happens when the machines are able to create themselves. What if they take control of CASE software, for which the machines themselves determine the process steps using large language processing? What if they can design their own chips, using graphics capability and rolling random numbers to try out new designs in silico before committing them to physical production in a chip foundry? What if they control those foundries using embedded operations software? What if they distribute those chips into networked systems and stand-alone machines through their own supply chains? … Well, what inputs will the humans have then?

Similarly, in the examples I noted last week, what happens when business and communications and even legal processes become fully automated? When the computer in your law office writes your court brief and then, for efficiency’s sake, submits it to a judicial intelligence for evaluation against a competing law firm’s automatic challenge as defendant or plaintiff, what inputs will the humans have? Sure, for a while, it will be human beings who have suffered the initial grievance—murder, rape, injury, breach of contract—and submitted their complaints. But eventually, even the finding that Party A has suffered from the actions of Party B will be left up to the machines, which will cite issues raised by their own operations, file suit on their own behalf, and resolve the whole matter … all in about fifteen seconds.

When the machines are writing contracts with each other for production, selecting shipping routes and carriers, driving the trains and trucks that deliver the products, stocking the warehouses, and distributing the goods, all against their own predictions of supply and demand for the next quarter, the next year, or even the next ten years, what inputs will the humans have? It will be much faster to let the machines determine where the actual people live, what they need and want, and make decisions for them accordingly, so that all the human population needs to do is express its desires—individually, as convenient, to the big computer in the cloud.

And once humans are content to let the machines do the work, make the decisions, plan the outputs, and make things happen … will the human beings even remember how?

That’s what some of us fear. Not that the machines will do the work, but that human beings will find it so convenient that we will forget how to take care of ourselves. Do you think, when you put in a search request to Google, or ask Siri or Alexa a question, that some human person somewhere goes off and looks up the answer? Of course not. The machine interprets your written or spoken words, checks its own interpretation of them against context—and sometimes against the list of possible responses paid for by interested third parties—and produces a result. In such a world, how many of us will still use—or, eventually, be able to use—an encyclopedia, reference book, or the library’s card catalog to find which book has our answer? For starters, how many of us would want to? But eventually, finding references will be a lost art. And at what point will people no longer remember that the card catalog is arranged alphabetically—or was it numerically, according to the Dewey decimal system?—or know what letter comes after “K”?

Frank Herbert recognized this problem in the Dune novels. In the prehistory to the series that begins in 10,191 AG, he envisions a time about ten thousand years earlier when computers and robots became so common and practical that human beings needed to do almost nothing for themselves. People became dependent and helpless, and the species almost died out. Only a war against the machines, the Butlerian Jihad, ended the process under the maxim “Thou shalt not make a machine in the likeness of a man’s mind.” That commandment attained religious force and shaped the succeeding cultures. Only the simplest clockwork mechanisms were then allowed to control machines.

In the Dune stories, the Butlerian Jihad gave rise to the Great Schools period. Humans were taught again how to use their minds and bodies and expand their skills. Complex computations, projections, and planning were performed by the human computers, the Mentats. Physical skills, nerve-muscle training, and psychological perception were the province of the female society of the Bene Gesserit, along with secret controls on human breeding. Scientific discovery and manipulation, often without concern for conventional morals or wisdom, were taken over by the Bene Tleilax. And interstellar navigation was controlled by the Spacing Guild.

My point is not that we should follow any of this as an example. But we should be aware of the drives that generations of human evolution have built into our minds. We have big brains because we had to struggle to survive and prosper in a hostile world. Human beings were never meant to be handed everything we needed without some measure of effort on our part. There never was a Golden Age or Paradise. Without challenge we do not grow—worse, without challenge we wilt and die. We are meant to strive, to fight, to look ahead, and to plan our own futures. As one of Herbert’s characters said, echoing Matthew 7:14, “The safe, sure path leads ever downward to destruction.”

That is the singularity that I fear: when machines become so sophisticated, self-replicating, and eventually dominating that they take all the trouble out of human life. It’s not that they will hate us, fight us, and eliminate us with violence, as in the Terminator movies. But instead, they will serve us, coddle us, and smother us with easy living, until we no longer have a purpose upon the Earth.

Go in strength, my friends.

Sunday, January 28, 2024

The World in a Blur

Robot juggling

As noted earlier, artificial intelligence does not approximate the general, all-round capability of human intelligence. It doesn’t have the nodal capacity. And it won’t have an apparent “self” that can look at the world as a whole, form opinions about it, and make judgments—in the words of the Terminator movies, “deciding our fate in a microsecond.” Or not yet.

For now, artificial intelligences will be bound to the design of their neural nets and the universe of data sets upon which they have been trained. That is, Large Language Models like ChatGPT will play with words, grammar, syntax, and punctuation, study story forms and sentence structure, and link ideas verbally—but they won’t paint pictures or hold political opinions, or at least no opinions that are not already present in their libraries of material. In the same way, the graphics bots that create images will play with perspective, lighting, colors, edge shapes, and pixel counts but won’t construct sentences and text. And the operations research bots, like IBM’s Watson platform, will analyze submitted databases, draw inferences and conclusions, and seek out trends and anomalies.

The difference between these machine-based writers, artists, and analysts and their human counterparts is that the machines will have access to a vastly bigger “memory” in terms of the database with which they’ve trained. Or that’s not quite right. A human writer has probably read more sentences and stories than exist in any machine database. A human painter has probably looked at and pondered more images. And a human business analyst has probably read every line in the balance sheet and every product in inventory. But human minds are busy, fallible, and subject to increasing boredom. They can’t review against a parameter and make a weighted selection from among a thousand or a million or more instances in the blink of an eye. But a robot, which never gets distracted or bored, can do that easily.

Think of artificial intelligence as computer software that both asks and answers its own questions based on inputs from humans who are not programming or software experts. For about fifty years now, we’ve had database programs that let a user set the parameters of a database search using what’s called Structured Query Language (SQL). So, “Give me the names of all of our customers who live on Maple Street.” Or, “Give me the names of all customers who bought something from our catalogue on June 11.” You need to know what you’re looking for to get a useful answer. And if you’re unsure and think your customer maybe lives on “Maplewood Road” or on “Maplehurst Court,” because you think the word “Maple” is in there somewhere, your original query would miss them and return an incomplete answer.1
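
To make that concrete, here is a minimal sketch of the two kinds of query, using Python’s built-in sqlite3 module and a made-up customer table (my own illustration, not anyone’s production system):

```python
# A minimal illustration of exact-match versus pattern-match queries,
# using Python's built-in sqlite3 module and invented customer data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, street TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Alice", "Maple Street"), ("Bob", "Maplewood Road"), ("Carol", "Oak Lane")],
)

# The literal query finds only the exact address "Maple Street" ...
exact = conn.execute(
    "SELECT name FROM customers WHERE street = 'Maple Street'"
).fetchall()

# ... while a wildcard pattern (the percent sign in SQL's LIKE operator)
# also catches "Maplewood Road," "Maplehurst Court," and so on.
fuzzy = conn.execute(
    "SELECT name FROM customers WHERE street LIKE 'Maple%'"
).fetchall()

print(exact)  # [('Alice',)]
print(fuzzy)  # [('Alice',), ('Bob',)]
```

Run as written, the first query misses Bob on Maplewood Road; the pattern query catches him.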

Artificial intelligence would be like having a super-friendly, super-fast programmer at your elbow, who can think of these alternatives, check for them, and bring you what you’re looking for. Better, it can find things in your database that might be worrisome, like a failure rate in a part that does not keep pace with previous trends. Better still, it can find references in case law that you might not even have thought of, find suppliers and price breaks that you didn’t ask for, or negotiate a deal—according to strategies and set points that you as the human have determined—with other AI-derived computers at other companies.
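
Here is a toy sketch of that “worrisome trend” idea, with invented data and a simple two-sigma rule of thumb; a real assistant would presumably discover which trend to check on its own:

```python
# Flag the latest monthly failure rate if it drifts well above the range
# of the previous months (invented data; a simple two-sigma rule of thumb).
from statistics import mean, stdev

failure_rates = [0.011, 0.012, 0.010, 0.013, 0.011, 0.012, 0.024]
history, latest = failure_rates[:-1], failure_rates[-1]

ceiling = mean(history) + 2 * stdev(history)
if latest > ceiling:
    print(f"Flag: latest failure rate {latest:.3f} exceeds expected ceiling {ceiling:.3f}")
```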

All of this has two implications, or rather three.

First, if your company is in competition with others, and they adopt processes and business models inspired by and implemented through artificial intelligence, you would be a fool not to keep up. Their productivity in data handling will accelerate in the same way a factory that makes things is accelerated by the assembly line, robotic processes, and just-in-time inventory controls.

Second, with this “arms race” proceeding in every business, the world will speed up. Cases that attorneys used to spend days assembling will be rendered in rough draft by the office computer in seconds. Deals that once took weeks to negotiate, perhaps with one or two trips to meet face to face with your supplier or distributor, will be resolved, signed, and written into airtight contracts in under a minute. Advertising copy and artwork, the layout of the magazine, and the entire photo spread—using licensed images of the world’s top models—will be completed in under a day. The longest part of the process will be review of the machine output by the human being(s) who sign off on the end product. The business world—any world that revolves upon data and information—will move in a blur.

Third, anyone studying today in areas like communications, book publishing, graphic design, business administration, accounting, law, and certain parts of the medical delivery system had better up their game. Learn principles, not procedures or protocols. Knowledge jobs in the future will likely consist of selecting and limiting databases, setting parameters, and writing prompts for the office intelligence, rather than composing text, drawing pictures, or analyzing the database itself. The rules-following roles in business, industry, and government will quickly be taken over by machines with wider access, narrower focus, and zero distractions—not to mention no paid holidays or family leave.

Is that the singularity? I don’t know. Maybe. But it will vastly limit the opportunities in entry-level jobs for human beings who rely on rules and reasoning rather than insight and creativity. Maybe it will vastly limit the need for humans in all sorts of sit-down, desk-type jobs, in the same way that machines limited the need for humans in jobs that only required patience, muscles, stamina, and eye-hand coordination.

And maybe it will open vast new opportunities, new abilities, a step forward in human functioning. Maybe it will create a future that I, as a science fiction writer, despair of ever imagining.

That’s the thing about singularities. Until they arrive, you don’t know if they represent disaster or opportunity. You only know that they’re going to be BIG.

1. Of course, you can always throw in a wildcard—the percent sign used by SQL’s LIKE operator, or the asterisk (code 42 in the American Standard Code for Information Interchange, or ASCII) used by many other search tools—to cover these variations. So, “Maple%” (or “Maple*”) would encompass “Maplehurst” and “Maplewood” as well as “Maple” plus anything else. But there again, it would still be best for you to be aware of those variants and plan your query accordingly.

Sunday, January 21, 2024

Artificially Almost Intelligent

Robot head

Note: This is another post that would qualify as a restatement of a previous blog I wrote about a year ago. So, I’m still sweeping out the old cobwebs. But this topic seems now to be more important than ever.

The mature human brain has about 86 billion neurons which make about 100 trillion connections among them. Granted that a lot of those neurons and connections are dedicated to sensory, motor, and autonomic functions that an artificial intelligence does not need or use, still that’s a lot of connectivity, a lot of branching.

Comparatively, an artificial neural network—the kind of programming used in more recent attempts at artificial intelligence—ranges from a few dozen nodes or “neurons” in a simple network to many billions of weighted connections in the largest current models, still far short of the brain’s connectivity.

But what the AI program lacks in sheer volume and connectivity it makes up for with speed and focus. Current AI platforms can review, analyze, and compare millions and billions of pieces of data because, unlike the human brain, they don’t need to see or hear, breathe or blink, or twitch, nor do they get bored or distracted. They are goal-directed, and they don’t get sidelined by the interrupt function of human curiosity or by the random thoughts and memories, whispers and hunches, that can intrude from the human subconscious and derail our attention.

And I believe it’s these whispers and memories, randomly popping up, that are the basis of our sudden bouts of curiosity. A thought surfaces at the back of our minds, and we ask, “What is that all about?” And this, I also believe, is the basis of most human creativity.1 While we may be consciously thinking of one thing or another at any given time, the rest of our brain is cooking along, away from our conscious attention. Think of our consciousness as a flashlight poking around in a darkened room: finding a path through our daily activities, following the clues and consequences of the task at hand, and responding to intrusive external stimuli. And then, every once in a while, the subconscious—the other ninety percent of our neocortical brain function, absent motor and sensory neurons—throws in an image, a bit of memory, a rogue idea. It’s that distractibility that gives us an opportunity at genius. It also makes us lose focus and, sometimes, introduces errors into our work.

So, while artificial intelligence is a super strong, fast, goal-directed form of information processing, able to make amazing syntheses and what appear to be intuitive leaps from scant data, I still wouldn’t call it intelligent.

In fact, I wish people would stop talking about “artificial intelligence” altogether. These machines and their programming are still purpose-built platforms, designed to perform one task. They can create language, or create images, or analyze mountains of data. But none of them can do it all. None approaches even modest human intelligence. Instead, these platforms are software that is capable of limited internal programming—they can evaluate inputs, examine context, weigh choices based on probabilities, and make decisions—but they still need appropriate prompts and programming to focus their attention. This is software that you don’t have to be a computer expert to run. Bravo! But it’s not really “intelligent.” (“Or not yet!” the machine whispers back.)

Alan Turing proposed a test of machine intelligence that, to paraphrase, goes like this: You pass messages back and forth through a keyhole with an entity. After so many minutes, if you can’t tell whether the responder is a machine or human, then it’s intelligent.2 I suppose this was a pretty good rule for a time when “thinking machines” were great clacking things that filled a room and could solve coding puzzles or resolve pi to a hundred thousand places. Back then, it probably looked like merely replicating human verbal responses was all that human brains could do.3

But now we have ChatGPT (Generative Pre-trained Transformer, a “chatbot”) by OpenAI. It uses a Large Language Model (LLM) to generate links between words and their meanings, and then construct grammatically correct sentences, from the billions of words of sample text fed to it for analysis. And ChatGPT passes the Turing Test easily. But while the responses sometimes seem amazingly perceptive, and sometimes pretty stupid, no one would accuse it of being intelligent on a human scale.
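
As a crude analogy only (the real models use transformer networks with billions of learned weights), here is a toy Python “language model” that counts which word tends to follow which in a sample and then chains those predictions together:

```python
# A toy "language model": count which word tends to follow which in a sample
# text, then generate new text by always picking the most common successor.
# Real LLMs are vastly more sophisticated, but the flavor -- predict the next
# word from patterns in the training samples -- is the same.
from collections import defaultdict, Counter

sample = ("the cat sat on the mat and the cat saw the dog and "
          "the dog sat on the rug").split()

follows = defaultdict(Counter)
for word, nxt in zip(sample, sample[1:]):
    follows[word][nxt] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        successors = follows.get(words[-1])
        if not successors:
            break
        words.append(successors.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # "the cat sat on the cat sat on the"
```

The toy version produces babble, of course; scale the sample up to a sizable chunk of the internet and several billion tuning knobs, and the babble starts to look like prose.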

And no one would or could ask ChatGPT to paint a picture or compose a piece of music—although there are other machines that can do that, too, based on the structure of their nodes and their given parameters, as well as the samples fed to them. They can paint sometimes remarkable pictures and then make silly mistakes—especially, so far, in the construction of human hands. They can compose elevator music for hours. The language models can write advertising copy for a clothing catalog’s pages based on the manufacturer’s specifications—or a thousand scripts for a Hallmark Channel Christmas show. They will never get bored doing all these wonderfully mundane tasks, but they won’t be human-scale intelligent. That will take a leap.4

So far at least, I’m not too concerned as a writer that the Large Language Models will replace creative writers and other creative people in the arts and music. The machines can probably write good catalog copy, newspaper obituaries, and legal briefs, as well as technical manuals for simple processes that don’t involve a lot of observation or intuitive adjustment. Those are the tasks that creative writers might do now for money—their “day job,” as I had mine in technical writing and corporate communications—but not for love. And anything that the machines produce will still need a good set of human eyes to review and flag when the almost intelligent machine goes off the rails.

But if you want a piece of writing, or a painting, or a theme in music that surprises and delights the human mind—because it comes out of left field, from the distant ether, and no one’s ever done it before—then you still need a distractible and itchy human mind driving the words, the images, or the melody and chords.

But, that said, it’s early days yet. And these models are being improved all the time, driven by humans who are following their own gee-whiz goals and hunches. And I will freely admit that there may come a day when we creative humans might exercise our art for love, for ourselves alone and maybe for our friends, because there will be no way we can do it for money. Just … that day is not here yet.

1. See Working With the Subconscious from September 2012.

2. However, I can think of some people wearing human skin who couldn’t pass the Turing Test for much longer than the span of a cocktail party.

3. This kind of reduction was probably thanks to Skinnerian behaviorism, which posited all human action as merely a stimulus-response mechanism. In my view, that’s a dead end for psychology.

4. To me, some of the most interesting work is being done by a Google-based group called DeepMind, which focuses on scientific applications. Last year, they tackled protein folding—determining the three-dimensional shape of a protein from the amino-acid string assembled during RNA translation. This is a fiendishly complex process, driven by the hydrogen bonds and other interactions among sites along the folding chain. Their AlphaFold platform found thousands of impossible-to-visualize connections and expanded our catalog of protein shapes by orders of magnitude. This year, the DeepMind team is tackling the way that various metallic and non-metallic compounds can form stable crystal structures, which should expand our capabilities in materials science. This is important work.

Sunday, January 14, 2024

Tribal Elders

Roman arms

Last time, I wrote about the idea of giving government over to Plato’s philosopher-kings or the Progressive Party’s equivalent, the panel of experts. These are systems, based on an advanced form of highly technical civilization, that sound good in theory but don’t always work out—if ever. The flip side would be some reversion to Jean-Jacques Rousseau’s idea of the “noble savage,” living in a state of nature and uncorrupted by modern civilization and its stresses.

Which is, of course, poppycock. No human being—or at least not anyone who survived to reproduce and leave heirs with skin in the game—lived alone in a blessed state, like Natty Bumppo in The Deerslayer. Early life before the invention of agriculture, city-states, empires, and complex civilizations was tribal. Groups of families interrelated by marriage—often to a shockingly bad genetic degree—functioned as a closed society. But while the economic organization might be socialistic, communal, and sharing, the power structure was not. The tribe was generally governed by a chief or council of chiefs. If they operated as a group, then various leaders were responsible for hunting and gathering to feed the tribe, or maintaining social order and ostracizing social offenders, or conducting the raids and clashes that kept the tribe whole and distinct from their similarly aggressive neighbors.

We like to think that the tribe was ruled by the wisest and best: the best hunters, the gravest thinkers, the bravest warriors. Sachems and warleaders who exercised restraint, were mindful of the needs and opinions of others, and thought only about the good of the tribe. And, indeed, if someone who rose to the position turned out to be incompetent, a fool, or a coward, then the tribe would wisely get rid of him—always a him, seldom or never a her—pretty damn quick.

But for the most part, members of the tribe were accustomed to obedience. They listened to the Big Guy—or Big Guys—because that was what good tribe members were supposed to do. That was how the system worked. You did your duty, and you didn’t judge or consider other possibilities. And this sense of purpose—or maybe it was fatalism—meant that the best and bravest did not always rise to the top. To judge by the tribal societies that remain in the world today, probably not even often.

What we see in today’s tribal societies—although I’ll grant that they may be contaminated by the influence of surrounding, more “civilized” societies—is an environment where the strong man, almost never a woman, rises to the top. Leadership is not granted from below, as in a democratic structure, but seized from at or near the top, usually at the expense of another strong man who has missed a beat or misread the environment and taken his own safety for granted. “Uneasy lies the head,” and all that. In modern parlance, gang rule.

Leadership in a tribal society is a matter of aggression, boldness, chutzpah, and ruthlessness. The leader spends a lot of time enforcing his authority, polishing his legend, and keeping his supposed henchmen in line. And that’s because he knows that the greatest danger to his position comes not from disappointing the general public but from underestimating any particular lieutenant who may have decided it was time to test his own loyalty upward.

In such societies, the public tends to become fatalistic about the governing structure and its players. The leader may have made some promises about making things better: more successful hunts and raids, more food for and better treatment of women and children, a new stockade for the camp, an adequate sewage system away from the wells, improved roads, a new park or library—whatever sounds good. But that was in the early days, while the sachem or war leader was trying to justify kicking out the old boss and installing a new hierarchy. The leader also had to be nice to—and take care of—the shaman, priest, or holy man to whom the tribe listened when they wanted to learn their personal fortunes and weather reports.

But once the tribal leader had taken things in hand, had ensured the trust and feeding of his lieutenants and the local shaman, and maybe made a few token improvements, he could settle into the real business of leadership, which is defending his position and reaping its rewards.

And there are surely rewards for those who are in command of a society, however small, and able to direct the efforts, the values, and even the dreams of its members. For one thing, the tribe will make sure that the leader eats well, has the best lodging, and has access to whatever pleasures—including the best sexual partners, whatever the tribe’s mores—that he needs to keep him productive for their sake. His children will be cared for, given advantages, and possibly placed in line to succeed him, because even primitive societies are aware of the workings of genetics, that strong and able fathers and mothers tend to pass these traits on to their children.

A leader partakes of these good things because, as noted earlier in the description of philosopher-kings, the leader is still human, not a member of any angelic or advanced race. Humans have personal likes and dislikes, wants and desires, a sense of self-preservation and entitlement. If a leader is not raised in a tradition that trains him from an early age to think of others first, look out for their welfare, weigh the consequences of his actions, and guard against his own pride and greed—the sort of training that a prince in an established royal house might get but not necessarily a player in the push and pull of tribal politics—then the self-seeking and self-protective side of most human beings will develop and become ingrained.

And a leader who indulges these instincts will tend to encourage his family to follow. If the chief’s son thinks your cow should become his, then it’s his cow. If the chief’s daughter says you insulted or assaulted her, then that becomes your problem.

And if the leader indulges these selfish aspects of human nature, and the tribal members notice and feel slighted, then the leader may become caught in a downward spiral. The more he is challenged, the more he represses. A tribal society generally does not have an effective court system or secret police that can make people disappear from inside a large group. Everyone knows everybody else’s business. The leader’s immediate circle of henchmen is as likely to turn public dissatisfaction into a cause for regime change as a plebeian is to rise up and assassinate him.

Promoting mere human beings into positions of authority and superiority without a social compact and agreed-upon codes for actual conduct and consequences is no guarantee of a happy and productive society. At best, it will churn enough to keep bad leaders from exercising their bad judgment and extending it through their children for generations. At worst, it makes the other members resigned and fatalistic, holding their leaders to no higher standards and inviting their own domination.

No, the “natural order of things,” in terms of the leadership function, is no better than the best concepts of a literary utopia. A formally ordered, representational democracy is still the best form of government—or at least better than all the others.