Sunday, November 11, 2018

What Is Good?

Vitruvian blood vessels

I have never concealed the fact that I am an atheist—although I sometimes sail under the flag of agnosticism, the state of “not knowing,” in order to avoid bar fights. I do not accuse or belittle people who have had an actual religious experience, heard the voice of God or the rustle of angel wings, and are guided by the principles of their religion. Peace be unto them. But I never had the experience, and I have neither the genetic makeup nor the cerebral or psychological components necessary to perceive that unseen whisper. At the same time, I am not, in G. K. Chesterton’s famous line, “capable of believing in anything.” I have my own principles, after all.

One of those principles is evolution. I have worked at both a manufacturer of biological pharmaceuticals and a developer of genetic analysis equipment. I know enough biology and have read enough about genetics and cladistics to appreciate that all life on Earth is related. The octopus is not an extraterrestrial alien dropped into this planet’s oceans—as some sources have recently claimed—but is cousin to the squid and the cuttlefish in the class Cephalopoda, just as human beings are cousin to the mouse and the lion in the class Mammalia.

Evolution is not just the “survival of the fittest,” as the popular saying goes. The evolution of a biological organism takes tiny changes in the genetic code, potentially effecting tiny changes in form and function, and then either eliminates them immediately—especially if the change is harmful or fatal to the bearer—or holds them quietly in the genome as a recessive or alternate copy of the gene until the features it engenders can come into play. The DNA/RNA/protein coding system has many built-in safeguards that make most random changes in the code neither immediately fatal nor immediately helpful. For example, the code is redundant: each codon is a three-base sequence calling for one of the twenty amino acids used in assembling a protein molecule, and most amino acids can be called by several alternate codons; so a change in just one of the “letters” will usually still create the intended protein. The system is robust—so that we can still have viable offspring and recognize them as human—and yet just fragile enough that changes are possible over generations.
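For readers who like to tinker, that redundancy is easy to demonstrate. The following is my own toy sketch—with a deliberately tiny, partial codon table, not the full genetic code—counting how many single-letter changes to a codon are “silent,” still yielding the same amino acid:

```python
# Toy illustration of the genetic code's redundancy ("degeneracy"):
# several three-letter codons map to the same amino acid, so many
# single-letter mutations are silent. Partial codon table only.
CODON_TABLE = {
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "GAT": "Asp", "GAC": "Asp",
    "TGG": "Trp",  # tryptophan has only one codon in the real code, too
}

def silent_mutations(codon):
    """Count single-letter changes that still code for the same amino acid."""
    amino = CODON_TABLE[codon]
    count = 0
    for pos in range(3):
        for base in "ACGT":
            if base == codon[pos]:
                continue
            mutant = codon[:pos] + base + codon[pos + 1:]
            if CODON_TABLE.get(mutant) == amino:
                count += 1
    return count

print(silent_mutations("CTC"))  # 3: any third-letter change still reads "Leu"
print(silent_mutations("TGG"))  # 0: every single-letter change alters the protein
```

In the real sixty-four-codon table the counts differ, but the pattern is the same: most point mutations at the third position change nothing at all.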

And those changes and their effects are not necessarily crude, achieving just basic survival or writing off the individual organism with a lethal deletion. The cheetah was not born to limp over the veldt in pursuit of its ambulating prey. Time and the millions of minute alterations to the genetic code governing the cheetah’s musculature, metabolism, and nervous system allow it to lope gracefully and efficiently, outrunning the swiftest antelopes and wildebeests, which are themselves adapted to run just fast enough—most of the time—to elude their predators. Evolution is not just a mechanism of survival but a mechanism of optimization, efficiency, and ultimately of temporary perfection.

I have called DNA the “god molecule,”1 but that is not because I worship it or think it has supernatural powers. The DNA/RNA/protein system is simply the instrument of evolution. It has created not only all the varied life we see on this planet but also—through the impact that life has had in shaping the atmosphere, seeding the oceans with abundant life, and covering the hills with vegetation and grazing animals that change their erosion patterns—the surface of our world itself. The original Earth, before the first bacteria and blue-green algae evolved to give it an oxygen-rich atmosphere, was as hostile to our kind of life as the surfaces of Venus or Mars are today.

But the principle of evolution applies to more than just organic structure and function. Most of the structure and function of human society and the approaches in any human endeavor, from technology to the arts, have advanced by a form of social evolution: small—but sometimes large—changes introduced into a complex situation, there to be either discarded, adopted, or further adapted. In rare cases, like the mathematical thinking of a Newton or an Einstein, a single person will make a significant change in human society and history. But for the most part, what one person starts another will then adapt and improve on, so that the seminal invention is lost in a continuous flow of minor and incremental developments. The inventions of the stirrup and the wheeled plow, with their migration during the Middle Ages from Asia into Northern Europe, are such examples.

In the same way, the structures of many human social concepts like love, justice, honesty, reciprocity, personal freedom, and other exchanges that we consider “good” and weave into the stories we tell are the products of social evolution. Human families, clans, tribes, city-states, and nations learned over time by piling one experience and its consequences on another that certain strategies of exchange either worked or did not. For example, they settled early on the basic understanding that habitual lying is harmful both to the people who must deal with the liar and ultimately to the liar himself. That fair dealing and reciprocal trade are a better system of exchange than theft and plunder. That hereditary servitude is not proper treatment for any thinking human being, and a society that practices slavery may flourish for a time but will eventually collapse. That love is a stronger bond and lasts longer than hate. And on and on. We learned these “home truths” at our mother’s knee and passed them down through the cultural wisdom of our clan and tribe long before some prophet wrote them on tablets of stone or bronze and suggested they were the teachings of the gods.

This does not mean that dishonesty, plunder, slavery, hatred, and other injustices don’t exist in the world. Or that sometimes these strategies of exchange will not work just fine in some situations—especially if there is no one stronger around to keep you from getting away with them. Ask the Romans, or the Mongols, the Nazis, the Soviets, and any of history’s other bent and crooked societies that have made a bad name for themselves. But thinking human beings, left on their own to study and consider the situation, will conclude that these negative strategies do not work for the long haul or for the greatest good of the greatest number of people.

Not only human society as a social construct but also the human nervous system as a response mechanism has evolved in tune with these beneficial strategies. Try taking from a toddler the treat that its mother has given and see if that tiny human brain does not immediately register and react to the unfairness of your action. Hear children on the playground taunting each other—perhaps even with names and descriptions having a superficial gloss of truth—and see if the recipient does not explode with anger at the perceived dishonesty. We all understand how the world works and know when others are practicing falsehoods and injustices upon our person and our sense of self.

It does not take a god from a burning bush with a fiery finger to write out the rules of what is proper and good in any human exchange. We know it from before we were born, because our brains and our society had already supplied the answer, hard-wired and ready to function. In the same way, we see the world in the colors for which our eyes were adapted, breathe the air for which our lungs were optimized, and recognize the adorable cuteness of babies and puppies because it is beneficial to both that our brains release the right endorphins at the sight of them.

Evolution says that we are at home in this world because we are the products of this world. And that is enough of a natural wonder for me.

1. See, for one example among others, The God Molecule from May 28, 2017.

Sunday, November 4, 2018

The Next Civil War

War devastation

It has been suggested for some years now, at least for the past decade, that this country is in the midst of a “cold civil war.” Disagreements of both policy and principle between the progressive left and the conservative right have reached a fever pitch. Factions are marching in the streets and attacking each other with bats and chemical sprays—although the fighting hasn’t reached the stage of firearms yet. Friendships are breaking up, sides have formed, and the lines are drawn on social media. I even know of one Facebook friend who seems ready to divorce her husband over his political views.1

We’ve been here before, back in the late 1960s, when I was at the university and the young people in college and the radical activists were protesting against the Vietnam War and in favor of civil rights and free speech. Back then, we had campus demonstrations, protest gatherings on the Washington Mall, and rioting in the streets—most notably outside the 1968 Democratic Convention in Chicago. The difference between then and now is that in the ’60s the radical view and its hard-line conservative response were both on the fringes of the political spectrum, while the two main parties could still conduct business in a relatively consensual, bipartisan fashion. Today, the two parties function in lockstep with their most radical elements. Discussion and votes in the Congress and decisions on the Supreme Court are divided along party lines with almost no crossover. The White House and the top echelon of the Executive bureaucracy swing back and forth with whichever party captures the Presidency.

On the one side, we have people who want to create and celebrate a “fundamental transformation” of the country’s political, economic, social, and environmental relations according to a perceived “arc of history.” On the other are those who don’t mind moving forward into the future by evolutionary steps but resist being pushed bodily through revolutionary action. Frustrations abound on either side, and with them come name calling, social shunning, brick throwing, and tear gas.

Some people are even speculating—myself among them, and mostly since the upheavals of the 2016 election—that the cold civil war will eventually turn hot. That our political and economic differences, our social and environmental positions, will reach a point where they can no longer be resolved by discussion and bargaining, by yielding on some points and advancing on others, to arrive at a national consensus. That the political crisis will demand a clear-cut winner and loser. That internal peace will only be achieved when one side or the other can no longer stand up for its position because its politicians and their supporters have—each man and woman—been economically subdued, personally incarcerated, or rendered dead. Or when the country has been divided by physical partition and personal and familial migration, as occurred between India and Pakistan in the late 1940s, with each party maintaining its own new national government.

The first American Civil War of the 1860s was a dispute between cohesive regions, North and South, Slave State and Free. But many people think the current differing viewpoints are too intermixed for the country to break and go to war along regional lines and across state boundaries. This view says that the coming hot war will be more like the Spanish Civil War of the 1930s, with neighbor fighting neighbor, party against party, for control of the cities and the countryside.

I can see the reasoning for either approach. In many ways, the opposing sides in this country reflect a divergence between urban progressives and rural conservatives. We keep seeing that map comparing the votes cast in Los Angeles County—which is just the urban core of the big place we think of as “LA”—and those in the seven states of the Upper Northwest, from Idaho to Minnesota. And really, even California is not a homogeneous polity, because the feeling in communities of the foothills of the Gold Country and in the Sierra is more conservative than the progressive politics of the big cities in the Central Valley and along the Coast.

But I can also see a breakup between regions. The states along the Pacific Coast, in the Northeast, and across the Upper Midwest are typically progressive, while the middle of the country is typically more conservative—with a few isolated exceptions like Colorado and New Mexico.

The question of how the country will break apart if and when war comes depends, in my mind, on what incident, what spark, finally sets it off. If the decisive point is internal, say, an election that fails to satisfy one party so greatly that it simply revolts, then we might see a piecemeal collapse as in the Spanish Civil War. But if the incident is external and the shock is to the whole country, then we might see a response that takes shape along regional and state lines.

The latter is the picture I painted as a leitmotif to my two-volume novel about life extension through stem-cell organ replacement, Coming of Age. There, the incident was the repudiation of the national debt.

When I was in college, my economics textbook said the national debt was irrelevant because it was just money that we owed to ourselves, financed by Savings Bonds held among the citizenry. No one was going to call in that debt; so the government could just keep financing it by issuing more bonds. As recently as 2014, however, almost half of our publicly held debt in the form of U.S. Treasuries, and a third of our total debt, was held by other governments and offshore banks. The biggest holders were China, Japan, Ireland, Brazil, and the Caribbean banks.

If these external holders wanted to collapse this country—which, given that our global economy is so interconnected, would be a foolish thing—they could simply sell off huge blocks of the U.S. Treasuries they now hold. The federal government would then have to scramble to make good on the sales, and so would likely impose massive economic restrictions and additional taxes on the American public. In my book, this prompts many of the states in the central part of the country—whose residents don’t feel they are well represented in the federal government’s spending decisions—to repudiate the debt and, along with it, their allegiance to the Union: either secede from the union or go broke by staying in it.

Under those conditions, many of the National Guard units would side with their home states. And many U.S. Army, Navy, and Air Force bases located in these states might weigh their allegiance to the national government against the conservative political instincts of their commanders and troops. The split would not be uniform. The choices would not be pretty. And once initial blood was spilled in the breakup, it would not be much more of a step to spill blood in establishing either national dominance or domestic partition.

In my novel, the breakup along these economic lines came in the year 2018. Of course, that year has now come and is mostly gone. But the weight of the national debt and the simmering divisions of our domestic politics still hang over us all.

I don’t look for war or want it. But my novelist’s ear listens to the rhetoric that is now splitting the country along its fracture lines, and I cannot discount the possibility of a shooting war coming to these United States sometime soon.

1. My late wife and I had opposing political views: she an old Berkeley liberal Democrat, me an unreformed Eisenhower-era conservative Republican. But we fell in love and married in an earlier time, some forty years ago, when political differences were treated in the same way as differences of religious doctrine and practice: a private, personal matter that did not touch on the essentials of what made a good person. My wife and I shared the same values about honesty, integrity, kindness, education, and fair dealing—and that was what mattered. For the rest, we joked about making sure we each went to the polls on election day so that we could cancel each other’s vote.

Sunday, October 28, 2018

Radicals and Revolution

Joseph Stalin

I am becoming more and more concerned about the direction of politics in this country.

It used to be—back when I was growing up, and for a decade or two afterward—that Democrats and Republicans could be civil to each other. They might differ on policy issues but they agreed about the principles and process of governing. Their disagreements were more about emphasis and degree than about goals and objectives. We debated a larger or smaller role for the public and private sectors, for amounts of taxation and regulation, for our stance in international relations, for the uses of military force versus diplomacy, and for other aspects of life in this country. But we could all agree that the United States was and would remain a country of individual liberties, rights, and responsibilities, functioning in open markets that supported investor capitalism, and with sovereign powers and responsibilities in the larger world, all under a government structure spelled out in the U.S. Constitution. These things were the bedrock of our society.

Yes, there were fringe groups. In the late 1960s, radicals on the New Left demonstrated not just against a war they didn’t like but actually in favor of our Cold War enemies in the Soviet Union and China, as well as for our “hot war” enemy in North Vietnam. They envisioned an end to capitalism and the introduction of a Communist-style command-and-control economy in this country. They wanted to overthrow the U.S. government in a violent revolution. The Students for a Democratic Society and their adherents marched for it, and the Beatles sang about it. But reasonable people in the mainstream parties ignored them—or called them “crazies”—and went about the business of governing.

Some of those New Left radicals grew up and matured but never abandoned their extremist principles. Saul Alinsky, the “community organizer” who mentored that generation, codified the approach in his signature work, Rules for Radicals. Bill Ayers, one of the founders of the Weather Underground, became an education theorist but remained in his heart a domestic terrorist. One of Alinsky’s admirers—she actually wrote her senior thesis at Wellesley College on him—was Hillary Rodham Clinton. A Democratic politician who was linked early on to Ayers in Chicago was Barack Obama. But Obama’s and Clinton’s associations can be written off as errors in judgment due to their youth.

Now, and for the past dozen years or more, the internal direction of this country has shifted. The radical viewpoint has grown from a splinter at the extreme left of the Democratic Party to a driving force in much of that party’s current policies and rhetoric. In recent years, many Democrats have expressed sympathy with “Democratic Socialism,” which joins political democracy with government ownership of production and distribution. Many Democrats now favor open borders and the surrender of national sovereignty. They want a free-form and non-originalist interpretation of the U.S. Constitution, including revision of the First Amendment, abolition of the Second Amendment, and elimination of the Electoral College. The largest unionized group in this country is now the public sector, which bargains for increased pay and protections from politicians who are not negotiating with their own money. That’s an idea that even so liberal a politician as Franklin Roosevelt thought disastrous and believed ought to be discouraged.

Since the presidential election of 2016, many Democrats have been in open revolt. Celebrities have called for bombing the White House. And seemingly rational people proclaim the current incumbent to be “Not My President.” Since the confirmation of Brett Kavanaugh to the Supreme Court went against the wishes of the Democrats in the Senate—largely because they had previously invoked the “nuclear option” of making judicial confirmation a simple 51-vote majority instead of the 60 votes formerly needed to override a potential filibuster—the party has called the legitimacy of the court itself into question.

People on the left now want the country run as a pure democracy rather than the republic of elected officials that the Constitution established. Without the Electoral College, which was put in place to give small states with scant population some parity with large states, the citizens of Los Angeles County would be able to outvote the citizens of the Upper Northwest from Minnesota to Idaho. Presidential elections are always strategic games of chess, trying to take and hold at least 270 electoral votes. The makeup of the Senate, with two Senators for every state regardless of size, protects the interests of small states that are out of sight and out of mind of large-state and big-city politicians. The Electoral College is part of the “checks and balances” put into the Constitution to protect minority interests from the whims and tyrannies of the majority.

What I fear in this new direction is what the left now seems openly to want: revolution. A country’s government is just laws and procedures, words written on pieces of paper or, worse, loosely preserved in bits and bytes somewhere in computer memory. What makes those words work is the belief of citizens and their elected politicians in the principles behind them. We agree to these things being true and necessary. We dismiss them at our peril.

The Constitution has set up a mechanism to change any part of it or add to its effective structure through the amendment process. But that process is hard, requiring any amendment to be proposed by a two-thirds vote in both the House and the Senate—or by a convention called at the request of two-thirds of the state legislatures—and then ratified by three-fourths of the states. The idea is that any change has to be popular enough and gain enough bipartisan support to pass this hurdle. We’ve done it twenty-seven times over the years, addressing issues as important as the voting rights of former slaves, women, and people age eighteen to twenty-one, and issues as ephemeral as the drinking habits of the average citizen.

The radical notion of changing or abolishing elements of the government or the Constitution itself by popular vote would invite a landslide. We have seen something of that kind operating in California, where initiatives are placed on the ballot by petition and voted on by the public at large. Those that pass acquire the force of law. Some initiatives have had good effects, some bad. But all are subject to the same verbal tricks and legal manipulations of any advertising campaign: claiming to support one thing while actually undermining or subverting it. Popular politics is a rough game to play.

The edifice of the U.S. government is fairly robust and can stand to have a few bricks knocked loose from time to time. But a popular onslaught in the name of “revolution”—whether meant picturesquely or in deadly earnest—could lead to a collapse. We’ve seen that happen before: France in 1789, Russia in 1917, Cambodia in 1975. The problem is that immediate and necessary changes in government always start with well-meaning people who have goals, a plan, and a vision for the future. But once the structure collapses, anything goes. The process starts with essentially kind-spirited liberals like Jean-Paul Marat or Alexander Kerensky; it ends with closed-minded tyrants willing to spill vast quantities of blood like Maximilien Robespierre and Joseph Stalin.

My fear is that the people so innocently dreaming of and calling for revolution today are the nation’s Marats and Kerenskys. If they get what they want, these people will go up against a wall within about six months. Waiting in the wings are the Stalins and the Pol Pots. And they will not be so liberal or kind-hearted.

Sunday, October 21, 2018

Mind Games

Subatomic particle

I am just finishing up Adam Becker’s book What Is Real? about the relationship between quantum physics and the real world it is supposed to represent. Becker tells a good story, especially as an introduction to the world of quantum physics, the players over the years, and the intellectual principles involved. His basic premise is that, while the equations that physicists use to predict the outcome of their experiments—and so test the value of those equations as representations of the underlying world of the very small—have consistently proven their worth, the physicists themselves remain in doubt as to whether the world that they are describing actually exists.

Without going into the entire book chapter by chapter, the issue seems to be one of describing a world so small that we cannot detect it without changing it. Atoms and their component protons, neutrons, and electrons—plus all the other subatomic particles in the Standard Model—are not fixed in space like pins on a board. As with everything else, they move, as do galaxies, stars, and planets. However, instead of occupying observable orbits and tracks across the night sky, atoms mostly jitter with the thermal agitation behind what’s called “Brownian motion,” and electrons buzz frantically and randomly around their nuclei like flies in a cathedral.

We can detect the larger celestial bodies—and even masses as small as freight trains and automobiles—with visible light without much danger of moving or deflecting them. Bounce a few hundred thousand photons off a teacup, and you will not move it one millimeter. But the subatomic particles are so small, and the wavelength of light we can see so long, that the light misses the particle entirely, passing over and under it with no impact. Imagine that the wavelength is a long piece of rope that two girls are spinning in a game of Double Dutch. If a human-sized person enters the game and performs unskillfully, the rope has every chance of hitting—that is, detecting—his or her body. But if a flea jumps through the game area, the chances of that long, curved rope ever touching its body become vanishingly small.
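The mismatch of scales behind the Double Dutch analogy can be checked on the back of an envelope. The numbers below are my own round illustrative figures, not Becker’s:

```python
# Rough scale comparison: visible light versus an atom.
# Mid-range green light has a wavelength of roughly 550 nanometers;
# a hydrogen atom is on the order of 0.1 nanometers across.
visible_light_nm = 550.0
atom_diameter_nm = 0.1

ratio = visible_light_nm / atom_diameter_nm
print(ratio)  # ~5500: the light wave is thousands of times longer than the atom is wide
```

The “rope” really does dwarf the “flea” by three to four orders of magnitude, which is why probing atoms takes photons of far shorter wavelength—and correspondingly higher, more disruptive energy.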

To detect subatomic particles, physicists must use other particles, as if in a game of subatomic billiards, or photons with much shorter wavelengths and thus having much higher energies. A high-energy photon impacting a moving electron or proton will change its direction of motion. So the issue in quantum physics is that when you locate the particle you are observing here, it’s now no longer there but going somewhere else. In quantum physics terms, no particle has an exact position until it’s observed, and then it has some other position or direction of movement in response to the observation. Mathematically, the particle’s supposed position can only be defined by probability—actually, a continuous wave function that defines various probable positions—and this wave “collapses” into a single definite position at the place and time of your observation.
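That probability rule—squaring the wave function’s amplitudes to get the odds of each outcome, with one definite result per observation—can be mimicked in a few lines of code. This is my own toy sketch, not anything from Becker’s book, and of course real quantum states are not Python lists:

```python
import random

# Toy sketch of probabilistic "collapse": a state is a list of
# amplitudes over possible positions; the probability of finding the
# particle at position i is |amplitude_i|^2, and each measurement
# returns a single definite position.
def measure(amplitudes, rng=random.random):
    probs = [abs(a) ** 2 for a in amplitudes]
    total = sum(probs)            # ~1.0 for a properly normalized state
    r = rng() * total
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i              # the wave "collapses" to this position
    return len(probs) - 1

# An equal superposition over two positions gives ~50/50 outcomes:
state = [2 ** -0.5, 2 ** -0.5]
counts = [0, 0]
for _ in range(10_000):
    counts[measure(state)] += 1
print(counts)  # roughly even, e.g. close to [5000, 5000]
```

Each individual call yields one definite answer; only the long-run tally reveals the underlying probabilities—which is exactly the situation the physicists’ measurements put them in.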

Well and good. This is what we can know—all that we can know for sure—in the world of the very small.

The first issue that Becker’s book takes up is that most of the original proponents of quantum physics, including Niels Bohr and Werner Heisenberg, took this lack of certain knowledge to an extreme. Called the “Copenhagen interpretation,” after Bohr’s institute in Denmark, their view insists that the entire point of quantum physics is the manipulation of the results of observation. The measurements themselves, and the mathematics that makes predictions about future measurements, are the only things that have meaning in the real world. The measurements are not proof that subatomic particles even exist, and the mathematics is not proof that the particles are doing what we think they’re doing. To me, this is like calculating the odds on seeing a particular hand come up in a poker game, or counting the run of cards in a blackjack game, and then insisting that the cards, the games, and the players themselves don’t necessarily exist. It’s just that the math always works.

Other physicists—including Albert Einstein—have been challenging this interpretation for years. Mostly, they pose thought experiments and new mathematical formulas to prove them. But the Copenhagen interpretation persists among quantum physicists.

A second issue in the quantum world is the nature of “entanglement.” Here two particles—two atoms, two electrons, two photons, or any other bits of matter that oscillate with wave-like energy, or waves that at the instant of detection appear as singular objects—become joined so that what one of them does, the other will do. This joining and the parallel actions persist through random occurrences—such as passing through a polarized screen—and are communicated instantly across distances that would violate the limit of light-speed travel for any object or piece of information. Here is the sort of “spooky action at a distance” that Einstein derided as a violation of special relativity.

A third issue in quantum physics is the nature of Schrödinger’s cat. To illustrate the limitations of measurement, Erwin Schrödinger proposed the thought experiment of putting a cat in a sealed box with an apparatus that releases a poison when triggered by the decay of an atomic isotope. Since the atomic decay is unpredictable, the cat in the box might be alive or already dead. It was Schrödinger’s point that until an observer opens the box, the cat exists in two “superposed” states—both alive and dead at the same time, expressed by a wave function of probability—and that the wave function does not collapse and reveal the cat’s final nature until the box is opened. As a thought experiment, this is a metaphor for measurement and observation. But some physicists insist that the superposition is real. The actual cat is physically both alive and dead until discovered.

This superposition has led some physicists to describe a splitting of the universe at the point of the box’s opening: one universe proceeds with a physicist holding a live cat; the other with a physicist mourning a dead cat. This is the “many worlds” interpretation. Both universes are equally valid, and both continue forward in time until the next quantum change that forces each universe to split again in some other way.1

Now, I freely confess that I do not have the mathematical skills to understand the equations of quantum physics. And mercifully, Adam Becker’s book does not focus on or discuss the math in detail, just the thought experiments and their supposed meaning. I also confess that I do not understand what condition enables two particles or two waves to become “entangled,” or how they interact at a distance in this state, or what might be required to untangle them. Becker does not explain any of this, either. Further, I confess that I can sometimes be simpleminded, rather literal and obvious about what I see, hear, and know, and oblivious to distinctions and nuances that other people perceive easily.

But, that said, it would seem to me that what we have here is a misinterpretation of a metaphor. The limitations of observation and measurement, as expressed in colliding particles and probabilistically dead cats, are simply reminders that we do not have direct perception of the quantum world in the same way that we can see, hear, touch, and taste, if necessary, a steam locomotive or a billiard ball. That’s a good thing to keep in mind: we don’t have all knowledge about all things. However, to insist that this metaphorical reminder means that quantum physicists are simply doing math, and that their calculations—no matter how enticingly predictive—have no meaning in the real world, that quantum physics is just a mind game … that’s taking things too literally.

I have criticized the use of mathematics to prove the improbable before.2 And I insist again that, if all you’ve got is a series of equations to prove your point, you may just be playing mind games with yourself and your fellow physicists. But the reverse is also true: the real world must exist at the quantum level. If the math works out, if the vision behind it holds together, then it must be describing something that has actual substance and energy. The details may not be exactly as we understand them. The description may be missing some elements, forces, or bits of math that we haven’t worked out yet. But the world must exist in the smallness of subatomic particles as much as it does in the vastness of stars and galaxies.

The math doesn’t exist in a quiet vacuum. The cards, the game, and the players must also exist to give the calculations meaning.

1. I have cheerfully used the many-worlds interpretation in my novel The Children of Possibility, about time travelers from the far future, and in its prequel The House at the Crossroads. But I know I’m having fun and don’t take this stuff too seriously. So much fun, in fact, that I’m now working on the sequel that picks up where Children left off.

2. See Fun with Numbers (I) and (II) from September 19 and 26, 2010.

Sunday, October 14, 2018

Courage in Authority

King Lear’s Fool

We have a young man on the board of directors of our condominium homeowners association who is consistently negative. He routinely predicts disaster in every situation. If someone proposes a solution, he calls for more consultants, more bids, more analysis, more legal review. He always criticizes other board members’ proposals and decisions for their failure to “do their homework,” their lack of “due diligence,” or their breach of “fiduciary responsibility.” If he offers a solution of his own, it is numbingly complex—if not self-contradictory—and hedged with so many technical and legal caveats that it becomes simply unworkable.

He has been responsible at times for bringing the entire organization into a state of paralysis. And if other board members vote for a motion that seeks to override his objections, he always votes against it or abstains, in order to preserve his right to later criticize the decision. Yet he never considers—or offers to take responsibility for—the negative consequences of action postponed or prevented by his criticisms and time and money spent on considering his objections.

If this young man, his attitude, and his effect on the organization were unique to our homeowners association, this might make a good story but would hardly rise above a curious local anecdote. The truth is, we see this kind of negativity too often in our current politics on both a local and a national level—and too often in the corporate and other spheres. Problems are insurmountable. Solutions are insufficient, infeasible, unprincipled, illegal, or unconstitutional. Nothing can be done but, at the same time, the situation cannot be allowed to continue.

The position of the naysayer, the delayer, and the critic is an easy one to assume. It involves no great courage to demand that the organization take more time to consider, seek another opinion, gather more data, investigate all possibilities.1 The organization usually places no blame if we don’t perform an action, approve a decision, praise or support a member, or confirm a vendor. For if the action or decision is not made, or the person or situation is left in a state of uncertainty, there is no discernible result that might later be examined and criticized. It’s a no-lose position for any member of a group to take.

What requires courage is to take action, make a decision, or give your approval and blessing to another person or group. Of course, the action might fail, the decision lead to disaster, and the person in question turn out to be a liar, a thief, or a scoundrel. Those possibilities always exist. The best that anyone can do is make a judgment based on available data, personal experience, imaginative projection, good founding principles, and common sense. After that, the outcome is in the realm of probability or—in an older view—the lap of the gods.

Any position of authority requires such courage. Even when an organization has a second in command, a board of directors or council of advisors, a legal and technical staff, and an on-site actuary, most decisions come down to one person willing to act—or to formulate and spread a vision upon which others can take action. Any deliberative body, like a senate, assembly, parliament, or a condo board will, on any one issue, look to the person who will take the lead to find or imagine a solution, provide arguments for it, defend it against its critics and naysayers, and call for action or a vote.

That person must inspire confidence among those who will vote for the solution or be required to act on its implementation. They must believe he or she is a person of integrity, sound judgment, and experience. Moreover, they must believe he or she is acting in the organization’s best interest and not for personal advantage.

But still, the person in authority is taking a risk. If the action or solution fails, the proposer or promoter will be labeled a failure along with it. Even if the proposal had a unanimous vote behind it, the leader who complains, “But we all agreed …” is taking a weak position. The rest of the organization will simply respond, “Yes, but we agreed with you!”

This is why we ask of people in authority that they possess and demonstrate courage along with their other qualities of experience, judgment, integrity, and sobriety. The CEO of a corporation, the captain of a vessel, and the pilot of an airplane are all required to take responsibility for their actions. They must make judgments, recommend and follow courses of action—sometimes in an instant and without recourse to advice, consultation, and second opinions—and trust that the people around them—subordinates, employees, crew, vendors, suppliers—will perform appropriately. And if the performance of the people undertaking the action, or the mechanism of the ship or plane itself, were to fail, then the CEO, captain, or pilot must stand ready to take the blame. If the person in authority did not have this courage, then the company would never do anything, the ship never leave the dock, and the plane never leave the ground.

It’s a simple lesson: Action takes courage. Delay is not always wise or safe. And the path forward leads upward and requires strength.

1. For the role of the leader in making a decision, see the story of “five heartbeats” in The Immediacy of Life from April 29, 2018.

Sunday, October 7, 2018

The New Conservatism

Lenin on a Tribune

A. Gerasimov, Lenin on a Tribune

I believe there’s a common feeling among those who follow politics and economics, based mostly on the labels assigned, that “conservatives” want things to stay the way they are, while “progressives” want things to move forward.

Conservatives are supposed to yearn for the political, economic, and social conditions of their youth. In my case that would be rock-n-roll, ducktail haircuts, the postwar boom, Eisenhower political blandness, and stable nuclear families living in suburban housing with good schools. There were some downsides to be sure: duck-and-cover drills, Jim Crow segregation, Formica in loud colors, and Melmac dinnerware. But all in all, for the white middle-class majority, it was a good time to be alive in America. We didn’t see the social and economic problems or, if we did, we minimized them.

Progressives are supposed to look ahead to better times, which means focusing on the things that need to change right now. For most progressives these days that would be income inequality, industrial and automotive pollution, environmental damage and anthropogenic climate change, racial inequality, binary gender inequality, capitalist winners and losers, housing shortages, healthcare governed by insurance companies, and cultural hostility for “the other” leading to rampant hate speech. Sure, there are some good things: advances in renewable energy, administrative regulations on industry and finance, progressive income taxes, union protections, feminism, and the #metoo movement. But these things are not enough—may never be enough—when what is needed is a true social, cultural, and economic revolution to make people equal in both their expectations and outcomes, happier with their lives, and kinder to each other.

But are these labels correct?

I believe many conservatives have a forward-looking approach in many areas, including politics and technology. They believe the social and economic climate is improving all the time, compared to the situation fifty, a hundred, or two hundred years ago. They believe in continued evolution in this regard, but not abrupt revolution. Much of their expectation is based on humankind’s increasing knowledge and technological capability, derived from the application of scientific and humanitarian principles originating in the Enlightenment of the 17th and 18th centuries.

In contrast, many progressives seem to be in the position of tacit conservatives. They don’t trust evolutionary change in social, political, or technological conditions, largely because such change is not predictable or guided by the principles to which they subscribe. In other cases, they actually want to preserve a static world which is safe and predictable until they choose to change it through a directed revolution.

Let me suggest three areas in which this is so.

First, union protections. The history of unionism has been one of fighting changes in technology and working conditions that might affect the number and skill levels of jobs, or require workers with seniority in a craft to learn new skills or enter new positions. The classic example of this tendency was “featherbedding” in the railroads during the 1930s and ’40s, preserving the jobs of firemen who stoked the boilers on steam engines after the railroad companies converted to diesel-electric locomotives. An earlier example was the hand weavers who tried to destroy and ban mechanical textile mills because the machines put them out of work. Unions consistently choose older ways of working over new efficiencies if it means that certain jobs and skills will become outmoded. This is a bid for stasis over advancement and is, at least in spirit, non-progressive. What they will make of artificial intelligence and increasing automation in the workplace is totally predictable.

Second, capitalism itself. The basis of market-driven economics and capital investment is “creative destruction.” Every product and service, every company that provides products and services, competes in the marketplace for consumer attention and dollars. Consumer favoritism and brand loyalty only go so far—and not far at all if a product line or service deteriorates in terms of quality, usefulness, price, or some other dimension that customers value. Sometimes, however, frivolous products or variations are introduced and sold; the classic example is Bernie Sanders’s complaint about “twenty-three kinds of deodorant.”1 But by and large, new and useful products are coming all the time: consider the personal computer and the internet revolution.

Capitalism in a free market means giving people what they want, even if it means giving them what they only think they want—or what you can convince them to want, or deceive them into wanting. Capitalism is not predictable and directed, but decidedly uncontrolled. Sixty years ago, when I was a child, everyone confidently predicted that my car would fly by the time I was middle aged. But no one, looking at the basement full of vacuum tubes or single transistors that was then the state of the art in computing, predicted the development of the integrated circuit, the microchip, and telephones that would eventually replace cameras, stereo systems, movies and television, telegrams, libraries, and retail stores. Creative destruction is a wild and woolly territory—just ask a taxi driver whose radio-dispatched cab is being replaced by a cellularly summoned Uber or Lyft driver.

We’ve seen enough of the command-and-control economies that were spawned from social and economic revolutions in the 20th century to know how they operate. They were all focused on preserving the status quo in terms of products, processes, and services. None of them developed the advances in computing, personal communications, or consumer goods—let alone medical technology and energy infrastructure, to name a few more areas—that we have steadily enjoyed in the capitalist West.2

Third, the environment. Is the climate changing? Oh yes! It was changing before modern industrialization and transportation fueled by coal, oil, and gas began increasing the atmosphere’s carbon dioxide load. We live on a planet with a precession in its orbit, under a variable star, with an active geology based on plate tectonics. We have gone through periodic ice ages, glaciations, warming and cooling periods, and occasional long winters due to volcanic eruptions ever since humans started recording their history—even before, if you count all the cultures with a flood story in their mythology.

Sea level rises and falls, deserts grow and shrink, forests advance and retreat, rivers change their course, all without the influence of human activity. Life has evolved on this planet to adapt to these changes. Every extant individual and species was shaped to take advantage of a particular environmental niche—except humans, of course, who use their big brains and clever hands to build shelters and machines that let us exploit areas where we otherwise could not live. Since those environmental niches—particularly the ones with marginal populations—are changing all the time, some species must either adapt, move, or die out. It matters not how picturesque or precious a species might be, if it lives too close to extinction in terms of diet or tolerance for environmental stress, it will eventually disappear. In the long run, no one can save the panda.

And yet the current crop of environmentalists would try to prevent this change wherever possible. They want a static world in which every river, swamp, and forest remains unchanged, where every butterfly and exotic plant can be preserved. They want to fix the world’s climate at some preferred set point—usually around the time and temperature of their childhood—and maintain it … forever.

Even the politics of the progressives is frozen in place and time. Their view of “the arc of history” is guided by a 19th-century view of social and economic order as prescribed by Marx and Lenin and then communicated by the anti-war radicals and anti-capitalist activists of the 1960s. It is a world view that values world peace at the expense of national sovereignty and the primacy of human-muscle labor at the expense of technological advancement. If they were alive today, Marx would not be a Marxist, and Lenin would be busily adapting and promoting some other social and economic creed.

I believe we are at a time of great confusion over labels and intentions. I also think we are at a time that demands a new teaching, a new world view, a new politics and economics that is neither “conservative” nor “progressive” but adopts a new social and philosophical stance entirely.

I just wish I knew what it was.

1. I’m sure all the ladies out there wouldn’t mind using my brand of deodorant, which has the image of a sailing ship on the package. Or that Bernie wouldn’t mind using the Secret brand—“Strong enough for a man, but made for a woman.” One of the comments about life in Russia in the 20th century was the prevalence of “Soviet scent,” as if one smell would fit all bodies.

2. To be fair, none of them made flying cars, either.

Sunday, September 30, 2018

Retroactive Prime Directive

Alien landing

In the Star Trek universe—in case you don’t follow the series—there is a rule called the Prime Directive. It forbids the Federation’s interstellar explorers from interfering with the civilizations they discover, especially the more primitive societies. Visitors to new civilizations are forbidden from offering advanced technologies or, in some cases, even revealing that they come from beyond the stars. The intention is to preserve the unique nature of these developing civilizations and allow them to achieve whatever their native skills, cultural qualities, and particular history will enable them to become. Many of the various Star Trek series include stories where the Prime Directive is tested and ultimately found to be wise and appropriate.

Of course, in the Progressive future world depicted by the series, the Prime Directive is an antidote to and an apology for Western imperialism. This is the world, or the galaxy, done right the first time. This is the situation in which an advanced civilization—the enlightened, gracious, Western European–based explorers of Star Fleet—“boldly go[es]” to “seek out new life and new civilizations” and then carefully and studiously leaves them alone. No educating the natives here. And certainly no enslaving them and making off with their trade goods and raw materials.

It’s a pretty picture. An ideal of self-restraint. But is it real?

In the Progressive doctrine, the New World as discovered by 15th- to 19th-century Europeans embodied many such primitive civilizations. The “Native Americans,” the people who were here first—but only after crossing the Bering Sea land bridge at the end of the last Ice Age—were still living a mostly Stone Age existence. The hunter-gatherers of the North American plains needed something on the order of twenty square miles of open land to feed one family throughout the year, several thousand square miles or more to feed a whole tribe. The city-based civilizations of Central and South America practiced slash-and-burn farming and so could feed more people on less land, but they still were primitives compared to European farmers and their tools, and these populations were more vulnerable to climate cycles.

In either case, the North American tribes and civilizations possessed no horses—until, that is, the Spanish came and a few of their herds went feral in the wilderness. The natives had no iron, certainly no gunpowder, no simple machines, and not even the wheel. Their spears and arrows were tipped with bits of knapped flint, and the “swords” of Central American warriors were clubs edged with flaked obsidian. The Maya had an advanced form of ideographic writing and sophisticated mathematics, as well as pretty good skills with stonework. The Inca of South America had a flair for hydraulic engineering equal to that of the Romans. But still, these were largely Stone Age peoples.

They also weren’t particularly peaceful or gracious themselves. The Aztecs and the Maya both practiced human sacrifice. The tribes of the plains went to war against each other long before the Europeans showed up. Widows and the aged in the tribe who had no one left to support them would be exiled and exposed. Life was hard. People died.

The modern, Progressive view that the Europeans came into the New World, committed genocide against the peaceful natives, enslaved the survivors, and stole their lands and raw materials is a compelling narrative. But absent a Western culture imbued with some kind of 15th-century Prime Directive, it is not a realistic one.

With the exception of small groups—prospective traders like Christopher Columbus, who was only seeking a passage to the markets of Asia; explorers and cartographers like John Cabot and Amerigo Vespucci, who were commissioned by royalty and functioned not unlike the explorers of Star Trek; and Portuguese and Basque fishermen, who landed in what was to become New England in order to process their catch of the Grand Banks cod fish—most of the Europeans who came to the New World were people seeking a new life, new land, refuge from persecution, and freedom from the religious restrictions, economic repressions, and monarchical wars of Europe. Some also came as transported convicts, who had no choice but indentured servitude until they could escape into the wilderness. These Europeans did not come to observe, study, and make a map. They came to stay and hoped to prosper.

One can imagine such people—the Pilgrims or the Spanish conquistadors—arriving on the eastern shores of the New World and exercising some form of Renaissance Prime Directive. “Oh my! There are already people living here! And they have formed stable hunter-gatherer—or in some places slash-and-burn—cultures capable of their own eventual development. It is not our place to intrude. We must preserve their heritage on their own land. We will now withdraw and not disturb them.” Maybe the Pilgrims could have found an isolated and uninhabited island somewhere else to establish their spiritual sanctuary. Maybe the conquistadors could go and invade some established neighbor who was both culturally and technologically equivalent, like Morocco, and had the ability to fight back.

That is not, however, the way these things work. And it’s not just because Europe had experienced its own invasions from the dawn of prehistory: the Dorians, the Ionians, and the Sea Peoples coming into Greece; the Romans into the rest of the Mediterranean and Western Europe; the Celts, Huns, Goths, Vandals, and Visigoths into Rome; and the Saxons, Danes, and Normans into England. The history of the world has been that of roving bands moving in on and pressuring their neighbors, when they weren’t carrying out explicit wars of conquest like the Mongols and the Muslim Caliphate. The fact that the New World pitted Stone Age people with flint spears against Iron Age invaders with horses, the wheel, and gunpowder is a tragic accident of history, but it was not unprecedented.

When we first meet an intelligent species out among the stars, let us pray that we are the explorers and that our interstellar drives, dense energy sources, potential weaponry, and coherent organization allow us to be at least culturally and technologically equivalent to whomever we find. Then perhaps we can afford to follow our own Prime Directive. But if we meet that extraterrestrial species as it comes here to Earth, where the advantages in energy, weapons, and sophistication lie with them, then we had better prepare to either make friends fast and learn their technology even faster—or, in the words of Homer, “fall on the ground and bite the dust.”

In my opinion, it has never been a good strategy, in the words of Blanche DuBois, to “depend on the kindness of strangers.” People possessing advanced skills and their own intentions will not wait upon the less developed.