Sunday, November 29, 2020

The Wichita Lineman

Powerlines

Glen Campbell’s “Wichita Lineman”—a poignant song from my college days—has been running through my head. And after ten years working in the electric and gas utility business, back in the 1980s, I still can’t get the inconsistencies out of my mind.

I am a lineman for the county
And I drive the main road,
Searchin’ in the sun for another overload.

Where to begin? Well, first, “lineman” is not usually a county function. In most service territories, the line crews are employed by the public utility that owns, operates, and maintains the powerlines. So I thought we had a difficulty right there in the first line. But, on investigation, I found that Wichita is served by the Sedgwick County Electric Cooperative, so in that case the lineman probably does work for the county.

But to continue … In most utilities, the linemen don’t drive around looking for trouble. That job—which is a promotion above the classification of lineman—goes to a “troubleman.” This person works alone, as in the song, and does patrol the distribution and transmission lines. And when the troubleman finds trouble, he—sometimes nowadays, but rarely, a she—calls out a line crew to fix it. Note that while some powerlines, especially distribution lines in neighborhoods, have rights of way along “the main road,” most of the higher-voltage transmission lines cut across country. So the troubleman spends more time on back roads and in the dirt than on easy main roads.

And then, what kind of trouble does this person look for? An “overload” implies a powerline that is carrying too much electricity. That is, the utility is operating it at too high a load for conditions. Electricity passing through a wire meets resistance and sheds some of its energy in the form of heat. On hot and windless days, the heat can start to melt—or at least soften—a one-inch cable made of twisted aluminum strands, so that it sags between towers or poles and threatens to touch the trees, brush, or even the ground along the right of way. That would be an obvious problem. And maybe in Kansas, in 1968 when the song was written, utility operators wouldn’t know when they were overloading their lines and causing fires. These days, powerlines are monitored by remote sensing equipment, and the operator varies the load to match conditions.
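
For a rough sense of why overloading matters, here is a back-of-the-envelope sketch in Python. The resistance and current figures are illustrative assumptions, not data for any real circuit; the only point is that resistive heating grows with the square of the current.

```python
# Back-of-the-envelope resistive heating in a distribution conductor.
# All figures are illustrative assumptions, not data for any real line.

resistance_per_km = 0.1   # ohms per kilometer of conductor (assumed)
normal_current = 400.0    # amperes under normal load (assumed)
overload_current = 600.0  # amperes under an overload (assumed)

def heat_per_km(current_amps: float) -> float:
    """Power shed as heat along one kilometer of wire: P = I^2 * R."""
    return current_amps ** 2 * resistance_per_km

normal_heat = heat_per_km(normal_current)
overload_heat = heat_per_km(overload_current)

print(f"Normal load: {normal_heat / 1000:.0f} kW of heat per km")
print(f"Overload:    {overload_heat / 1000:.0f} kW of heat per km")
print(f"Ratio:       {overload_heat / normal_heat:.2f}x -- heat rises with the square of the current")
```

A 50 percent rise in current thus produces roughly two and a quarter times the heat, which is why a hot, windless day can push a heavily loaded line into a sag.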

What most troublemen are looking for is the location of a known fault. Sometimes, in a high wind, two of the three phases on the line—carried in the three separate wires strung from pole to pole—come together, short out, and cause a fault. Then, in a modern powerline, a device called a “recloser” works like a fuse: it pops, interrupting the flow of current, then tries to close again in case the touch was momentary. If the fault persists, it stays open. The troubleman finds the hanging recloser, gets out a long extension pole, and closes it. But more often the trouble is a line brought down by someone driving into a pole or by the wind dropping a tree branch into the wires. Then the troubleman calls out a line crew to fix it.
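
The reclose-and-lock-out behavior can be pictured as a simple retry loop. The toy sketch below assumes a made-up retry count and delay rather than any real device’s settings.

```python
import time

# Toy model of a recloser: trip on a fault, try to reclose a few times,
# then lock out if the fault persists. Retry count and delay are assumptions.

def recloser_cycle(fault_is_present, max_attempts: int = 3, delay_s: float = 2.0) -> str:
    """Return 'closed' if the fault clears on a retry, 'locked out' if it persists."""
    for attempt in range(1, max_attempts + 1):
        if not fault_is_present():
            return "closed"        # a momentary fault (wind-slapped phases) has cleared
        print(f"Attempt {attempt}: fault still present, tripping open")
        time.sleep(delay_s)        # wait before trying to reclose
    return "locked out"            # persistent fault: time to call out a line crew

# Example: a momentary fault that clears after the first trip.
events = iter([True, False])
print(recloser_cycle(lambda: next(events), delay_s=0.1))
```

In the persistent-fault case the device stays open, and it falls to the troubleman to find it and restore service once the cause is cleared.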

I hear you singin’ in the wire;
I can hear you through the whine.
And the Wichita lineman is still on the line.

I understand that you can sometimes hear the wind in the wires and think it’s your loved one singing. If you hear a whine or a buzz, then you’re not dealing with a neighborhood distribution line. More likely it’s a high-voltage transmission line, and the sound is caused by moisture or dust carrying some of that current outside the wire and along its surface, like a giant Tesla coil. And yes, a lineman or troubleman might hear this too.

I know I need a small vacation
But it don’t look like rain.
And if it snows that stretch down south won’t ever stand the strain.
And I need you more than want you,
And I want you for all time.
And the Wichita lineman is still on the line.

All right, this one’s a definite no. Well, maybe in Kansas, where the rain comes straight down and the wind doesn’t blow. In California, however, rain means winter storms with lots of wind. That means lines coming down and lots of trouble to repair. You don’t want your troublemen and linemen going on vacation then. You want everyone on call and ready to roll at all hours of the day or night. And maybe, in Kansas on the prairie, you can get so much snow that it weighs down the lines or the towers and causes them to collapse. But if a troubleman knew of a weakened section vulnerable to such weather, he would have put in an order to repair or replace it. Maybe the utility company or cooperative has not yet executed that order, but prudent management suggests you do that while the sun is shining and not wait for the snow to take out part of your system.

And then, this lineman is thinking about his lady love and needing her and wanting her. Some senior member of the crew is going to slap him upside his hardhat and tell him to get his mind on the job and get back to work. That’s the reality.

But then, “Wichita Troubleman” would have been a different type of song. And none of it would have rhymed.

Sunday, November 22, 2020

On Abortion

Human embryo

I try not to be too overtly political on these blogs, because I have friends and readers on both sides of the aisle. Also, I am generally in the middle on most issues, polling about three points right of center on a scale of one hundred either way. But this issue has been batting around in my head recently, and writing it down is a way to get it out.

First, let me say that I’m not generally against killing. I mean, we humans do it all the time. I eat meat and feel unashamed. I support war as the last resort of the beset and desperate. War means killing. War means horror. But war is what you do when negotiations break down and surrender is not an option.

Like most Americans, I think, I am not opposed to abortion when it’s done in the first trimester. At that time the embryo is still developing and doesn’t have much of a nervous system, so not a lot of sentience. While I believe life begins at conception, the early-stage fetus is still just the potential for human life. A lot of things can go wrong in a pregnancy and do. And a miscarriage in the first trimester is more a dashing of the parents’ hopes than the destruction of a human person. Still, I don’t like abortion as a birth-control option, because it’s invasive and it seems to teach the woman’s body how to miscarry. But if she is beyond the option of less invasive measures and still wishes not to be pregnant, that will be her choice.

The second and third trimesters become, for me, more problematic. The fetus is developing a nervous system, sensation, and—if we can believe the memes that would have you play Bach to your belly, talk to it, and send happy thoughts in that direction—some self-awareness. Destroying a fetus in these circumstances becomes more like the murder of a human person than the destruction of a clump of cells. I have some feelings about that, and so do many Americans, I think.

Abortion at the moment of birth—what the term “partial-birth” abortion would seem to describe—is, in my mind, really the killing of a viable human baby. I understand that the birth may be induced for this purpose, but that does not make it better. I also understand that the people who advocate for this are less concerned with the mental or physical health of the mother than they are with the legal standing of the abortion issue. They are absolutists and legal purists: if abortion in principle is not made an absolute right at all stages of a pregnancy, then it can be challenged and overturned at any stage, including the moment of conception.

I am not an absolutist or a purist about anything. So the appeal of this argument leaves me cold. I believe people should be responsible for their actions: if a woman decides she does not want a child, she should make up her mind in the first three or four months, not wait until the baby is almost born. Bending the law and common sense to accommodate her every whim is not good practice.

Also, abortion at the end of a pregnancy crosses a line that, I think, most moral people are unwilling to cross. If it’s acceptable to kill a baby at the moment of birth, then why not two weeks later? Does the child keep you up at night? Do you have regrets about becoming a parent? Smother it! Does your two-year-old daughter throw tantrums and grate on your nerves? Drown her! Did your sixteen-year-old son borrow the car and dent it? Shoot him! Did your sixty-year-old daughter tell you it’s time you were put in the old folks’ home? Stab or poison her!

Again, I’m not completely opposed to killing—at least not when it’s done with cause. But I do believe people should take responsibility for their actions. And their response to pressure, aggravation, and opposition should be proportional to the incident, where casual murder would be an extreme reaction. While I don’t have an absolute respect for life—again, not absolutist about anything—I do believe being careful and respecting the rights of other sentient beings, especially among your own species, is a moral good. It’s something to strive for, even if we cannot always attain it.

Now, many women are also saying, with reason, “My body, my choice.” This is to say that other people, men in particular, and society in general, have no right to an opinion in the matter of abortion, nor should they be allowed to make rules about it. And, in my view, this is true up to a point—that point being somewhere in the second trimester. Until then, the fetus might be considered just a “clump of cells,” not much different from a tumor, and certainly without a separate identity or viability, perhaps with the potential for humanity but not exactly a human being. But after that point—and we can debate where to draw the line—the fetus becomes a separate entity with sensation, some awareness, and more than just potential. At that point, the woman is hosting, sharing her body with, another living being. And whether you like the biological facts or not, that becomes a societal concern.

To say, “My body, my choice” about the entire process, up to and through the stage of actually giving birth, is like someone saying, “My dog, my property.” After all, they own the dog, bought it, fed it, took care of it in their fashion, and can now decide what to do with it. If a dog owner wants to beat it, starve it, leave it out in the cold chained to a tree, or even abandon it alongside the road, then society should have no say in the matter. The dog is a wholly owned possession that may be disposed of at the owner’s whim. Right?

That’s one view of the legal process about ownership and responsibility, but most of us would disagree. A world where such mind-your-own-business callousness is the societal norm would be a cold and unfeeling place, without pity or concern for the weak and defenseless.

That is not a world I want to inhabit.

Sunday, November 15, 2020

The Unexpected Candidates

Puppet master

Something very strange is going on. Or, put another way with more emphasis, what the hell is going on? Or, as we used to say back in, I think, the 1960s or early ’70s, “Who the hell for President.” Simply stated, the American electorate over the past decade and maybe more has been choosing, or perhaps being offered, the most surprising, least expected, and sometimes least qualified candidates for the highest office in the land.

The Presidency is the most prominent and most powerful popularly elected position in the country. It ranks above the Speaker of the House, who is elected only by the members of the House; above the Majority Leader in the Senate, who is elected only by the senators of the majority party; and above the Chief Justice of the Supreme Court, who is appointed by the President and confirmed by the Senate. Of all the key players in our national government, the President is the only one we all get together and choose, first in the primary elections or party caucuses in each state and then in the national election.

Yes, the Republican National Committee and the Democratic National Committee have great influence on how the candidates of each major party are chosen. The national committees solicit and direct funding for campaigns and write the rules for party organization and for choosing delegates to their national conventions, where input from the primaries is reduced to votes for and against potential candidates. And sometimes the national committees, whose members and influence may not be publicly recognized—that “smoke-filled room” thing—put their fingers on the scale. In both parties, the votes at their national convention include both “pledged” delegates, representing the results of the primary election in their state, and “unpledged” delegates, who presumably can vote their conscience, or the desires of the party structure, or whatever.

Up until 2018, the Democrats had a large number of “superdelegates” in this position, representing members of Congress, governors, and past Presidents. They could vote however they themselves wanted or at the direction of the party. After 2018, the superdelegates were forbidden to vote on the first ballot of the convention, effectively letting the people decide that much, unless the outcome was beyond doubt. In 2015, the Republicans ruled that unpledged delegates had to vote in accordance with the popular vote in their state’s primary election.

And then, there is the matter of whether the state holds an open, semi-open, or closed primary election, reflecting when and how people not registered with a particular party can vote for the candidates of other parties. Only thirteen states and the District of Columbia have closed primaries, where the voter is only offered the choice of candidates within his or her registered party affiliation. Fifteen states have semi-closed primaries, where only independent voters may choose among candidates on any of the affiliated ballots, or may change their registration on election day. Fourteen states have open primaries, where the voter chooses the party ballot on election day. Others, including my own California, have some kind of blanket primary, where the voters choose from a roster of all candidates from all parties.

So how much actual choice any individual voter has in the selection of the final candidates put forward on the November ballot is open to question.

Still … what the hell has been happening? Sometimes, the party’s candidate has been around so long, raised so much money, or tried often enough that the national committee, the primary voters, and the delegates decide that, come what may, “it’s his [or her] time.” This is apparently what happened when the Republicans selected the cold-natured Bob Dole to run in 1996 and Democrats promoted the unlovable and sometimes questionable Hillary Clinton in 2016. In both cases, party loyalists had to grit their teeth and vote the platform. At least, both candidates had solid careers in the Senate, and Clinton had been Secretary of State, a high and influential office in any administration.

But in 2008, the Democrats nominated Barack Obama, a junior senator with limited government experience, with sealed transcripts and a ghost-written autobiography—but selected presumably because he was the only obvious Black candidate, and “it was time”—and the Republicans nominated John McCain, an established senator from Arizona but one who had voted against his party’s interest so often that he felt like an independent.

In 2012, the Republicans nominated Mitt Romney, a businessman, son of the former governor of Michigan, and chief executive of the organizing committee for the 2002 Winter Olympics. He was a nice enough guy, but still not ready for the presidency.1

In 2016, the Democrats finally decided it was Hillary Clinton’s “time,” narrowly edging out the senator from Vermont, Bernie Sanders, whose party affiliation is officially “independent” and who unabashedly claims to admire socialism. And the Republicans passed over a dozen able candidates with political experience, including governors and senators as well as a nationally prominent businesswoman with executive experience, to choose Donald Trump, a real estate magnate and reality-television star with no background in electoral politics.

And then in 2020, we almost got Sanders as the Democratic nominee, but he was passed over at the eleventh hour in favor of Joe Biden, a long-time senator, vice president under Obama, previous candidate for president—but also a man of obviously frail and perhaps failing mental and physical health. If it was “his time,” that was sometime in the past. Biden was joined on the Democratic ticket by Kamala Harris, the junior senator from California and former state attorney general, who dropped out of the field of presidential candidates before her first primary. These are hardly charismatic personalities.

It used to be that candidates for the highest office in the land would have extensive political experience, usually as a governor running one of the larger states or as an influential and long-serving member of Congress, at least as a senator. But lately we have seen a parade of candidates chosen for some other reason. And not all of them have outstanding service in some other line of work, such as Dwight Eisenhower in the 1952 election after a leadership role in winning World War II.

It is almost as if the parties, or the people themselves, are devaluing the office, saying “Who the hell for President.” And this is at a time when Congress defers more and more of the details in the laws they pass to the judgment of unelected bureaucrats in the Executive branch and lets the legality of those laws be decided in cases before the Supreme Court. You would think that the person who appoints the senior executives in the administration, sets its day-to-day tone, can veto legislation, and nominates the federal judges and Supreme Court justices should be a person of proven capability, probity, and reasonable judgment.

Instead, we seem to get more than our fair share of nonentities and, sometimes, thinly disguised crooks and buffoons. Who chooses these people? What the hell is happening?

1. And it was only in the last year or so that he became the junior senator from Utah, gaining the political experience that he should have had eight years ago.

Sunday, November 1, 2020

Electricity's Dirty Secret

Power lines

For the decade of the 1980s I worked in the Corporate Communications department of the Pacific Gas & Electric Company, PG&E, one of the largest energy companies in the country, with a service territory covering most of Northern California. One of the biggest things I learned from this time—aside from the fact that your local utility is made up of good people who support their community—is that there are many ways to generate electricity and the key to choosing among them is economics rather than technology.

By a quirk of geography and history, PG&E had—and still has, for all I know—one of the most diversified generating systems in the country, although some of that generating capacity has since been spun off to private owners and suppliers. The company inherited a network of dams and flumes in the Sierra Nevada that provided powerful water jets for hydraulic gold mining in the 19th century, and these were easily converted to run turbines in hydroelectric powerhouses up and down the mountains. It had four large thermal power plants—steam boilers driving turbine generators—that drew on the company’s abundant natural gas supplies for fuel. PG&E also operates smaller units that burn the gas to directly drive turbines, similar to a grounded jet engine attached to a generator. It built a major nuclear power plant at Diablo Cove in San Luis Obispo County, and built almost two dozen generating units drawing on The Geysers geothermal steam field in Sonoma and Lake counties. It draws power from the world’s largest photovoltaic power plant, on the Carrizo Plain, also in San Luis Obispo County, and from the Shiloh wind-power farm in the Montezuma Hills along the Sacramento River in Solano County, among others. The company buys electricity from the Western Systems Power Pool (WSPP) and the California Independent System Operator (CAISO).

With all of this diversity, PG&E’s energy cost is relatively low, depending on factors like snowfall in the Sierras to feed those dams and the state of the aquifer feeding the steam fields. The company does not draw on enough renewable energy—yet—to be much affected by variations in wind and sunshine across the state.

If the state ever fulfills its promise to get rid of all fossil fuels and provide all power from renewables like wind and solar, the remaining nuclear and geothermal assets will not be able to make up the difference from those abandoned gas-fired power plants.1 There is talk of covering the shortfall on windless days and dark nights with some kind of energy storage: batteries, compressed air in underground chambers, or superconducting materials that let an electrical charge chase round and round in a donut-like torus. None of these technologies has been tried or proven at the scale needed to supply a utility grid. There is also talk of mandating solar-powered roofing in all new housing and retrofitting existing roofs, with inverters to convert the panels’ direct current to household alternating current and with batteries to supply energy on dark days and at night. Aside from the initial cost and payback time, generally measured in tens of years, these plans are intended—at least in the promoters’ dreams—to put the local utility entirely out of business.2

The dream of “free electricity” without fuel costs or emissions, using wind and solar power, runs into some basic engineering realities involving energy efficiency and capital cost.

In making these technologies work, the engineer has to move from conceptual design—linking up components, energy flows, and costs in back-of-the-envelope calculations and drawings—to detail design—putting the components in place at the right scale, establishing the true cost of each component, and accounting for variables like heat loss and line losses.3
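
As a tiny illustration of the gap between the two stages, here is a sketch with assumed numbers; the 10 percent line loss is an assumption drawn from the range cited in the footnote.

```python
# Conceptual vs. detail sizing of a remote renewable plant.
# The target load and the 10% line-loss figure are illustrative assumptions
# (the footnote cites a 5% to 15% range, depending on distance).

target_load_mw = 100.0      # power the end users actually need (assumed)
line_loss_fraction = 0.10   # energy lost to resistance and heat in transmission (assumed)

# Conceptual design: just build 100 MW of generation.
conceptual_capacity_mw = target_load_mw

# Detail design: the plant must generate enough to cover the losses as well.
detail_capacity_mw = target_load_mw / (1.0 - line_loss_fraction)

print(f"Conceptual capacity: {conceptual_capacity_mw:.0f} MW")
print(f"Detail capacity:     {detail_capacity_mw:.1f} MW "
      f"({detail_capacity_mw - conceptual_capacity_mw:.1f} MW just to feed the wires)")
```

At the conceptual stage the two answers look close; at the detail stage the extra eleven megawatts have to be designed, bought, and paid for.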

Engineers constantly work with another variable set, too. For them, there is no such thing as perfection, no solution that is best under all conditions. Everything is a tradeoff in the engineer’s world. Instead of “good” and “bad,” the engineer thinks in terms of “better” and “worse.” You can make electricity with a gasoline generator—if the EPA and county authorities will approve it—or with a hand crank, or by rubbing a silk scarf on a glass rod. The question is always—and this is what I learned at PG&E—at what site, with what investments, and using what fuel supply at what cost? How attractive or interesting or politically correct the technology might be is not a factor.

Solar photovoltaics—generating an electric current by using the energy in sunlight to pass an electron through a semiconductor substrate—is about 20% to 22% efficient, even in cells and panels of the highest quality. This means that nearly four-fifths of the solar energy that falls on them is lost to heat or reflection. And how that efficiency is affected by dust or a layer of snow and ice is still undetermined in large-scale applications, although probably not for the better. Perhaps, in time, research into new materials can boost that efficiency up to maybe 30%, but much further than that doesn’t seem to be in the cards.
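
To put the efficiency number in concrete terms, here is a minimal sketch. The irradiance, panel area, and efficiency values are round-number assumptions, not measurements of any particular product.

```python
# What a 20%-efficient panel delivers under bright midday sun.
# Irradiance, area, and efficiency are illustrative assumptions.

irradiance_w_per_m2 = 1000.0   # clear midday sunlight striking the panel (assumed)
panel_area_m2 = 1.7            # roughly one residential panel (assumed)
efficiency = 0.20              # 20% conversion efficiency (assumed)

incident_power = irradiance_w_per_m2 * panel_area_m2
electrical_output = incident_power * efficiency
lost_power = incident_power - electrical_output

print(f"Sunlight striking the panel: {incident_power:.0f} W")
print(f"Electricity produced:        {electrical_output:.0f} W")
print(f"Lost to heat and reflection: {lost_power:.0f} W ({lost_power / incident_power:.0%})")
```

Under these assumptions a panel of about a square meter and a half yields a few hundred watts at best, with the remaining 80 percent of the sunlight going to waste heat and reflection.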

Wind turbines can capture roughly half of the energy in the wind passing through the rotor, against a theoretical maximum of about 59% (the Betz limit). This is comparable to the energy efficiency of a gas turbine or thermal power plant. But wind farms require the right conditions, a place with strong, steady, and predictable winds. Like a geothermal steam field, such locations are a resource that can’t be established by fiat or political rezoning. And wind turbines, like any machine dealing with strong forces, are subject to mechanical stresses on the blades and shafts. Although their energy resource is free, the capital equipment needed to harvest it is expensive, not easy to maintain—that is, a heavy generator on a tall tower, sometimes sited on a hilltop, is harder to fix than a turbine generator under cover in a power plant—and subject to depreciation and eventual replacement.
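
The standard back-of-the-envelope formula for a single turbine is P = 0.5 · ρ · A · v³ · Cp, where A is the swept area of the rotor and Cp the fraction of the wind’s energy actually captured. The sketch below plugs in assumed values for rotor size, wind speed, and power coefficient; it is meant only to show how sharply output falls when the wind slackens.

```python
import math

# Rough output of one wind turbine: P = 0.5 * rho * A * v^3 * Cp.
# Rotor size, wind speed, and power coefficient are illustrative assumptions.

air_density = 1.225        # kg per cubic meter at sea level
rotor_diameter_m = 100.0   # assumed mid-size utility turbine
power_coefficient = 0.45   # assumed capture fraction, below the 0.59 Betz limit

swept_area = math.pi * (rotor_diameter_m / 2) ** 2

def turbine_output_mw(wind_speed_m_s: float) -> float:
    """Electrical output in megawatts at a given steady wind speed."""
    watts = 0.5 * air_density * swept_area * wind_speed_m_s ** 3 * power_coefficient
    return watts / 1e6

print(f"Swept area:       {swept_area:,.0f} m^2")
print(f"Output at 10 m/s: {turbine_output_mw(10.0):.1f} MW")
# Power scales with the cube of wind speed, so a modest drop in wind
# cuts output by roughly two-thirds.
print(f"Output at 7 m/s:  {turbine_output_mw(7.0):.1f} MW")
```

Because output varies with the cube of the wind speed, even a well-sited wind farm needs something else on the system ready to pick up the load when the wind drops.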

Either of these fuel-free, renewable resources would require the participating utility to maintain a commensurate amount of “spinning reserve”—an alternate, dispatchable generating resource all fired up and ready to come on line instantly to pick up the system load that is dropped when the wind dies or the sun goes behind a cloud. In most cases, this reserve power would have to come from fossil fuels, because the small amounts of electricity available from hydro and geothermal power, and the supply from an operating nuclear plant, would already be spoken for. And some form of “battery backup” on a systemwide basis is not currently technically or economically feasible.

And finally, fusion—the dream of limitless energy by harvesting hydrogen isotopes from sea water and compressing them with laser blasts or electromagnetic fields—is still ten years away. Always ten years away. Yes, we can make deuterium and tritium fuse with either compression technology; we just can’t make them give off more energy than we must put into the reaction. For now, it seems, the only way to fuse hydrogen into helium reliably is to compress it in a steep gravity field, like the inside of a star. Until we find some magical gravity-manipulation technology, utility-scale fusion is going to remain a dream.

All of these renewable technologies—except for fusion—have their place in a diversified system. None of them is ready, yet, to satisfy all of our energy needs. And a modern economy runs on ready availability of energy the way ancient economies ran on resources of clean water and food. Maybe in a few hundred or a thousand years, when we have run out of conveniently obtained fossil fuels, we will develop efficient and low-cost solar4 or fusion power. But for right now, we run on bulk carbon energy.

And no amount of wishing will make it otherwise.

1. Of all the fossil fuels, natural gas is the most efficient in terms of high energy output with low carbon dioxide emissions. This is because the methane molecule (CH4) carries four hydrogen atoms for every carbon atom and burns cleanly and completely, so a larger share of its combustion energy comes from turning hydrogen into water rather than carbon into carbon dioxide. Other carbon sources like coal and oil either burn incompletely or tend to put soot particles and other contaminants into the atmosphere along with greater amounts of carbon dioxide.

2. Of course, manufacturing plants that need large amounts of electric power to run their operations—more than rooftop solar can supply, like steel mills, auto factories, shipyards, and other heavy industries—can either run their own generating stations or leave the state.

3. Building a solar- or wind-power farm—whose energy resource and efficiencies are generally weaker than a thermal plant’s, and which will usually have to be sited some distance from the end user—must take into account energy lost to resistance and heat on a transmission line. This loss is usually reckoned at 5% to 15%, depending on the distance traveled.

4. Probably from orbit, as in my novel Sunflowers, where sunlight has an energy density of 1,300 watts per square meter instead of the 130 W/m2 that strikes the Earth’s surface.

Sunday, October 25, 2020

Are Empires Always Evil?

Roman arm

If you read science fiction, the Empire is always evil, the Emperor is always a villain, and his officers and minions—we’re looking at you, Darth—are always either toadies or supervillains. It was so in the Star Wars movies and the Dune books. Generically, if there is an empire involved in the story, it is a bad place and meant to be fought against by the forces of light, reason, and goodness.

Perhaps this is a cultural spillover from the political view—generally held by Marxists and Soviet-inspired Leftists—that all the troubles of the modern world stem from “imperialism.” And by that they usually mean the empires built by white Europeans in Africa, the Middle East, Asia, and South America. The equation is: “Empire bad, local governance good”—even when local governance is at the tribal level without any political refinement. And that equation holds right up until the empire in question is one managed by Soviets or Chinese Communists, and then the benefits of central control by a foreign power structure are not to be questioned.

The cultural spillover also derives from the depiction of Rome and its ancient Mediterranean empire from the Judeo-Christian viewpoint. That is, from the troubles the Romans faced in the province of Judea, particularly when Rome tried to impose its statist, polytheistic religion on people who only believed in one, true god. This dispute ended with the Siege of Jerusalem in 70 A.D. and the Jewish Diaspora. That jaundiced view of the Roman Empire was also fed to us by the persecution of Christ under Pilate and of Christians in general under the empire—until Constantine legalized their religion three hundred years later.

But was Rome an evil empire? Was life there such a hardship?

First, let’s count the negatives. For starters, most people outside the City of Rome itself were added to the empire through conquest. You started off by trading with Rome at a distance, then getting a road built into your territory, then seeing an army march in along that road, and then you had to fight for the right of self-determination. Sometimes the army came first and the road came second—to make it easier for Rome to send reinforcements and hold you down. Almost nobody welcomed Rome at first. But let’s be fair: when the Romans marched in, what they were fighting was mostly the local king, the ancient families who held positions of power, and the armies they could recruit and command. Whether the war was short—as in a few campaigns by Caesar among the Transalpine Gauls—or long—as in all that unhappiness in Judea ending in the reduction of the capital and a bloodbath—was usually a matter of whether and how involved the average person, the peasant in the fields, became in the struggle. That, and the cohesive nature of the civilization that the Romans were attempting to absorb. Gallic and German tribesmen were culturally similar but independence-minded and locally divided, and by the standards of the day they were primitive. Judea was an advanced civilization with a unified culture, strong central government, and firm beliefs.1

Next, the issue of slavery. Rome had it and didn’t apologize for that. But then, so did most of the lands and kingdoms it conquered. Unlike slavery in the American South, though, Roman slavery was not race-based. Having a certain heritage and skin of a certain color did not make you a slave, subject to harassment and capture even after you were freed. Roman slaves entered captivity by losing a battle—all those wars of conquest—or by resisting so strongly that the Romans made an example of a whole family or town by selling them into slavery. Or you could become a slave after being found guilty of a crime or through indebtedness—having pledged your person as collateral for a loan. Still, a Roman slave was property and could be abused, sexually exploited, tortured, and even summarily executed—although it generally didn’t profit an owner to damage or destroy his or her property. But also, Roman slaves could earn their freedom, and Rome eventually legislated slave protections such as being able to lodge complaints against their masters and to receive medical care in sickness and old age. And finally, in the ancient world, as in much of the world today, unless you held a piece of property or were trained and engaged in a skill or trade, you always had someone standing over you and making demands on your labor, your time, and ultimately your life. Still, it was better to be a citizen of Rome than anyone’s slave.2

And then, there was tribute. As a Roman province, you were put under the administration of a governor known as a propraetor or proconsul—usually an ex-consul or senior government official out to make his fortune after years of public service. The Roman administration was there mostly to collect tribute—so much to be paid each year in gold or trade goods—or to secure some necessity that the City of Rome needed, such as grain from Egypt, which was the ancient world’s breadbasket. Along with the governor and his administration came the tax collectors, who were not always honest and not always working directly for Rome. It was hard being someone from an old family, landed, wealthy, or otherwise locally important in a newly established Roman province. But, as noted above, life was hard all over—still is in many ways.

And now, some of the good things. First, you were generally cleaner and safer inside the Roman Empire than out of it. The Romans were creative and compulsive engineers, and wherever they went they took with them their construction skills and their preference for clean water and a relaxing bath. They built huge aqueducts not just to serve the City of Rome but throughout the empire to provide clean water and introduce the concept of regular bathing to the general population. And you tended to be safer because the Roman administration frowned upon casual banditry—an occupation reserved to the state—and introduced a proven code of laws suitable to civilized urban living.

Next, your worldview and access to trade expanded. The Romans transmitted knowledge and trade goods from one end of the Mediterranean basin to the other and into the hinterlands beyond. If you were part of the empire, you were a citizen of the world. That meant, for a person with ambition, an increase in opportunity and income. And for a citizen, either in the city or the countryside, who might not have owned a piece of property or engaged in a lucrative trade, there was always the army. You signed up for 25 years of service with the legion. After that time, if you survived, you were generally awarded land and a living in the province where you had fought or maintained order—and by then you usually had a local wife and children. Being a Roman soldier was more dangerous than being, say, a farmer out in the hinterlands—except for that casual banditry—but it wasn’t a death sentence, either. The Roman legions fought with a disciplined cohesiveness and regular tactics that tended to minimize wounding and death and favored applying massive and concentrated force against their enemies. It was good to be on the winning side.

And finally, if you were a good ally and willing supporter of Rome, you eventually became a Roman citizen yourself. You had to bathe, speak and read Latin, and obey the law, of course. No hot-headed rebellion—which anyway would be quickly crushed, at least in the times that the Republic and then the Empire were a going concern. Eventually, you could move to Rome itself and become part of the elite. And the consensus seems to be that, in the ancient world, the best time to be alive was Rome in the second century—that is, between 100 and 200 A.D. Not only was the weather mild—the “Roman Warm Period”—but the Mediterranean world was generally at peace. It was a lull between the political chaos of the Hellenistic Age and the rising cold and invading barbarians of the encroaching Dark Age.

There is a reason people submit to the rule of empires and emperors. Whether the Islamic Caliphate, the Mongol Empire, the Ottoman Turks, or the British Empire, the food is usually better, the arts and sciences richer, the trade more expansive, the rule of law generally gentler and less oppressive than the dictates of a local king or brigand, and the average person has a sense of being part of something really grand. Also, under the Romans, you got a hot bath, and under the British, a flush toilet. Not bad for minding your own business and occasionally tugging the forelock.

1. And Egypt was just a mess, having been conquered by Alexander three centuries earlier and then mismanaged by the Ptolemies.

2. The taint of slavery did linger, however, even after a person was set free through the process of manumission. “Freedman” was a separate class in Rome from “citizen,” although freedmen who had previously been owned by Roman citizens could vote and their children became citizens. Still, in the Republic it was rumored that the general and statesman Gaius Marius, one of the “New Men” whose family originated in the allied Italian states and not in the City of Rome itself, had slaves in his ancestry. This was considered a blot on his character.

Sunday, October 18, 2020

Too Many Superheroes

Superhero

It’s no secret that our movies, television, and to some extent also our popular fiction are inundated with superheroes.1 The main characters, or the essential focus of the story, are people with some physical or mental enhancement: super strength, x-ray vision, the ability to fly, increased lifespan, or genius-level perception. And I would include here people who are otherwise separated from the human race by exceptional circumstances: vampires, witches, fallen angels, and the victims of medical experimentation.

These movies, television shows—series, I guess you call them now, with extended story arcs—and books are aimed at the young adult, the middling young, and the young at heart. The trouble is that, in my view, they tend to arrest the normal human development from child to functioning adult.

Life’s problems, which all of us must deal with, cannot be solved by punching through walls, seeing through doors, outsmarting your enemies with a genius IQ, or becoming immortal. A functioning adult has to use the skills and knowledge developed through hard work, proper choices, and good use of time in order to gain confidence, capability, and self-esteem. These things cannot be granted by birth on another planet, a medical advance, or a fortuitous afterlife. There are no shortcuts to growing up.

One of my favorite science-fiction series is Frank Herbert’s Dune books, telling the fantastic far-future history of the accomplished Atreides family. The series actually climaxes in the fourth book, God Emperor of Dune. The main character there is Leto II, who is the ultimate superhero: emperor of the known universe, served and protected by fiercely loyal people, commanding a superb fighting force, as well as being virtually immortal, physically invulnerable, able to predict the future, and able to access the living memory of every one of his ancestors and so the entire history and example of all humanity. And yet, in Herbert’s brilliant style, he is brought down by two skilled but not super-powered human beings who resist being his slaves. The book is really the anti-superhero story.

To be an adult is to possess hard-won knowledge, to develop skills that cannot be acquired magically or through a pill or genetic manipulation, to have endured experiences that are both constructive and destructive and enable you to know and understand the difference, and to become adept at foreseeing and dealing with the consequences of your actions. All of this must be learned. It must be acquired by having hopes and dreams, working toward them, and sometimes—maybe often—seeing them dashed. It is acquired by working through your problems, paying attention to what happens and when, remembering those consequences, and formulating rules of living both for yourself and your children, if you have any. This is the process that every child, every young adult, and every post-adolescent goes through. If you are lucky enough to survive, you keep learning and updating your internal database through adulthood and into middle and old age. Perfecting who you are should never stop until you draw your last breath.

And that is the final lesson. To be an adult includes the sober knowledge and acceptance of the fact that you, personally, in your own self, will one day die.2 This is not a cause for grief, fear, rage, or despair. Humans die, animals and plants die, bacteria and funguses can be destroyed, cell lines come to an end. Even rocks and whole mountains wear away to dust and silt, then break down into their component atoms, and rejoin the cycle of life on this planet. In my view, this is the key understanding of the human condition. We are not immortal. We have no lasting power over death, only good fortune and small victories. We only have the strength of our bodies, the power of our intelligence, and the focus of our wills. That is all we human beings can command.

When you know that you will eventually die, then you know how to value your life, your time, and your effort here on Earth. To be willing to sacrifice your life for something you believe is greater than yourself, you have to know how to value your remaining time. This is a rational decision that our brains were designed to make—if they are not clouded by the veil of hope that we, in our own bodies, just might be immortal. That hope protects us when we are young and stupid and have little experience of death. It is a foolish thing to carry into adulthood and middle age, when we are supposed to know the truth and act accordingly.

Oh, and in addition to what we can command and accomplish as individuals, we can also work together, pooling our achievements and our knowledge over time. We can raise vast cathedrals, each person adding his own carved stone or piece of colored glass. We can build a body of scientific knowledge by researching and writing down our findings in a discipline that we share with others. We can join a company—in the oldest sense of that word, whether an economic enterprise, a body of troops, or a group of travelers—to attempt and achieve more than a single human can do. And if we cannot do any of these things directly, then we can support the efforts of others by mixing mortar for their cathedral, serving as an archivist of their scientific endeavors, or becoming the financier, accountant, or quartermaster to that company in whatever form it takes.

Any of these tasks shared with other humans requires knowledge of yourself and your limitations, a willingness to hold your own dreams and desires in check and subordinate them to the common will, and a readiness to take and give orders for the good of the common effort. And this is another aspect of becoming an adult: to put aside the me-me-me of childhood and adopt the us of a collaborative group.

Superheroes, in fiction and on the screen, leap over these everyday problems and concerns. If they experience disappointment and existential angst at all, it is usually focused inward, on their supposed powers and their failure when they meet a foe who exhibits a greater power. But it’s all a conception of, and played out in the mind of, the graphic artist, the writer, or the film director: the presumed power, the challenges, and the intended result. And, curiously enough, the superhero always manages to win in the end. That is the way of fiction.

Real life involves dashed expectations, failed attempts, physical and mental limits, rejection by loved ones, and sometimes rejection by society itself. It is what a person does with these situations, using only the strength and wits, skills and knowledge, that he or she has acquired through conscientious development, that marks a successful human being. And ultimately the extinction of body and mind comes for us all. If you’re not dealing soberly with these things—and superheroes don’t—then you remain a species of child.

Those developing-adult stories, dealing with growth and change, are really the ones worth telling.

1. In fact, about fifteen years ago, when I was still trying to find an agent for my science-fiction writing, one potential candidate asked, “Who is your superhero?” That was the literary mindset: the main character had to have extraordinary powers for any book that could hope to be optioned for a movie—and back then selling a million copies and making it to the big screen had become the sole purpose of publishing. Maybe it still is, for all I know. But Covid-19 and the closing of the theaters might change all that.

2. I believe I first read in a Heinlein story—perhaps Stranger in a Strange Land, although I can’t find the reference—that the difference between a child and an adult is the personal acceptance of death. To that, one of the characters in the conversation replies, “Then I know some pretty tall children.”

Sunday, October 11, 2020

Modeling Nature

Mandelbrot fractal

A saying favored by military strategists—although coined by Polish-American scientist and philosopher Alfred Korzybski—holds that “the map is not the territory.”1 This is a reminder that maps are made by human beings, who always interpret what they see. Like the reports of spies and postcards from vacationing tourists, the observer tends to emphasize some things and neglect or ignore others. Human bias is always a consideration.

And with maps there is the special consideration of timing. While the work of a surveyor, depending on major geographic features like mountain peaks and other benchmarks that tend to stand for thousands of years, may be reliable within a human lifespan, mapmakers are taking a snapshot in time. From one year to the next, a road may become blocked, a bridge collapse, a river change course, or a forest burn—all changing the terrain and its application to a forced march or a battle. If you doubt this, try using a decades-old gas station map to plan your next trip.

This understanding should apply doubly these days to the current penchant for computer modeling in climatology, environmental biology, and political polling. Too often, models are accepted as new data and as an accurate representation—and more often a prediction, which is worse—of a real-world situation. Unless the modeler is presenting or verifying actual new data, the model is simply manipulating existing data sources, which may themselves be subject to interpretation and verification.

But that is not the whole problem. Any computer model, unless it becomes fiendishly complex, exists by selecting certain facts and trends over others and by making or highlighting certain assumptions while downplaying or discarding others. Model making, like drawing lines for topological contours, roads, and rivers on a map, is a matter of selection for the sake of simplicity. The only way to model the real world with complete accuracy would be to understand the situation and motion of every component, the direction and strength of every force, and the interaction and result of every encounter. The computer doesn’t exist that can do this on a worldwide scale for anything so complex and variable as weather systems; predator/prey relationships and species variation and mutation; or political preferences among a diverse population of voters and non-voters.

Computer modeling, these days—and especially in relation to climate change and its effects, or concerning political outcomes—is an effort of prediction. The goal is not so much to describe what is going on now but to foretell what will happen in the future, sometimes by a certain date in November, sometimes by the beginning of the next century. Predicting the future is an age-old dream of mankind, especially when you can be the one to know what will happen while those around you have to grope forward blindly in the dark. Think of oracles spoken only for the powerful or the practice of reading tea leaves and Tarot cards for a paying patron.

But complex systems, as history has shown, sometimes revolve around trivial and ephemeral incidents. A single volcanic eruption can change the weather over an entire hemisphere for one or several years. A surprise event in October can change or sour the views of swing voters and so affect the course of an election. The loss of a horseshoe nail can decide the fate of a king, a dynasty, and a country’s history. Small effects can have great consequences, and none of them can be predicted or modeled accurately.

When climate scientists first published the results of their models showing an average global temperature rise of about two degrees Celsius by the year 2100, the counterclaims were that they focused on carbon dioxide, a weak greenhouse gas; that the models required this gas to produce a “forcing,” or positive feedback loop, that would put more water vapor—a more potent greenhouse gas—into the atmosphere; and that the models did not consider negative feedback loops that would reduce the amount of carbon dioxide or water vapor over time. The climate scientists, as I remember, replied that their models were proprietary and could not be made public, for fear they would be copied or altered. But this defense also rendered them and their work free from inspection. Also, as I remember, no one has since attempted to measure the increase, if any, in global water vapor—not just measured in cloud cover, but also by the vapor loading or average humidity in the atmosphere as a whole—since the debate started. And you don’t hear much anymore about either the models themselves or the water vapor, just the supposed effects of the predicted warming that is supposed to be happening years ahead of its time.2

Add models that, for whatever reason, cannot be evaluated and verified to the general trend of results from scientific studies that cannot be reproduced according to the methodology and equipment cited in the published paper. Irreproducibility of results is a growing problem in the scientific world, according to the editorials I read in magazines like Science and Nature. If claims cannot be verified by people with the best will and good intentions, that does not make the originally published scientist either a liar or a villain. And there is always a bit of “noise”—static you can’t distinguish or interpret that interferes with the basic signal—in any system as vast and complex as the modern scientific enterprise taking place in academia, public and private laboratories, and industrial research facilities. Still, the issue of irreproducibility is troubling.

And, for me, it is even more troubling that computer models and projections are now accepted as basic research and as scientific verification of a researcher’s hypothesis about what’s going on. At least with Tarot cards, we can examine the symbols and draw our own conclusions.

1. To which Korzybski added, “the word is not the thing”—a warning not to confuse models of reality with reality itself.

2. We also have a measured warming over the past decade or so, with peaks that supposedly exceed all previous records. But then, many of those records have since been adjusted—not only the current statement of past temperatures but also the raw data, rendering the actual record unrecoverable—to reflect changing conditions such as relocations of monitoring stations at airports and the urban “heat island” effects from asphalt parking lots and dark rooftops.
    As a personal anecdote, I remember a trip we made to Phoenix back in October 2012. I was standing in the parking lot of our hotel, next to the outlet for the building’s air-conditioning system. The recorded temperature in the city that day was something over 110 degrees, but the air coming out of that huge vent was a lot hotter, more like the blast from an oven. It occurred to me that a city like Phoenix attempts to lower the temperature of almost every living and commercial space under cover by twenty or thirty degrees, which means that most of the acreage in town is spewing the same extremely hot air into the atmosphere. And I wondered how much that added load must increase the ambient temperature in the city itself.

Sunday, October 4, 2020

Clever Words

Dissected man

Our politics is—and, I guess, has always been—susceptible to clever word combinations, puns, and rhymes that appear to tidily sum up a grievance, intended consequence, or course of action. For most of us, they are mere curiosities. But in my view they are treacherous if taken as a philosophy or a substitute for rational thought.

I’m sure there were chants and slogans that caught on during the American War of Independence, probably something to do with Indians and the tea shipments arriving in Boston Harbor. The slogan that comes readily to mind is from slightly later, the dispute with Britain in the mid-19th century over the Oregon border: “Fifty-four Forty or Fight,” referring to the latitude line that would define the hoped-for demarcation. I suppose it was just fortuitous that the map offered all those F’s and the opportunity for a stirring alliteration. If the border had been along the twentieth or thirtieth parallel, I guess the proponents would have had to come up with something else.

And then there is the modern-day all-purpose chant: “Hey-hey! Ho-ho! Fill in the Blank has got to go!” This one is particularly useful when a group of organizers want to stir up and direct a crowd. It’s got a rhythm that gets your arms and legs moving almost like a dance or a march step.1

To me, one of the worst substitutes for rational thought also comes from the 19th century, although a bit later. It is attributed to the journalist Finley Peter Dunne and his fictitious alter ego Mr. Dooley. In its shortened form it says: “The job of the newspaper is to comfort the afflicted and afflict the comfortable.” This formula, clever in its chiasmus-like reversal of verbs and objects, has been taken up by generations of progressives ever since. For some, it’s an exquisite summation of how they should heal social ills.

But this combination is, of course, nonsense. Clever, but still nonsense. It depends on a false equivalency: that the sufferings of the afflicted—the poor, the weak, the disabled, the denied and discriminated against—are directly attributable to the smug satisfactions of the people not so burdened. It presumes that those who have worked, saved, invested, and planned for the future of both themselves and their families—all of those middle-class virtues—have created conditions of poverty and injustice for those not so fortunate. And this is not so. Those who have taken up the virtues have simply removed themselves from the class of the destitute and the desperate, not caused their condition.

By all means, one should “comfort the afflicted.” Heal their hurts where it is possible. Work to change their current situation and their opportunities where you can.2 But at best, “afflicting the comfortable” serves only to remind them that an underclass exists in their society and that one should devote some portion of one’s day, one’s mind, and one’s charity—if not just one’s taxes—to alleviating the situation. “Afflicting the comfortable” is intended as fighting words, suggesting that by reducing their comforts a society can somehow magically improve the lot of the afflicted. And that magical thinking is just pure Marxism: Been tried; didn’t work.

Another set of fighting words, intended to stir up the complacent and draw them into a social battle, comprises the various formulas meant to fight social apathy: “If you’re not part of the solution, you’re part of the problem,”3 and more recently “Silence is violence.” Again, the unspoken purpose of the chant is the false equivalency: that those who are not actively joining the fight—on the side of, and under the terms of, the sloganeers—are causing the wrong and are in fact wrongdoers themselves.

These clever slogans are meant to give the great mass of people no choice. Join us or die—or worse, gain our everlasting contempt. They raise the issue in contention to the level of an existential crisis, a civilizational catastrophe, or a cause for civil war. However, for some of us, for many of us, perhaps for most of us in the middle of the political spectrum, who are spending our days doing all of that working, saving, investing, and planning for our own futures, in order not to be counted on the public rolls, the issue is not existential or catastrophic and does not merit a civil war. Yes, perhaps, the issue may demand our notice and concern. We might even add the deserving recipients to our list of charities or our list of considerations in the voting booth. But many of us, most of us, know that there’s nothing we can personally do about a lot of these social problems. We are not prepared to climb on the barricades, bare our breasts, and offer “our lives, our fortunes, and our sacred honor”4 to the project.

And no amount of clever words and scornful chants is likely to change that reality.

1. And in terms of serving multiple purposes, there is also: “No justice, no peace!” Simply pick your object of “justice,” and fill in your action for withholding “peace.”

2. But you have to be realistic about this approach. You can work to improve other people’s conditions sometimes, but that should not include a free ride or a lifetime’s residency on the dole. A taut safety net, not a soft and cushy safety hammock. Human beings are designed by a hundred thousand years of heredity to have personal goals and to seek satisfaction and self-worth through attaining them. No one—not children, not the mentally or physically disabled, nor the socially or economically disadvantaged—benefits from having their personal agency removed by a benevolent parent’s or government’s lifting and carrying them through all the vicissitudes of life.

3. Speaking of clever, I have always favored the chemist’s version: “If you’re not part of the solution, you’re part of the precipitate.” In other words, if you don’t join in this fight, you’ll be part of the fallout. Chuckle, smirk.

4. To quote from the last line of the Declaration of Independence, which for the signers did involve an existential crisis and, right quickly, a civil war.

Sunday, September 27, 2020

Monopoly Power

French marketplace

The Emperor Caligula was quoted by Suetonius as saying, “Would that the Roman people had but one neck!” Apparently so that he could hack through it more easily. Everyone wants to have control of their situation, and on the easiest possible terms.

In the business world, this tendency is represented by monopoly, where for the sake of simplicity, economy, efficiency, or some other perceived value there is only one producer or supplier of a particular category of goods or services, and by monopsony, where for the same set of reasons there is only one buyer. Think of the Defense Department and its need for complementary weapon systems, as opposed to individual purchases by each branch of the service, or by each military unit and base. Or the current drift toward single-payer medical coverage and its promise of cost reductions through the government’s negotiating power and volume purchases.

Monopolies have always enjoyed state support. The English crown, up until the 17th century, regularly granted royal favorites the monopoly trade in certain products, such as sweet wines in the Elizabethan period. And the British East India Company was granted exclusive trade rights in lands bordering the Indian Ocean. Americans did not generally favor monopolies until the widespread distribution of electricity in the late 19th and early 20th centuries, when it became inconvenient to have several power companies stringing wires up and down both sides of the street to reach their customers. It then became necessary to grant regulated monopolies to electricity and gas providers to systematize their distribution.

Generally, though, big players do better in any market. If a company making anything, from cars to soft drinks, reaches the position of first, second, or third in the marketplace, it will want to crush its competition and take all the customers.1 And the government likes a marketplace dominated by big players: they are easier to deal with, regulate, and tax.2 Certainly, government regulation tends to work against a field of small players, who do not have the legal and regulatory affairs departments or the budgets to lobby government, respond to regulations, and engage in defensive lawsuits.

While our government has officially been “antitrust” since the days of the Robber Barons and the interlocking directorates of various companies controlling, for example, the markets in coal and steel, government has turned a blind eye to amalgamation and unification in the labor market. There, different unions have banded together into effective monopolies on the labor supply for factory workers, service employees, and truck drivers. Again, big players do better in the market. They swing more weight. As individual union members join together in a giant, amalgamated union, they can speak with one voice. They can get more things done to their liking. They can have their way. And it’s actually a form of democracy—at least for the members of the union.

And where unions don’t exist, or have been withering for decades under our huge economic expansion, they soon may make a comeback as government increases its reach into the economy. For example, the current push for single-payer medical plans, or some version of Medicare for All, would make it easier for the nurses’ union to negotiate a favorable pay rate with a single government entity, rather than with a handful of large hospital corporations or thousands of local hospitals and clinics. And a government monopsony on health care would push the rest of the medical profession—doctors’ associations and collections of other health care specialists—into some form of consolidated negotiation or full unionization. It would also further the amalgamation of hospitals into larger corporations and combinations.

But while bigger may be better for the dominant players in the marketplace, the trend towards monopoly and monopsony isn’t necessarily good for the market itself.

First, when one product or system dominates, it tends to limit invention and technological progress. Success tends to make people conservative. Yes, monopoly players worry about some competitor coming along and beating them at their own game, but then their urge is to buy up, buy out, and shut down that competitor, or simply crush it by temporarily lowering prices on their own products. If AT&T (“Ma Bell”) had retained its monopoly on long-distance telephony and its ownership of the various local telephone companies (“Baby Bells”), its own manufacturing arm with Western Electric, and its research facilities with Bell Telephone Laboratories (“Bell Labs”), how soon do you think cellular phones, which are not dependent on wires at all and are instead a radio product, would have become available? The phone company would have crushed any radio product that needed to touch its phone system and landlines—except, possibly, for automotive radiophones, which would have been expensive and limited to very special users.

Second, in a monopoly situation, or under the conditions forced by a monopsony, employment choices are more limited. If you were a telecommunications technician or inventor in the Ma Bell era, you could either work for AT&T or find some other career. And if you disagreed with the company’s directives, choices, and planning, you could either speak out and find your career truncated, or you could keep your head down, rise in the organization, and hope to one day influence those decisions. Jumping ship to join a competitor or starting your own company with a better idea just wasn’t in the cards. The same goes for employees at NASA or your regulated local utility company.

Third, monopolies and monopsonies are almost always bad for the average person, the individual buyer, the customer, the person at the ultimate end of the supply chain. Where one organization has purchasing and pricing power over the market, the little guy accepts what he gets and pays what is asked. Not everyone wanted a Model T in “any color so long as it was black.” Not everyone wants a single choice of deodorant or sneaker. Not everyone wants the government deciding who will get a CT scan and when, because someone far up the food chain made a nationwide decision about how many CT scanners to buy for each county. People might appreciate efficiency, simplicity, economy, or some other overriding value in the abstract. But not everyone prefers white bread over pumpernickel, plain whisky over flavored vodkas, or the deodorant with a sailing ship on the label over any other brand. People like choices, making their own decisions, and deciding how and where to spend their money.

Fourth, and finally, monopolies and monopsonies almost never last. Sooner or later, the entrenched position becomes so cautiously conservative, so calcified, and so behind the times that a clever inventor can find a work-around: a new and disruptive product, a new marketplace, or a new champion. That’s happening all over the place these days, in the automotive world (hybrids, Tesla), in telecommunications (cell phones), in computers (laptops and tablets), in medicine (genetic analysis, personalized medicine), and in space exploration (SpaceX, Blue Origin). Big players become vulnerable unless they can also become nimble—not just crushing the competition but learning to dance with it.

Caligula’s desire for Rome to have just one neck, to make it easier for him to put his foot on and eventually to hack through, was the cry of every tyrant. But for anyone, even for a Roman emperor, life just isn’t that easy.

1. Unless, of course, the competition is good for the top players. Think of Coca-Cola and Pepsi-Cola, each of which benefited by fostering brand loyalty against the other. Or the “Big Three” auto makers, who sold more cars by competing with the other guys on styling, horsepower, or some other popular enhancement, thus churning the annual sales of new cars.

2. If you doubt this, remember the senator who complained about the inefficiency of a market that offered Americans a variety of products: “You don't necessarily need a choice of 23 underarm spray deodorants or of 18 different pairs of sneakers.” It’s much easier to manage an economy with fewer choices and a monopoly player making all the decisions for the folks doing the buying.

Sunday, September 20, 2020

The Truth

Total honesty

I have always believed in the truth: that one should try to understand it, to know and speak it whenever possible, and to accept it, even if the implications and consequences work against one’s own prior assumptions, beliefs, advantages, and one’s personal situation. I would rather know and follow the truth than be happy and whole in the shadow of ignorance or a lie.

It was this basic adherence to the concept of truth that kept me from following my grandfather’s career path—although he was a great believer in truth, too—into the law, which everyone in the family thought would be my future, because I was so verbal as a child. But as I grew older, I realized that a lawyer deals mainly in argument, precedent, and the intricacies of the law as a giant logical puzzle weighing rights and advantages. I knew or suspected that a lawyer must sometimes decline to know or search for the truth—the facts of what actually happened, which he or she is required to bring into court, if known—while working toward an argument or an interpretation of the known facts that will best serve the client’s purpose. Because that practice puts some gain above the human obligation to know and speak the truth, I knew the law was something in which I dared not dabble.

So I studied English literature and became a devotee of storytelling. Fiction, a made-up tale about made-up people, is not necessarily a violation of the truth. It is not exactly telling lies. An author telling a story is like a blacksmith forging an iron blade. The smith hammers away the surface scale, and with it the impurities that cloud the pure metal underneath. And so the author hammers away the alternate interpretations and contradictions of a life situation in order to reveal a pure fact, or sequence of events, or understanding of the human condition that the author recognizes as true.

When I write about “the truth” here, I am not referring to biblical truth, or revealed truth, or a studied construct made of equal parts belief and hope. I am talking about a summation of observations, of experienced cause and effect, of facts that have been seen and where possible tested and annotated, of things we know to apply in the real world. It’s an elusive thing, this truth, but something that I believe can be observed by one person, formulated into a statement or story, communicated to another person, and received by that second person as something that is apparently if not obviously real and congruent with known facts.

It is therefore an article of faith with me as a fiction storyteller and a nonfiction communicator that language can have adequate if not exact meanings, in terms of the denotation and connotation of words. That one person can share an idea, a realization, a piece of truth with another person through verbal means and not be completely misunderstood. Some misunderstanding may take place. Sometimes one person does not have the same meaning—denotation or connotation—for a word that the original speaker does. Sometimes the recipient of the thought has different ideas or beliefs that get in the way of examining the story or statement and perceiving it in the same way that the original formulator meant or intended. Accidents of language and intention do happen. But, on the whole, between people of fair mind and unbiased perception, communication of the truth is possible.

It is also an article of faith with me that truth exists outside of our personal, subjective perceptions. That is, truth is an object waiting to be discovered, analyzed, and discussed. It is not merely a personal belief that is particular to each person and changes with his or her perceptions based on personal needs and desires. Two people can point to the results of a scientific experiment, or an art form or artifact that was carved, painted, written, or created by a third person, or to an event common to their experience, and reach agreement as to its nature, purpose, and quality.1

Of course, I am not a fool. I do not believe that every truth can be discovered and stated. I understand that some things are the product of chance or probability and so can fall out one way or another. I understand the quantum physicist’s dilemma when dealing with very small, intimate systems, that the act of observing a particle in flight—usually by bouncing a photon or some other particle off it—changes the direction of flight immediately. So the physicist can know where a particle was but not where it is now or where it’s going.

And I do understand that humans, their perceptions and interpretations, and the things they hold to be important are constantly changing: that we do not live in the same world of values and feelings that was inhabited by the ancient Greeks and Romans, the Medieval or Renaissance Europeans, the ancient or modern Chinese, or the Australian Aborigines. Humans are exciting and varied creatures, constantly evolving and reacting to the products of their own minds, and this is not a cause for concern. But I hold it as a postulate that, given good will and a common language, they learn from each other, share ideas, and can arrive at an objective truth about any particular situation or experience. However, I also understand that this level of understanding may require one or more human lifetimes and leave little room for other explorations and understandings. That’s why we have books and can read.

At the same time, I do not believe that human nature changes very much. What people believe and hold to be real may be influenced by great thinkers, prophets, and teachers. Otherwise, the Islamic world would not have taken such a turn away from the Judeo-Christian tradition that was in part its heritage. But people still have basic needs, basic perceptions of fairness and reciprocity, and a basic sense of both the limitations and the possibilities of the human condition. Until we become incorporeal creatures of energy or immortal cyborg constructs, issues of life and death, family and responsibility, need and want, will be for each of us what they were in the time of our hunter-gatherer ancestors.

And yet there are also things we cannot know about each other. I cannot know what is really going on inside your head. Even if I know you well enough to trust your nature and sense your honesty, even if you use words well enough to express your deepest feelings accurately, there are still secrets people keep to themselves and never tell even their nearest and dearest. There are still secrets people hide away from their own conscious mind and that, like the movements of great fish in the deep waters, can only be discerned by the effects that these deep secrets have on their lives, their loves, and their mistakes and missed opportunities.

That is a lot of unknowing and things unknowable for someone who believes in the truth. But as I said, to be completely knowledgeable would take a library of great books and several lifetimes to read them. All any of us can do is try to start.

1. When I was studying English literature, back in the mid to late 1960s, we were taught what was then called the New Criticism. This was the belief that the work of a writer or poet—or a painter, sculptor, or musician—stood on its own and could safely be analyzed by any person with sense, feeling, and a knowledge of the language. This displaced the author’s own special authority over the work. The author’s claims about “what I meant” or “what I intended” might be interesting but did not define the work. Sometimes an author intends one thing but manages—through accidents of carelessness or vagaries of the subconscious—to achieve something else and sometimes something greater than intended.
    This is opposed to the literary criticism called “Deconstruction,” which has been taught more recently in English departments but is something I never studied. Deconstruction apparently teaches—at least as I understand it—that words, their usage, and their underlying reality are fluid rather than fixed. That they are so dependent on a particular time, place, and culture that trying to understand the author’s intended meaning, from the viewpoint of another time or place, is practically impossible. And therefore it is useless to discuss “great books” and their enduring value. That nothing is happening in any universally objective now, and everything is subject to current reinterpretation. This is, of course, anathema to me. It is a denial of any kind of perceivable truth.

Sunday, September 13, 2020

End of Days, or Not

Red-sky dystopia

This past week has been weird and depressing. A growing number of fires in California cast a pall of smoke into the atmosphere over the northern part of the state, like a high fog but with a gritty perspective in the middle distance and bits of ash floating silently down, so that your car’s hood and fenders are speckled white. There’s a cold, red-orange darkness at noon, like the fume out of Mordor, or like life on a planet under a red-dwarf star. You’ve seen the pictures on friends’ Facebook pages—not quite as apocalyptic as my stock photo here, but still disturbing. It’s like—and I think this is a quote from either J. R. R. Tolkien or J. K. Rowling—you can’t ever feel cheerful again.

On top of that, Monday was the Labor Day holiday. So, for those of us who are retired and only loosely connected to the working world’s rhythms, that day felt like a second Sunday. Then Tuesday was like a Monday, Wednesday like Tuesday, and what the hell is Thursday supposed to be? Glad as we are for a holiday, it throws off the pace of the week and makes everything feel subtly weird.

And then there are the overlying, or underlying, or background burdens of 2020. The pandemic drags on and on, so that we are isolated from family, friends, and coworkers, except through the synthetic closeness of a computer screen. We wear masks in public, so that we are all strangers to each other, even to the people that we know and would normally smile at. We avoid people on the sidewalk and in elevators, maintain a shopping cart’s distance at the grocery store, and feel guilty about touching a piece of fruit in the bin and then putting it back when we find a suspicious bruise. This illness, unlike any other, is not so much a matter of concern about personal safety as a national and social pall that has descended on everyday life.

Because of the closures, our robust, consumer-oriented economy has tanked, and we don’t know when it will come back. The stock market has revived from its swoon in the spring, apparently rising on shreds of pandemic optimism. But anyone who follows the market knows that these weekly swings of 500, 1,000, and 1,500 points on the Dow alone, with comparable lurches in the other indexes, just can’t be healthy. It’s like the entire investor class is cycling from mania to depression, too. Meanwhile, we all know people who have been laid off and are scrambling. We all have a favorite restaurant that is eking by with takeout service or a favorite shop that has closed, apparently for good. We all miss going downtown or to the mall for some “shopping therapy”—not that we need to buy anything, but we look forward to what the richness of this country and its commercial imagination might have to offer us. Buying dish soap, toilet paper, face masks, and other necessities off the Amazon.com website just isn’t as satisfying.

And then there’s the politics. The divisions in this country between Left and Right—emblematic of, but, strangely, not identical with, the two major parties—have grown so deep and bitter that friendships are ended and family relationships are strained. The political persuasion opposite to your long-held point of view has become the other, the enemy, and death to them! We are slouching, sliding, shoved inexorably into an election that has the two sides talking past each other, not debating any real points of policy but sending feverish messages to their own adherents. And whichever way the national polling falls out—with the complication of counting votes in the Electoral College from the “battleground states”—the result promises to bring more bitterness, more rioting, more political maneuvering, and perhaps even secession and civil war.1 There’s a deep feeling in the nation that this election will solve nothing.

The one ray of hope in all of this is that things change. This is not a variation on the biblical “This too shall pass.” Of course it will pass, but that does not mean things will get back to the pre-fire, pre-pandemic, pre-boom, pre-strife normal. This is not the “new normal,” either. There never was, never is, any kind of “normal.” There is only the current configuration, life as we have it, and what the circumstances will bring. But this is also not the “end of days.”

Every fire eventually burns out. The rains come, the ground soaks up their moisture, and the stubbornest embers are extinguished for another year. We may have other and worse fires—or possibly better drought conditions—next year, but this year’s firestorm will eventually be over. Yes, the ground is burned, homes are lost, and a number of lives and livelihoods are upended. But the ground is also cleared for new growth, and the way is clear for people to start over. As someone who sits on a forty-year pile of accumulated possessions and closets full of just “stuff,” I sometimes think a good fire is easier to handle than a clearing operation where I would have to weigh and consider every piece of bric-à-brac against future need or desire.2 Sometimes you just have to let events dictate what happens in your life.

Every plague eventually fades away. The virus or bacterium mutates into a harmless nuisance, our immune systems adapt to handle it, or medical science comes up with a vaccine, and a devastating disease disappears from our collective consciousness. Yes, we have death and disability in its wake. But death and disability come to us all, if not from Covid-19 then from the annual influenza, or a cancer, or accident, or other natural and unnatural causes. For those who survive, our lives and our attitudes become more resilient, more grounded, more able to take life’s hard blows. That which does not kill me makes me stronger—until it or something worse finally kills me. And this is the way of life on this planet. The essence of the human condition is that we have the self-knowledge, foresight, and insight to understand this, where for every other animal on Earth, life’s stresses are pure misery and death is the ultimate surprise.

Every economic downturn paves the way for growth. At least, that is the cycle in countries that enjoy free-market capitalism. “Creative destruction,” the watchword of economist Joseph Schumpeter, captures the vitality of markets that are able to respond to current conditions and meet the needs and demands of people who are making their own decisions. In my view, this is preferable to one person or group, or a committee of technical experts, trying to guide the economy and in the process preserving industries, companies, and financial arrangements that have outlived their usefulness but provide some kind of national, political, social, or emotional stability that this group values above letting the mass of people make their own decisions.

Every political crisis passes. Issues get resolved, the emotions die down again, and life goes on in uneasy balance. The new stability may not reflect the goals and values that you were prepared to fight for, actually fought for, or maybe even died for. But the resolution is usually a compromise that most people can live with … unless the end of the crisis is a terminal crash, a revolution, a civil war, and a crushing loss that results in a majority—or worse, a virtual minority—beating the other side’s head in and engendering animosities and unhealed wounds that fester for generations and destroy everyone’s equanimity. Sometimes the best we can hope for is an uneasy, unsatisfying compromise that will hold until the next round of inspirations and aspirations takes control of the public psyche.

There never was a normal, just the temporary equilibrium that kept most people happy, a few people bitter, and many people striving to make things better. There never is an “end of days,” because history has no direction and no ultimate or logical stopping place—at least, not until the human race dies out and is replaced by the Kingdom of Mollusks, if we’re lucky, or the Reign of the Terror Lizards, if we’re not.

1. But see my take on that possible conflict in That Civil War Meme from August 9, 2020.

2. I recently completed such an operation with a forty-square-foot storage locker that I was renting, and the exercise took three months and was exhausting. You stare at a book you once thought you would read, or a jacket you once wore as a favorite, and have to decide its ultimate fate. Sooner or later, you just have to let go and throw this stuff away.

Sunday, September 6, 2020

Counterclockwise

World turned upside down

The other day I was reading in Astronomy magazine one of my favorite features, “Ask Astro.” There readers pose questions about the universe and astronomy in general, and experts are called in to answer them. This one asked why the Sun orbits our galaxy in a clockwise direction, while the planets orbit the Sun in a counterclockwise direction.1 And that got me thinking about the arbitrary nature of directions and much else in our daily lives.

After all, the conceptions of “clockwise” and “counterclockwise” didn’t come into use until people started telling time with geared mechanisms instead of the angle of the sun, sand running through an hourglass, bells rung in a church tower, or candles burning down past inscribed markings. Clocks with gears were invented and reinvented using different driving forces—water, pendulums, springs—in ancient Greece in the 3rd century BC, in China and Arabia in the 10th and 11th centuries, respectively, and in Europe in the 14th century. The fact that most round clock faces count time by moving the hands from left to right—clockwise—is based on the usage of early sundials. These instruments track the Sun rising in the east and therefore casting the shadow from the sundial’s gnomon to the west. Then the Sun moves to the south at midday, casting its shadow to the north. And finally, the Sun sets in the west, casting the shadow to the east. All of this, of course, is predicated upon the person observing the sundial facing north as a preferred direction. This daily rotation of the shadow from west, or left, to east, or right, was so familiar that early clockmakers copied this movement.

Of course, all of these cultures that used sundials and invented mechanical clocks were spawned north of the equator and only lately spread those instruments to the cultures and European colonies established south of the equator in southern Africa, South America, and Australia. If those areas had been the home of a scientific, technically innovative, colonizing, and marauding culture, and if the peoples of the Eurasian continent had been inveterate stay-at-homes, then things would have been different.

Clockmakers originating in South Africa, Tierra del Fuego, or Australia might have faced in their preferred direction—south, toward the stormy seas and distant ice floes of Antarctica. And then they would have erected their sundials and drawn their clock faces based on the Sun rising at their left hands and casting a shadow to the west, moving to the north behind them and putting the shadow in front of their faces to the south, and finally setting at their right hands in the west and casting a shadow to the east. Their clock hands would have run in the direction we call “counterclockwise,” and the rest of the world would have followed suit. It all depends on your point of view, which is based on accidents of geography, demography, and historic migrations.
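If it helps to see the geometry at work, here is a minimal Python sketch of my own (not anything from the magazine) that computes the compass bearing of a gnomon’s shadow at mid-morning, noon, and mid-afternoon on an equinox, using simplified solar geometry and two latitudes I picked arbitrarily, one per hemisphere. At +40 degrees the shadow sweeps from northwest through north to northeast (clockwise), while at -40 degrees it sweeps from southwest through south to southeast (counterclockwise).

    import math

    def shadow_bearing(lat_deg, hour_angle_deg):
        # Compass bearing (degrees clockwise from north) of a gnomon's shadow
        # at the equinox (solar declination = 0). The Sun's azimuth, measured
        # from due south and positive toward the west, follows the standard
        # horizontal-coordinate formula A = atan2(sin H, cos H * sin(latitude)).
        lat = math.radians(lat_deg)
        H = math.radians(hour_angle_deg)
        sun_from_south = math.degrees(math.atan2(math.sin(H), math.cos(H) * math.sin(lat)))
        sun_bearing = (180.0 + sun_from_south) % 360.0  # convert to a compass bearing
        return (sun_bearing + 180.0) % 360.0            # the shadow points away from the Sun

    for lat in (40.0, -40.0):  # illustrative latitudes, one per hemisphere (my assumption)
        bearings = [round(shadow_bearing(lat, h)) for h in (-45, 0, 45)]  # morning, noon, afternoon
        print(f"latitude {lat:+.0f}: shadow bearings {bearings}")

Running it prints bearings of roughly 303, 0, and 57 degrees for the northern case and 237, 180, and 123 for the southern, which is the whole point of the thought experiment above.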

What else might have been different based on these historic accidents?

Certainly, our book texts and traffic signs reflect differing points of view. We in the European-based and -influenced world read texts from left to right, pretty much the same as the movement of our clock hands. But this was not universal. If we had kept the alphabets and scripts of the ancient Hebrews and Arabs, writing from right to left and orienting our books from what we would consider the back cover to the front, then our literary world would be different and we would stack our library shelves in a different order. But we don’t, because we follow the Latin practice of the Roman culture that dominated the western, and eventually the eastern, end of the Mediterranean Sea and surrounding lands.

The earliest writing forms were different yet again. The Egyptians wrote in both rows and columns, depending on whichever was more convenient, and indicated the direction in which the symbols were to be read by the way that the animal signs—birds, snakes, and so on—faced at the top or side of the text. And anyway, hieroglyphs were for the priestly and aristocratic classes, intended to preserve the thoughts of important people for their future generations, and not for just anyone to read and understand. Early cuneiform writing from Mesopotamia was written from top to bottom and right to left, although the direction later shifted to left to right. Chinese, Japanese, and other Asian scripts are generally flexible, written left to right when in horizontal rows, or top to bottom in columns, with those columns mostly read from right to left—although sometimes also left to right.

Ancient Greek was the most practical of all, because texts were written and read from left to right for the first line, right to left for the second, back to left to right for the third, and so on. This was economical because the reader lost no fraction of a second of brain time having the eyes finish one row of letters at the right end and then track back to the left side of the page to start anew. This form of writing was called “boustrophedon,” or literally “as the ox plows.” Like most things Greek, it was eminently sensible—but it never caught on elsewhere.
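For what it’s worth, here is a toy Python sketch of the idea, my own simplification: it wraps a passage into lines and reverses the reading direction of every second one, as the ox plows. Real boustrophedon inscriptions also mirrored the letterforms, which this ignores.

    def boustrophedon(text, width=40):
        # Lay out text "as the ox plows": wrap it into lines of roughly
        # equal width, then reverse every second line so the eye never
        # has to track back to the start of the next row.
        words, lines, current = text.split(), [], ""
        for word in words:
            if current and len(current) + 1 + len(word) > width:
                lines.append(current)
                current = word
            else:
                current = f"{current} {word}".strip()
        if current:
            lines.append(current)
        return "\n".join(line if i % 2 == 0 else line[::-1]
                         for i, line in enumerate(lines))

    print(boustrophedon("Like most things Greek it was eminently sensible but it never caught on elsewhere"))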

And then, as to the shape of our books themselves, consider that what we think of as a “book” is really an invention of medieval monks with their manuscript codices,2 followed by Gutenberg and his printing press in Europe of the 15th century. Because Gutenberg was printing single broad sheets, folding them into pages, stacking them, and sewing the stacks together in a continuous, linear format, we have the modern book. Gutenberg probably inherited the idea of printing itself from Chinese books of pasted pages that were developed in the Song Dynasty around the 11th century.3

Before that, the Romans, Greeks, Hebrews, and just about everyone else wrote on scrolls. These were rolled up and packed into cubbies on library shelves, identified for the searcher by clay or wax tags attached to the tube ends. I have often thought that the order of the books we read in the Old and New Testament is rather arbitrary—except for Genesis, of course—and originally was based on whatever scroll you happened to pick up next. Someone must have written out a “cheat sheet” somewhere to direct you to some kind of chronological order after Genesis and throughout the New Testament. But things became easier when the pages were put in neatly linear order in a single sewn book.

A lot of the world we inhabit today—from clock faces, to the way we write, to which side of the road we drive on, to the shape of our keyboards—is pretty much a matter of geography, demography, and perspective. And the solutions we live with are not always the most convenient and sensible.

1. Short answer: The planets formed out of a cloud of dust and gas that started to spin in a particular direction—counterclockwise, when viewed from “above,” or from the Sun’s “north pole”—as it collapsed. But that gas cloud was already moving in another particular direction—clockwise, when viewed from “above” or “north” of the galactic plane. The opposite motions are more or less separate, arbitrary, and related to your point of view.

2. Codices (plural of codex) were handwritten single pages that were grouped together and bound between two wooden boards, as opposed to the rolled scrolls used in earlier times.

3. Printing was presumably invented by the Chinese about four hundred years earlier, with the entire page carved from a single block of wood. Of course, this was just an advanced form of the rolling seals and stamps that had been in use for thousands of years. Carving a single page made sense when individual Chinese ideograms numbered about 50,000—too many to sort and select as single pieces of type. However, by the Song Dynasty the Chinese printers were doing just that. Gutenberg, with only twenty-six characters to choose from, plus capitals and punctuation marks, had an easier time with movable type.

Sunday, August 30, 2020

Absent the Middle Class

Girl with magic box

I was born just after the Second World War, which means I grew up and became politically aware—or at least what I think of as “aware”—in the Eisenhower, Kennedy, and Johnson administrations. This was a time when the United States was the “last man standing” among the nations that participated in the war, and we came out better than any on either side. We had our infrastructure intact and had built up a huge capacity in raw materials like steel and aluminum as well as manufacturing due to the war effort. We were on top of the world.

That was also the time when the middle class in America was doing its best. Soldiers returning from the war were getting free education on the GI Bill. Homes were being built in newly defined and rapidly expanding suburbs. Business was booming and, even with the returning soldiers, jobs were plentiful. Most people—there were exceptions, of course, especially in the Jim Crow South—were prospering as never before. It was the good times.

The middle class is a relatively new thing in human history. It didn’t really develop until political and social structures had changed: urban life became commonplace, rather than the exception; and capitalism, the free market, and international trade became encoded with commonly accepted practices and rules, rather than just things that happened casually at the village level. The middle class was the place where people who were not nobles and landowners, yet too ambitious and too well educated to remain peasants, could find profitable employment and eventually riches by engaging in large-scale trade outside of selling butter and eggs on market day, or manufacturing outside of single-family cottage industry, or taking on the new roles in banking and legal transactions that supported these intermediate activities.

The middle class was for people with hopes and ideas, for those who sought independence from the old social classes, for those who wanted to do better than their fathers and grandfathers, for those who hungered to prove that they were as good as anyone and a damned sight better than most. It was the class of the feisty ones.

From Roman times and into the Middle Ages and then the Renaissance, the landed class, the nobles and the gentry, despised these strivers. Going into trade or handling money professionally was all about getting your hands dirty. And while anyone might admire a legally trained mind in the Roman Senate or a lawyer at court doing the king’s business, the sort of person who argued about the price of injury to a cow or the placement of a fence line was little better than a conniver and a con man in his lordship’s domain.

And of course the peasants, lately serfs, and still working the land that their fathers had farmed and sharing the proceeds with the lord of the manor, all viewed members of the middle class as social upstarts, the big men from town, whose fathers might have been the local blacksmith or miller, and whose grandfathers had been serfs like the rest of us. People who wore britches and waistcoats rather than the peasant’s smock were already getting too big for themselves.

So the middle class has been under suspicion and under fire for a long time. It wasn’t just idle animosity that made Karl Marx and the other socialists of the 19th and 20th centuries despise the middle class with its striving and materialistic values as “bourgeois”—which is just the French word for this class—or worse, “petit bourgeois,” as if they were too small to be significant. And why not? When the politics you’re selling involves state ownership of the means of production, and puts them all in the hands of appointed technocrats, or the revolutionary vanguard, or the modern equivalent of Plato’s philosopher kings, then the people who know how to handle their own or their neighbors’ business practices and money, who will start new enterprises simply because they think they can make a profit from them, and who will obey rules but not wait patiently on instruction from their betters—these people are the bureaucrats’ natural enemies. These are the people who will upset the serenely floating boat of socialistic doctrine and practice. And so these are the people who must be the first to go up against the wall.

And the peasants, the modern blue-collar workers, the ones who are content to do what they are told and lack the ambition or the education to go out and start their own businesses, even as house painters and contractors—they will be quite happy to work in a factory owned by the moneybags class with protections from their union, or work in the factory owned by the state with those same protections in place according to state law, and still have their union—if that’s even needed. The fate of those middlemen, professionals, and entrepreneurs is irrelevant to the new peasant class, at least at the surface of their minds.1

The middle class has always been in, well, the middle, between two classes that would just as happily see it disappear. And the middle class is disappearing these days. Not only is the upper class getting bigger—in terms of its power if not its numerical size—with wealth beyond the dream of kings and emperors of old. But the lower class is also getting bigger, with more people finding it harder to get the education and the good jobs that will enable them to enter the middle class as professionals, business owners, and independent traders. It is getting harder to own a house rather than rent, buy a new car instead of a used one or a lease, ensure your children a good education, take annual trips on your vacation—if you even get one while working two jobs—and plan for a comfortable retirement.

The middle class is being squeezed. Whether this is a planned process or just the natural course of modern economics,2 it’s happening. It has been going on in every decade of my life since I became politically aware. And I don’t know if it’s because the upper class and the Marxists do well when the majority of the people are more dependent on government and the largesse of big corporations than on their own initiative, or because we’ve lost something of the entrepreneurial spirit that fed bright and hopeful people into the middle class.

But something’s missing. And neither the top nor the bottom seems to notice or care.

1. If the peasant or the blue-collar worker thinks deeply, however, he will wonder where the technologies and inventions of the modern age—electricity, telephones and televisions, personal computers and smartphones, numerous medical advances, and easy credit and banking—all came from, if not from the entrepreneurial spirit of those who have their philosophical roots, if not their family background, in the middle class. But I digress …

2. As Robert A. Heinlein noted: “Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded—here and there, now and then—are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty. This is known as ‘bad luck.’ ”