Sunday, November 29, 2020

The Wichita Lineman

Powerlines

Glen Campbell’s “Wichita Lineman”—a poignant song from my college days—has been running through my head. And after ten years working in the electric and gas utility business, back in the 1980s, I still can’t get the inconsistencies out of my mind.

I am a lineman for the county
And I drive the main road,
Searchin’ in the sun for another overload.

Where to begin? Well, first, “lineman” is not usually a county function. In most service territories, the line crews are employed by the public utility that owns, operates, and maintains the powerlines. So I thought we had a problem with the first line. But, on investigation, I found that Wichita is served by the Sedgwick County Electric Cooperative, so in that case the lineman probably does work for the county.

But to continue … In most utilities, the linemen don’t drive around looking for trouble. That job—which is a promotion above the classification of lineman—goes to a “troubleman.” This person works alone, as in the song, and does patrol the distribution and transmission lines. And when the troubleman finds trouble, he—sometimes nowadays, but rarely, a she—calls out a line crew to fix it. Note that while some powerlines, especially distribution lines in neighborhoods, have rights of way along “the main road,” most of the higher-voltage transmission lines cut across country. So the troubleman spends more time on back roads and in the dirt than on easy main roads.

And then, what kind of trouble does this person look for? An “overload” implies a powerline that is carrying too much electricity. That is, the utility is operating it at too high a load for conditions. Electricity passing through a wire meets resistance and sheds some of its energy in the form of heat. On hot and windless days, the heat can start to melt—or at least soften—a one-inch cable braided from aluminum strands, so that it sags between towers or poles and threatens to touch the trees, brush, or even ground along the right of way. That would be an obvious problem. And maybe in Kansas, in 1968 when the song was written, utility operators wouldn’t know when they were overloading their lines and causing fires. These days, powerlines are monitored by remote sensing equipment and the operator varies the load to match conditions.
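The physics behind an overload is simple resistive heating: the power shed as heat grows with the square of the current (P = I²R). A back-of-the-envelope sketch in Python, with illustrative numbers only (the resistance and current figures here are assumptions for the example, not data for any real line):

```python
# Resistive heating in a powerline: P = I^2 * R.
# All figures below are illustrative, not real line data.

def heat_per_km(current_amps: float, ohms_per_km: float) -> float:
    """Watts dissipated as heat in each kilometer of conductor."""
    return current_amps ** 2 * ohms_per_km

R = 0.1  # ohms per kilometer, a plausible figure for a braided aluminum cable

normal = heat_per_km(400, R)    # normal loading
overload = heat_per_km(800, R)  # doubled current

# Doubling the current quadruples the heat -- which is why an
# overloaded line sags on a hot, windless day.
print(normal, overload, overload / normal)
```

The square law is the whole story: a modest increase in load produces a disproportionate increase in heat, which is why operators now vary the load to match conditions.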

What most troublemen are looking for is the location of a known fault. Sometimes, in a high wind, two of the three phases on the line—carried in the three separate wires strung from pole to pole—come together, short out, and cause a fault. Then, in a modern powerline, a device called a “recloser” works like a fuse: it pops, interrupting the flow of current, then tries to close again in case the touch was only momentary. If the fault persists, it stays open. The troubleman finds the hanging recloser, gets out a long extension pole, and closes it. But more often the trouble is a line brought down by someone driving into a pole, or the wind causing a tree branch to fall into the lines and bring them down. Then the troubleman calls out a line crew to fix it.
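The recloser’s try-again behavior can be sketched as a tiny state machine. This is a toy model, not real protection-relay logic; the retry count and return values are assumptions for illustration:

```python
# Toy model of a recloser: on a fault it opens, then tries to
# reclose a few times in case the fault was only momentary.
# If the fault persists, it locks out and waits for the troubleman.

def recloser(fault_clears_after: int, max_retries: int = 3) -> str:
    """fault_clears_after: how many reclose attempts pass before the
    fault clears (0 = momentary touch, clears immediately)."""
    for attempt in range(1, max_retries + 1):
        if attempt > fault_clears_after:
            return "closed"     # line back in service
    return "locked out"         # persistent fault: call out a line crew

print(recloser(0))  # momentary touch of two phases -> "closed"
print(recloser(9))  # tree across the line -> "locked out"
```

A momentary fault clears itself on the first retry; a downed line stays faulted through every retry, and the device locks open until someone closes it by hand.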

I hear you singin’ in the wire;
I can hear you through the whine.
And the Wichita lineman is still on the line.

I understand that you can sometimes hear the wind in the wires and think it’s your loved one singing. If you hear a whine or a buzz, then you’re not dealing with a neighborhood distribution line. More likely it’s a high-voltage transmission line, and the sound is caused by moisture or dust carrying some of that current outside the wire and along its surface, like a giant Tesla coil. And yes, a lineman or troubleman might hear this too.

I know I need a small vacation
But it don’t look like rain.
And if it snows that stretch down south won’t ever stand the strain.
And I need you more than want you,
And I want you for all time.
And the Wichita lineman is still on the line.

All right, this one’s a definite no. Well, maybe in Kansas, where the rain comes straight down and the wind doesn’t blow. In California, however, rain means winter storms with lots of wind. That means lines coming down and lots of trouble to repair. You don’t want your troublemen and linemen going on vacation then. You want everyone on call and ready to roll at all hours of the day or night. And maybe, in Kansas on the prairie, you can get so much snow that it weighs down the lines or the towers and causes them to collapse. But if a troubleman knew of a weakened section vulnerable to such weather, he would have put in an order to repair or replace it. Maybe the utility company or cooperative has not yet executed that order, but prudent management suggests you do that while the sun is shining and not wait for the snow to take out part of your system.

And then, this lineman is thinking about his lady love and needing her and wanting her. Some senior member of the crew is going to slap him upside his hardhat and tell him to get his mind on the job and get back to work. That’s the reality.

But then, “Wichita Troubleman” would have been a different type of song. And none of it would have rhymed.

Sunday, November 22, 2020

On Abortion

Human embryo

I try not to be too overtly political on these blogs, because I have friends and readers on both sides of the aisle. Also, I am generally in the middle on most issues, polling about three points right of center on a scale of one hundred either way. But this issue has been batting around in my head recently, and writing it down is a way to get it out.

First, let me say that I’m not generally against killing. I mean, we humans do it all the time. I eat meat and feel unashamed. I support war as the last resort of the beset and desperate. War means killing. War means horror. But war is what you do when negotiations break down and surrender is not an option.

Like most Americans, I think, I am not opposed to abortion when it’s done in the first trimester. At that time the embryo is still developing and doesn’t have much of a nervous system, so not a lot of sentience. While I believe life begins at conception, the early-stage fetus is still just the potential for human life. A lot of things can go wrong in a pregnancy and do. And a miscarriage in the first trimester is more a dashing of the parents’ hopes than the destruction of a human person. Still, I don’t like abortion as a birth-control option, because it’s invasive and it seems to teach the woman’s body how to miscarry. But if she is beyond the option of less invasive measures and still wishes not to be pregnant, that will be her choice.

The second and third trimester become, for me, more problematic. The fetus is developing a nervous system, sensation, and—if we can believe the memes that would have you play Bach to your belly, talk to it, and send happy thoughts in that direction—some self-awareness. Destroying a fetus in these circumstances becomes more like the murder of a human person than the destruction of a clump of cells. I have some feelings about that, and so do many Americans, I think.

Abortion at the moment of birth—what “partial-birth” abortion would seem to advocate—is, in my mind, really the killing of a viable human baby. I understand that the birth may be induced for this purpose, but that does not make it better. I also understand that the people who advocate for this are less concerned with the mental or physical health of the mother than they are with the legal standing of the abortion issue. They are absolutists and legal purists: if abortion in principle is not made an absolute right at all stages of a pregnancy, then it can be challenged and overturned at any stage, including the moment of conception.

I am not an absolutist or a purist about anything. So the appeal of this argument leaves me cold. I believe people should be responsible for their actions: if a woman decides she does not want a child, she should make up her mind in the first three or four months, not wait until the baby is almost born. Bending the law and common sense to accommodate her every whim is not good practice.

Also, abortion at the end of a pregnancy crosses a line that, I think, most moral people are unwilling to cross. If it’s acceptable to kill a baby at the moment of birth, then why not two weeks later? Does the child keep you up at night? Do you have regrets about becoming a parent? Smother it! Does your two-year-old daughter throw tantrums and grate on your nerves? Drown her! Did your sixteen-year-old son borrow the car and dent it? Shoot him! Did your sixty-year-old daughter tell you it’s time you were put in the old folks’ home? Stab or poison her!

Again, I’m not completely opposed to killing—at least not when it’s done with cause. But I do believe people should take responsibility for their actions. And their response to pressure, aggravation, and opposition should be proportional to the incident, where casual murder would be an extreme reaction. While I don’t have an absolute respect for life—again, not absolutist about anything—I do believe being careful and respecting the rights of other sentient beings, especially among your own species, is a moral good. It’s something to strive for, even if we cannot always attain it.

Now, many women are also saying, with reason, “My body, my choice.” This is to say that other people, men in particular, and society in general, have no right to an opinion in the matter of abortion, nor should they be allowed to make rules about it. And, in my view, this is true up to a point—that point being somewhere in the second trimester. Until then, the fetus might be considered just a “clump of cells,” not much different from a tumor, and certainly without a separate identity or viability, perhaps with the potential for humanity but not exactly a human being. But after that point—and we can debate where to draw the line—the fetus becomes a separate entity with sensation, some awareness, and more than just potential. At that point, the woman is hosting, sharing her body with, another living being. And whether you like the biological facts or not, that becomes a societal concern.

To say, “My body, my choice” about the entire process, up to and through the stage of actually giving birth, is like someone saying, “My dog, my property.” After all, they own the dog, bought it, fed it, took care of it in their fashion, and can now decide what to do with it. If a dog owner wants to beat it, starve it, leave it out in the cold chained to a tree, or even abandon it alongside the road, then society should have no say in the matter. The dog is a wholly owned possession that may be disposed of at the owner’s whim. Right?

That’s one view of the legal relationship between ownership and responsibility, but most of us would disagree. A world where such mind-your-own-business callousness is the societal norm would be a cold and unfeeling place, without pity or concern for the weak and defenseless.

That is not a world I want to inhabit.

Sunday, November 15, 2020

The Unexpected Candidates

Puppet master

Something very strange is going on. Or, put another way with more emphasis, what the hell is going on? Or, as we used to say back in, I think, the 1960s or early ’70s, “Who the hell for President.” Simply stated, the American electorate over the past decade and maybe more has been choosing, or perhaps being offered, the most surprising, least expected, and sometimes least qualified candidates for the highest office in the land.

The Presidency is the most prominent and most powerful popularly elected position in the country. It ranks above the Speaker of the House, who is elected only by members of the House; above the Majority Leader in the Senate, who is elected only by senators of the majority party; and the Chief Justice of the Supreme Court, who is appointed by the President and confirmed by the Senate. Of all the key players in our national government, the President is the only one we all get together and choose, first in the primary elections or party caucuses in each state and then in the national election.

Yes, the Republican National Committee and the Democratic National Committee have great influence on how the candidates of each major party are chosen. The national committees solicit and direct funding for campaigns and write the rules for party organization and for choosing delegates to their national conventions, where input from the primaries is reduced to votes for and against potential candidates. And sometimes the national committees, whose members and influence may not be publicly recognized—that “smoke-filled room” thing—put their fingers on the scale. In both parties, the votes at the national convention include both “pledged” delegates, representing results of the primary election in their state, and “unpledged” delegates, who presumably can vote their conscience, or the desires of the party structure, or whatever.

Up until 2018, the Democrats had a large number of “superdelegates” in this position, representing members of Congress, governors, and past Presidents. They could vote however they themselves wanted or at the direction of the party. After 2018, the superdelegates were forbidden to vote on the first ballot of the convention, effectively letting the people decide that much, unless the outcome was beyond doubt. In 2015, the Republicans ruled that unpledged delegates had to vote in accordance with the popular vote in their state’s primary election.

And then, there is the matter of whether the state holds an open, semi-closed, or closed primary election, reflecting when and how people not registered with a particular party can vote for the candidates of other parties. Only thirteen states and the District of Columbia have closed primaries, where the voter is only offered the choice of candidates within his or her registered party affiliation. Fifteen states have semi-closed primaries, where only independent voters may choose among candidates on any of the affiliated ballots, or may change their registration on election day. Fourteen states have open primaries, where the voter chooses the party ballot on election day. Others, including my own California, have some kind of blanket primary, where the voters choose from a roster of all candidates from all parties.

So how much actual choice any individual voter has in the selection of the final candidates put forward on the November ballot is open to question.

Still … what the hell has been happening? Sometimes, the party’s candidate has been around so long, raised so much money, or tried often enough that the national committee, the primary voters, and the delegates decide that, come what may, “it’s his [or her] time.” This is apparently what happened when the Republicans selected the cold-natured Bob Dole to run in 1996 and Democrats promoted the unlovable and sometimes questionable Hillary Clinton in 2016. In both cases, party loyalists had to grit their teeth and vote the platform. At least, both candidates had solid careers in the Senate, and Clinton had been Secretary of State, a high and influential office in any administration.

But in 2008, the Democrats nominated Barack Obama, a junior senator with limited government experience, with sealed transcripts and a ghost-written autobiography—but selected presumably because he was the only obvious Black candidate, and “it was time”—and the Republicans nominated John McCain, an established senator from Arizona but one who had voted against his party’s interest so often that he felt like an independent.

In 2012, the Republicans nominated Mitt Romney, a businessman, son of the former governor of Michigan, and chief executive of the organizing committee for the 2002 Winter Olympics. He was a nice enough guy, but still not ready for the presidency.1

In 2016, the Democrats finally decided it was Hillary Clinton’s “time,” narrowly excluding the senator from Vermont, Bernie Sanders, whose party affiliation is officially “independent” and who unabashedly claims to admire socialism. And the Republicans passed over a dozen able candidates with political experience, including governors and senators as well as a nationally prominent businesswoman with executive experience, to choose Donald Trump, a real estate magnate and reality-television star with no background in electoral politics.

And then in 2020, we almost got Sanders as the Democratic nominee, but he was passed over at the eleventh hour in favor of Joe Biden, a long-time senator, vice president under Obama, previous candidate for president—but also a man of obviously frail and perhaps failing mental and physical health. If it was “his time,” that was sometime in the past. Biden was joined on the Democratic ticket by Kamala Harris, the junior senator from California and former state attorney general, who dropped out of the field of presidential candidates before her first primary. These are hardly charismatic personalities.

It used to be that candidates for the highest office in the land would have extensive political experience, usually as a governor running one of the larger states or as an influential and long-serving member of Congress, at least as a senator. But lately we have seen a parade of candidates chosen for some other reason. And not all of them have outstanding service in some other line of work, such as Dwight Eisenhower in the 1952 election after a leadership role in winning World War II.

It is almost as if the parties, or the people themselves, are devaluing the office, saying “Who the hell for President.” And this is at a time when Congress defers more and more of the details in the laws it passes to the judgment of unelected bureaucrats in the Executive branch and lets the legality of those laws be decided in cases before the Supreme Court. You would think that the person who appoints the senior executives in the administration, sets its day-to-day tone, can veto legislation, and nominates the federal judges and Supreme Court justices should be a person of proven capability, probity, and reasonable judgment.

Instead, we seem to get more than our fair share of nonentities and, sometimes, thinly disguised crooks and buffoons. Who chooses these people? What the hell is happening?

1. And it was only in the last year or so that he became the junior senator from Utah, gaining the political experience that he should have had eight years ago.

Sunday, November 1, 2020

Electricity's Dirty Secret

Power lines

For the decade of the 1980s I worked in the Corporate Communications department of the Pacific Gas & Electric Company, PG&E, one of the largest energy companies in the country, with a service territory covering most of Northern California. One of the biggest things I learned from this time—aside from the fact that your local utility is made up of good people who support their community—is that there are many ways to generate electricity and the key to choosing among them is economics rather than technology.

By a quirk of geography and history, PG&E had—and still has, for all I know—one of the most diversified generating systems in the country, although some of that generating capacity has since been spun off to private owners and suppliers. The company inherited a network of dams and flumes in the Sierra Nevada that provided powerful water jets for hydraulic gold mining in the 19th century, and these were easily converted to run turbines in hydroelectric powerhouses up and down the mountains. It had four large thermal power plants—steam boilers driving turbine generators—that drew on the company’s abundant natural gas supplies for fuel. PG&E also operates smaller units that burn the gas to directly drive turbines, similar to a grounded jet engine attached to a generator. It built a major nuclear power plant at Diablo Cove in San Luis Obispo County, and built almost two dozen generating units drawing on The Geysers geothermal steam field in Sonoma and Lake counties. It draws power from the world’s largest photovoltaic power plant, on the Carrizo Plain, also in San Luis Obispo County, and from the Shiloh wind-power farm in the Montezuma Hills along the Sacramento River in Solano County, among others. The company buys electricity from the Western Systems Power Pool (WSPP) and the California Independent System Operator (CAISO).

With all of this diversity, PG&E’s energy cost is relatively low, depending on factors like snowfall in the Sierras to feed those dams and the state of the aquifer feeding the steam fields. The company does not draw on enough renewable energy—yet—to be much affected by variations in wind and sunshine across the state.

If the state ever fulfills its promise to get rid of all fossil fuels and provide all power from renewables like wind and solar, the remaining nuclear and geothermal assets will not be able to make up the difference from those abandoned gas-fired power plants.1 There is talk of making up the difference from windless days and dark nights with some kind of energy storage: batteries, compressed air in underground chambers, or superconducting materials that let an electrical charge chase round and round in a donut-like torus. None of these technologies has been tried or proven at any scale needed to supply a utility grid. There is also talk of mandating solar powered roofing in all new housing and retrofitting existing roofs, with transformers to convert the electricity to household current and with batteries to supply energy on dark days and at night. Aside from the initial cost and payback time, generally measured in tens of years, these plans are intended—at least in the promoters’ dreams—to put the local utility entirely out of business.2

The dream of “free electricity” without fuel costs or emissions, using wind and solar power, runs into some basic engineering realities involving energy efficiency and capital cost.

In making these technologies work, the engineer has to move from conceptual design—linking up components, energy flows, and costs in back-of-the-envelope calculations and drawings—to detail design—putting the components in place at the right scale, establishing the true cost of each component, and accounting for variables like heat loss and line losses.3

Engineers constantly work with another variable set, too. For them, there is no such thing as perfection, no solution that is best under all conditions. Everything is a tradeoff in the engineer’s world. Instead of “good” and “bad,” the engineer thinks in terms of “better” and “worse.” You can make electricity with a gasoline generator—if the EPA and county authorities will approve it—or with a hand crank, or by rubbing a silk scarf on a glass rod. The question is always—and this is what I learned at PG&E—at what site, with what investments, and using what fuel supply at what cost? How attractive or interesting or politically correct the technology might be is not a factor.

Solar photovoltaics—generating an electric current by using the energy in sunlight to pass an electron through a semiconductor substrate—is about 20% to 22% efficient, even in cells and panels of the highest quality. This means that nearly four-fifths of the solar energy that falls on them is lost to heat or reflection. And how that efficiency is affected by dust or a layer of snow and ice is still undetermined in large-scale applications, although probably not to good effect. Perhaps, in time, research into new materials can boost that efficiency up to maybe 30%, but much beyond that doesn’t seem to be in the cards.
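In round numbers, that efficiency figure translates directly into usable watts per square meter of panel. A sketch, assuming the common engineering reference of roughly 1,000 W/m² for full midday sun; the efficiency and soiling figures are illustrative, not measurements:

```python
# Usable power from a solar panel: insolation x area x efficiency,
# derated for dust or soiling. All figures are illustrative.

INSOLATION = 1000.0  # W per square meter, full midday sun (reference value)

def panel_output(area_m2: float, efficiency: float,
                 soiling_loss: float = 0.0) -> float:
    """Watts delivered by a panel of the given area."""
    return INSOLATION * area_m2 * efficiency * (1.0 - soiling_loss)

clean = panel_output(1.0, 0.21)        # one square meter at 21% efficiency
dusty = panel_output(1.0, 0.21, 0.05)  # same panel with a 5% soiling loss
print(clean, dusty)
```

One square meter of 21%-efficient panel yields about 210 watts at high noon; a light coat of dust shaves that further, and clouds, morning, and evening cut it far more.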

Wind turbines can capture at most 59.3% of the energy in the wind (the theoretical Betz limit), and well-designed machines approach 40% to 50% in practice. This is comparable to the energy efficiency of a gas turbine or thermal power plant. But wind farms require the right conditions, a place with strong, steady, and predictable winds. Like a geothermal steam field, such locations are a resource that can’t be established by fiat or political rezoning. And wind turbines, like any machine dealing with strong forces, are subject to mechanical stresses on the blades and shafts. Although their energy resource is free, the capital investment to harvest it is expensive, not easy to maintain—that is, a heavy generator on a tall tower, sometimes sited on a hilltop, is harder to fix than a turbine generator under cover in a power plant—and subject to depreciation and eventual replacement.
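Why siting matters so much falls out of the standard wind-power formula: the power in the wind grows with the cube of wind speed, and the turbine captures only a fraction of it, capped by the Betz limit. A sketch with illustrative numbers (the rotor diameter and power coefficient here are assumptions, not any real machine’s specifications):

```python
import math

# Power captured by a wind turbine:
#   P = 0.5 * rho * A * v^3 * Cp
# rho = air density, A = rotor swept area, v = wind speed,
# Cp = power coefficient (theoretical max ~0.593, the Betz limit).
# Figures below are illustrative, not a real machine's specs.

RHO = 1.225  # kg/m^3, air at sea level

def turbine_power(rotor_diameter_m: float, wind_speed_ms: float,
                  cp: float = 0.4) -> float:
    """Watts captured by the rotor at the given steady wind speed."""
    area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * RHO * area * wind_speed_ms ** 3 * cp

# Halving the wind speed cuts the power by a factor of eight.
strong = turbine_power(80, 12)  # roughly 2 MW at a steady 12 m/s
weak = turbine_power(80, 6)
print(strong / 1e6, weak / 1e6, strong / weak)
```

That cube law is why a site with strong, steady winds is a scarce resource: a spot with half the average wind speed yields an eighth of the power from the same expensive machine.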

Either of these fuel-free, renewable resources would require the participating utility to maintain a commensurate amount of “spinning reserve”—an alternate, dispatchable generating resource all fired up and ready to come on line instantly to pick up the system load when the wind dies or the sun goes behind a cloud. In most cases, this reserve power would have to come from fossil fuels, because the small amounts of electricity available from hydro and geothermal power, and the supply from an operating nuclear plant, would already be spoken for. And some form of “battery backup” on a systemwide basis is not currently technically or economically feasible.

And finally, fusion—the dream of limitless energy by harvesting hydrogen isotopes from sea water and compressing them with laser blasts or electromagnetic fields—is still ten years away. Always ten years away. Yes, we can make deuterium and tritium fuse with either compression technology; we just can’t make them give off more energy than we must put into the reaction. For now, it seems, the only way to fuse hydrogen into helium reliably is to compress it in a steep gravity field, like the inside of a star. Until we find some magical gravity-manipulation technology, utility-scale fusion is going to remain a dream.

All of these renewable technologies—except for fusion—have their place in a diversified system. None of them is ready, yet, to satisfy all of our energy needs. And a modern economy runs on ready availability of energy the way ancient economies ran on resources of clean water and food. Maybe in a few hundred or a thousand years, when we have run out of conveniently obtained fossil fuels, we will develop efficient and low-cost solar4 or fusion power. But for right now, we run on bulk carbon energy.

And no amount of wishing will make it otherwise.

1. Of all the fossil fuels, natural gas is the most efficient in terms of high energy output with low carbon dioxide emissions. This is because the methane molecule (CH4) burns completely, breaking all of its carbon-hydrogen bonds as the methane is converted into carbon dioxide and water. Other carbon sources like coal and oil either burn incompletely or tend to put soot particles and other contaminants into the atmosphere along with greater amounts of carbon dioxide.

2. Of course, manufacturing plants that need large amounts of electric power to run their operations—more than rooftop solar can supply, like steel mills, auto factories, shipyards, and other heavy industries—can either run their own generating stations or leave the state.

3. Building a solar- or wind-power farm—whose energy resource and efficiencies are generally weaker than a thermal plant’s, and which will generally have to be sited some distance from the end user—must take into account energy lost to resistance and heat on a transmission line. This is usually accounted at 5% to 15%, depending on distance traveled.
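The accounting in that footnote is straightforward: the power that arrives at the end user is the power sent minus the transmission loss. A sketch bracketing the 5% to 15% range with an assumed plant size:

```python
# Transmission losses: power delivered = power sent * (1 - loss fraction).
# The 5% and 15% figures bracket the range cited in the footnote;
# the 500 MW plant size is an assumption for the example.

def delivered_mw(sent_mw: float, loss_fraction: float) -> float:
    """Megawatts arriving at the end user after line losses."""
    return sent_mw * (1.0 - loss_fraction)

print(delivered_mw(500, 0.05))  # a short run from plant to load
print(delivered_mw(500, 0.15))  # a long haul from a remote site
```

On a long haul, a nominal 500 MW plant delivers only about 425 MW, which is why remote siting quietly raises the real cost per delivered kilowatt-hour.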

4. Probably from orbit, as in my novel Sunflowers, where sunlight has an energy density of 1,300 watts per square meter instead of the 130 W/m2 that strikes the Earth’s surface.

Sunday, October 25, 2020

Are Empires Always Evil?

Roman arm

If you read science fiction, the Empire is always evil, the Emperor is always a villain, and his officers and minions—we’re looking at you, Darth—are always either toadies or supervillains. It was so in the Star Wars movies and the Dune books. Generically, if there is an empire involved in the story, it is a bad place and meant to be fought against by the forces of light, reason, and goodness.

Perhaps this is a cultural spillover from the political view—generally held by Marxists and Soviet-inspired Leftists—that all the troubles of the modern world stem from “imperialism.” And by that they usually mean the empires built by white Europeans in Africa, the Middle East, Asia, and South America. The equation is: “Empire bad, local governance good”—even when local governance is at the tribal level without any political refinement. And that equation holds right up until the empire in question is one managed by Soviets or Chinese Communists, and then the benefits of central control by a foreign power structure are not to be questioned.

The cultural spillover also derives from the depiction of Rome and its ancient Mediterranean empire from the Judeo-Christian viewpoint. That is, from the troubles the Romans faced in the province of Judea, particularly when Rome tried to impose its statist, polytheistic religion on people who only believed in one, true god. This dispute ended with the Siege of Jerusalem in 70 A.D. and the Jewish Diaspora. That jaundiced view of the Roman Empire was also fed to us by the persecution of Christ under Pilate and of Christians in general under the empire—until Constantine legalized their religion three hundred years later.

But was Rome an evil empire? Was life there such a hardship?

First, let’s count the negatives. For starters, most people outside the City of Rome itself were added to the empire through conquest. You started off by trading with Rome at a distance, then getting a road built into your territory, then seeing an army march in along that road, and then you had to fight for the right of self-determination. Sometimes the army came first and the road came second—to make it easier for Rome to send reinforcements and hold you down. Almost nobody welcomed Rome at first. But let’s be fair: when the Romans marched in, what they were fighting was mostly the local king, the ancient families who held positions of power, and the armies they could recruit and command. Whether the war was short—as in a few campaigns by Caesar among the Transalpine Gauls—or long—as in all that unhappiness in Judea ending in the reduction of the capital and a bloodbath—was usually a matter of whether and how involved the average person, the peasant in the fields, became in the struggle. That, and the cohesive nature of the civilization that the Romans were attempting to absorb. Gallic and German tribesmen were culturally similar but independence-minded and locally divided, and by the standards of the day they were primitive. Judea was an advanced civilization with a unified culture, strong central government, and firm beliefs.1

Next, the issue of slavery. Rome had it and didn’t apologize for that. But then, so did most of the lands and kingdoms they conquered. But, unlike the South in the United States, Roman slavery was not race-based. Just because you had a certain heritage and skin of a certain color did not make you a slave, subject to harassment and capture even after you were freed. You entered captivity by losing a battle—all those wars of conquest—or by resisting so strongly that the Romans made an example of your whole family or town by selling them into slavery. Or you could become a slave after being found guilty of a crime or through indebtedness—having pledged your person as collateral for a loan. Still, a Roman slave was property and could be abused, sexually exploited, tortured, and even summarily executed—although it generally didn’t profit an owner to damage or destroy his or her property. But also, Roman slaves could earn their freedom, and Rome eventually legislated slave protections such as being able to lodge complaints against their masters and to receive medical care in sickness and old age. And finally, in the ancient world, as in much of the world today, unless you held a piece of property or were trained and engaged in a skill or trade, you always had someone standing over you and making demands on your labor, your time, and ultimately your life. Still, it was better to be a citizen of Rome than anyone’s slave.2

And then, there was tribute. As a Roman province, you were put under the administration of a governor known as a propraetor or proconsul—usually an ex-consul or senior government official out to make his fortune after years of public service. The Roman administration was there mostly to collect tribute—so much to be paid each year in gold or trade goods—or to secure some necessity that the City of Rome needed, such as grain from Egypt, which was the ancient world’s breadbasket. Along with the governor and his administration came the tax collectors, who were not always honest and not always working directly for Rome. It was hard being someone from an old family, landed, wealthy, or otherwise locally important in a newly established Roman province. But, as noted above, life was hard all over—still is in many ways.

And now, some of the good things. First, you were generally cleaner and safer inside the Roman Empire than out of it. The Romans were creative and compulsive engineers, and wherever they went they took with them their construction skills and their preference for clean water and a relaxing bath. They built huge aqueducts not just to serve the City of Rome but throughout the empire to provide clean water and introduce the concept of regular bathing to the general population. And you tended to be safer because the Roman administration frowned upon casual banditry—an occupation reserved to the state—and introduced a proven code of laws suitable to civilized urban living.

Next, your worldview and access to trade expanded. The Romans transmitted knowledge and trade goods from one end of the Mediterranean basin to the other and into the hinterlands beyond. If you were part of the empire, you were a citizen of the world. That meant, for a person with ambition, an increase in opportunity and income. And for a citizen, either in the city or the countryside, who might not have owned a piece of property or engaged in a lucrative trade, there was always the army. You signed up for 25 years of service with the legion. After that time, if you survived, you were generally awarded land and a living in the province where you had fought or maintained order—and by then you usually had a local wife and children. Being a Roman soldier was more dangerous than being, say, a farmer out in the hinterlands—except for that casual banditry—but it wasn’t a death sentence, either. The Roman legions fought with a disciplined cohesiveness and regular tactics that tended to minimize wounding and death and favored applying massive and concentrated force against their enemies. It was good to be on the winning side.

And finally, if you were a good ally and willing supporter of Rome, you eventually became a Roman citizen yourself. You had to bathe, speak and read Latin, and obey the law, of course. No hot-headed rebellion—which anyway would be quickly crushed, at least while the Republic and then the Empire were a going concern. Eventually, you could move to Rome itself and become part of the elite. And the consensus seems to be that, in the ancient world, the best time to be alive was Rome in the second century—that is, between 100 and 200 A.D. Not only was the weather mild—the “Roman Warm Period”—but the Mediterranean world was generally at peace. It was a lull between the political chaos of the Hellenistic Age and the rising cold and invading barbarians of the encroaching Dark Age.

There is a reason people submit to the rule of empires and emperors. Whether the Islamic Caliphate, the Mongol Empire, the Ottoman Turks, or the British Empire, the food is usually better, the arts and sciences richer, the trade more expansive, the rule of law generally gentler and less oppressive than the dictates of a local king or brigand, and the average person has a sense of being part of something really grand. Also, under the Romans, you got a hot bath, and under the British, a flush toilet. Not bad for minding your own business and occasionally tugging the forelock.

1. And Egypt was just a mess, having been conquered by Alexander three centuries earlier and then mismanaged by the Ptolemies.

2. The taint of slavery did linger, however, even after a person was set free through the process of manumission. “Freedman” was a separate class in Rome from “citizen,” although freedmen who had previously been owned by Roman citizens could vote and their children became citizens. Still, in the Republic it was rumored that the general and statesman Gaius Marius, one of the “New Men” whose family originated in the allied Italian states and not in the City of Rome itself, had slaves in his ancestry. This was considered a blot on his character.

Sunday, October 18, 2020

Too Many Superheroes

Superhero

It’s no secret that our movies, television, and to some extent also our popular fiction are inundated with superheroes.1 The main characters—or the essential focus of the story—are people with some physical or mental enhancement: super strength, x-ray vision, the ability to fly, increased lifespan, or genius-level perception. And I would include here people who are otherwise separated from the human race by exceptional circumstances: vampires, witches, fallen angels, and the victims of medical experimentation.

These movies, television shows—series, I guess you call them now, with extended story arcs—and books are aimed at the young adult, the middling young, and the young at heart. The trouble is that, in my view, they tend to arrest the normal human development from child to functioning adult.

Life’s problems, which all of us must deal with, cannot be solved by punching through walls, seeing through doors, outsmarting your enemies with a genius IQ, or becoming immortal. A functioning adult has to use the skills and knowledge developed through hard work, proper choices, and good use of time in order to gain confidence, capability, and self-esteem. These things cannot be granted by birth on another planet, a medical advance, or a fortuitous afterlife. There are no shortcuts to growing up.

One of my favorite science-fiction series is Frank Herbert’s Dune books, telling the fantastic far-future history of the accomplished Atreides family. The series actually climaxes in the fourth book, God Emperor of Dune. The main character there is Leto II, who is the ultimate superhero: emperor of the known universe, served and protected by fiercely loyal people, commanding a superb fighting force, as well as being virtually immortal, physically invulnerable, able to predict the future, and able to access the living memory of every one of his ancestors and so the entire history and example of all humanity. And yet, in Herbert’s brilliant style, he is brought down by two skilled but not super-powered human beings who resist being his slaves. The book is really the anti-superhero story.

To be an adult is to possess hard-won knowledge, to develop skills that cannot be acquired magically or through a pill or genetic manipulation, to have endured experiences that are both constructive and destructive and enable you to know and understand the difference, and to become adept at foreseeing and dealing with the consequences of your actions. All of this must be learned. It must be acquired by having hopes and dreams, working toward them, and sometimes—maybe often—seeing them dashed. It is acquired through working through your problems, paying attention to what happens and when, remembering those consequences, and formulating rules of living both for yourself and your children, if you have any. This is the process that every child, every young adult, and every post-adolescent goes through. If you are lucky to survive, you keep learning and updating your internal database through adulthood and into middle and old age. Perfecting who you are should never stop until you draw your last breath.

And that is the final lesson. To be an adult includes the sober knowledge and acceptance of the fact that you, personally, in your own self, will one day die.2 This is not a cause for grief, fear, rage, or despair. Humans die, animals and plants die, bacteria and funguses can be destroyed, cell lines come to an end. Even rocks and whole mountains wear away to dust and silt, then break down into their component atoms, and rejoin the cycle of life on this planet. In my view, this is the key understanding of the human condition. We are not immortal. We have no lasting power over death, only good fortune and small victories. We only have the strength of our bodies, the power of our intelligence, and the focus of our wills. That is all we human beings can command.

When you know that you will eventually die, then you know how to value your life, your time, and your effort here on Earth. To be willing to sacrifice your life for something you believe is greater than yourself, you have to know how to value your remaining time. This is a rational decision that our brains were designed to make—if they are not clouded by the veil of hope that we, in our own bodies, just might be immortal. That hope protects us when we are young and stupid and have little experience of death. It is a foolish thing to carry into adulthood and middle age, when we are supposed to know the truth and act accordingly.

Oh, and in addition to what we can command and accomplish as individuals, we can also work together, pooling our achievements and our knowledge over time. We can raise vast cathedrals, each person adding his own carved stone or piece of colored glass. We can build a body of scientific knowledge by researching and writing down our findings in a discipline that we share with others. We can join a company—in the oldest sense of that word, whether an economic enterprise, a body of troops, or a group of travelers—to attempt and achieve more than a single human can do. And if we cannot do any of these things directly, then we can support the efforts of others by mixing mortar for their cathedral, serving as an archivist of their scientific endeavors, or becoming the financier, accountant, or quartermaster to that company in whatever form it takes.

Any of these tasks shared with other humans requires a knowledge of self and your limitations, a willingness to hold your own dreams and desires in check and subordinate them to the common will, and a readiness to take and give orders for the good of the common effort. And this is another aspect of becoming an adult: to put aside the me-me-me of childhood and adopt the us of a collaborative group.

Superheroes, in fiction and on the screen, leap over these everyday problems and concerns. If they experience disappointment and existential angst at all, it is usually focused inward, on their supposed powers and their failure when they meet a foe who exhibits a greater power. But it’s all a conception of, and played out in the mind of, the graphic artist, the writer, or the film director: the presumed power, the challenges, and the intended result. And, curiously enough, the superhero always manages to win in the end. That is the way of fiction.

Real life involves dashed expectations, failed attempts, physical and mental limits, rejection by loved ones, and sometimes rejection by society itself. It is what a person does with these situations, using only the strength and wits, skills and knowledge, that he or she has acquired through conscientious development, that marks a successful human being. And ultimately the extinction of body and mind comes for us all. If you’re not dealing soberly with these things—and superheroes don’t—then you remain a species of child.

Those developing-adult stories, dealing with growth and change, are really the ones worth telling.

1. In fact, about fifteen years ago, when I was still trying to find an agent for my science-fiction writing, one potential candidate asked, “Who is your superhero?” That was the literary mindset: the main character had to have extraordinary powers for any book that could hope to be optioned for a movie—and back then selling a million copies and making it to the big screen had become the sole purpose of publishing. Maybe it still is, for all I know. But Covid-19 and the closing of the theaters might change all that.

2. I believe I first read—perhaps in a Heinlein story, maybe Stranger in a Strange Land, although I can’t find the reference—that the difference between a child and an adult is the personal acceptance of death. To that, one of the characters in the conversation replies, “Then I know some pretty tall children.”

Sunday, October 11, 2020

Modeling Nature

Mandelbrot fractal

A saying favored by military strategists—although coined by Polish-American scientist and philosopher Alfred Korzybski—holds that “the map is not the territory.”1 This is a reminder that maps are made by human beings, who always interpret what they see. Like spies filing reports or tourists writing postcards home, the mapmaker tends to emphasize some things and neglect or ignore others. Human bias is always a consideration.

And with maps there is the special consideration of timing. While a surveyor’s work, anchored to major geographic features like mountain peaks and other benchmarks that stand for thousands of years, may be reliable within a human lifespan, the mapmaker is taking a snapshot in time. From one year to the next, a road may become blocked, a bridge collapse, a river change course, or a forest burn—all changing the terrain and its application to a forced march or a battle. If you doubt this, try using a decades-old gas station map to plan your next trip.

This understanding should apply doubly these days to the current penchant for computer modeling in climatology, environmental biology, and political polling. Too often, models are accepted as new data and as an accurate representation—and more often a prediction, which is worse—of a real-world situation. Unless the modeler is presenting or verifying actual new data, the model is simply manipulating existing data sources, which may themselves be subject to interpretation and verification.

But that is not the whole problem. Any computer model, unless it becomes fiendishly complex, exists by selecting certain facts and trends over others and by making or highlighting certain assumptions while downplaying or discarding others. Model making, like drawing lines for topographic contours, roads, and rivers on a map, is a matter of selection for the sake of simplicity. The only way to model the real world with complete accuracy would be to understand the situation and motion of every component, the direction and strength of every force, and the interaction and result of every encounter. The computer doesn’t exist that can do this on a worldwide scale for anything so complex and variable as weather systems; predator/prey relationships and species variation and mutation; or political preferences among a diverse population of voters and non-voters.

Computer modeling, these days—and especially in relation to climate change and its effects, or concerning political outcomes—is an effort of prediction. The goal is not so much to describe what is going on now but to foretell what will happen in the future, sometimes by a certain date in November, sometimes by the beginning of the next century. Predicting the future is an age-old dream of mankind, especially when you can be the one to know what will happen while those around you have to grope forward blindly in the dark. Think of oracles spoken only for the powerful or the practice of reading tea leaves and Tarot cards for a paying patron.

But complex systems, as history has shown, sometimes revolve around trivial and ephemeral incidents. A single volcanic eruption can change the weather over an entire hemisphere for one or several years. A surprise event in October can change or sour the views of swing voters and so affect the course of an election. The loss of a horseshoe nail can decide the fate of a king, a dynasty, and a country’s history. Small effects can have great consequences, and none of them can be predicted or modeled accurately.

When climate scientists first published the results of their models showing an average global temperature rise of about two degrees Celsius by the year 2100, the counterclaims were that they focused on carbon dioxide, a weak greenhouse gas; that the models required this gas’s modest “forcing” to trigger a positive feedback loop, putting more water vapor—a more potent greenhouse gas—into the atmosphere; and that the models did not consider negative feedback loops that would reduce the amount of carbon dioxide or water vapor over time. The climate scientists, as I remember, replied that their models were proprietary and could not be made public, for fear they would be copied or altered. But this defense also rendered them and their work free from inspection. Also, as I remember, no one has attempted to measure the increase, if any, in global water vapor—not just in cloud cover, but also in the vapor loading or average humidity of the atmosphere as a whole—since the debate started. And you don’t hear much anymore about either the models themselves or the water vapor, just the supposed effects of the predicted warming that is supposed to be happening years ahead of its time.2

Add models that, for whatever reason, cannot be evaluated and verified to the general trend of results from scientific studies that cannot be reproduced according to the methodology and equipment cited in the published paper. Irreproducibility of results is a growing problem in the scientific world, according to the editorials I read in magazines like Science and Nature. If claims cannot be verified by people with the best will and good intentions, that does not make the originally published scientist either a liar or a villain. And there is always a bit of “noise”—static you can’t distinguish or interpret that interferes with the basic signal—in any system as vast and complex as the modern scientific enterprise taking place in academia, public and private laboratories, and industrial research facilities. Still, the issue of irreproducibility is troubling.

And, for me, it is even more troubling that reliance on computer models and projections is now accepted as basic research and scientific verification of a researcher’s hypothesis about what’s going on. At least with Tarot cards, we can examine the symbols and draw our own conclusions.

1. To which Korzybski added, “the word is not the thing”—a warning not to confuse models of reality with reality itself.

2. We also have a measured warming over the past decade or so, with peaks that supposedly exceed all previous records. But then, many of those records have since been adjusted—not only the current statement of past temperatures but also the raw data, rendering the actual record unrecoverable—to reflect changing conditions such as relocations of monitoring stations at airports and the urban “heat island” effects from asphalt parking lots and dark rooftops.
    As a personal anecdote, I remember a trip we made to Phoenix back in October 2012. I was standing in the parking lot of our hotel, next to the outlet for the building’s air-conditioning system. The recorded temperature in the city that day was something over 110 degrees, but the air coming out of that huge vent was a lot hotter, more like the blast from an oven. It occurred to me that a city like Phoenix attempts to lower the temperature of almost every living and commercial space under cover by twenty or thirty degrees, which means that most of the acreage in town is spewing the same extremely hot air into the atmosphere. And I wondered how much that added load must increase the ambient temperature in the city itself.
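    The physics behind that hunch is simple enough to sketch. An air conditioner does not destroy heat; it moves indoor heat outdoors and adds its own compressor work on top. Here is a back-of-envelope calculation of that effect, with purely illustrative numbers of my own invention (the coefficient of performance and the citywide cooling load are assumptions, not measurements of Phoenix):

```python
# Back-of-envelope estimate of heat rejected by citywide air conditioning.
# All figures are illustrative assumptions, not measurements.

cop = 3.0                 # assumed coefficient of performance of typical AC units
cooling_load_mw = 1000.0  # assumed citywide cooling load, in megawatts (hypothetical)

# A unit removing Q watts of heat indoors consumes roughly Q / COP watts of
# electricity, and it rejects the sum of both quantities to the outdoor air.
electric_input_mw = cooling_load_mw / cop
heat_rejected_mw = cooling_load_mw + electric_input_mw

print(f"Cooling delivered indoors: {cooling_load_mw:.0f} MW")
print(f"Heat rejected outdoors:   {heat_rejected_mw:.0f} MW")
```

    Under these assumptions, every megawatt of cooling delivered indoors dumps about 1.33 megawatts of heat into the outdoor air. Whatever the actual numbers, air conditioning can only warm the city as a whole.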