Sunday, June 28, 2020

The Antifanatic

[Image: Apple tease]

Everyone these days is talking about “Antifa,” the “antifascists,” who enforce their views with bricks and fists. Well, I have a cure for that. It’s the realization that no one is morally or ethically or humanly required on any and all occasions to speak out, speak loud, speak proud. I call it the “Antifanatic.”

The call to action—putting your heart and soul into discussion or pursuit of a particular proposition—is all too often the call for absolutism. Distinctions are pared away. Qualifications are eliminated. Shades of gray are painted over. The pursuer—or defender—is left with a nice, clear, easy-to-understand, impossible-to-refute “yes” or “no.” Everyone is either with us or against us. Do or die. Death or glory. … And this is a remarkable renunciation of the human capacity for reason.

That said, you don’t want to enter into hard battle, a shooting war, life or death, fight or flight, while you are beset by doubts. When the bullets are flying your way, you don’t want to be—and you don’t want those on your side of the trench line to be—questioning, maybe, your responsibility, just a little bit, and your moral justification for receiving those bullets. When war comes, you put your quibbles aside and take up the gun.

But not every social situation is a war. Not every injustice is a cause for rebellion and revolution. And that seems to be the problem today, in our politics, our economics, our culture, and our social connections. Every issue rises to the level of New York Times 48-point Cheltenham Bold Extra Condensed all-caps headline font, usually reserved for a declaration of war. The volume is turned up to twelve on a scale of ten. And so distinctions and qualifications drown in the noise of drumbeats.

I say this because most of life is gray. Living is a matter of distinctions and qualifications. Choices are seldom “yes” or “no” but more a case of “better” and “worse,” of “maybe” and “it depends.” In very few cases is the benefit or the risk absolute; rather, it is a matter of calculation. And that is what the human brain was developed for, after a million years of deciding how to live in the world. In the hunt, there is no perfect knowledge and no obvious choice. Instead, the game may be in sight but just a bit too far off, or with wind or foliage interfering. There is seldom a perfect moment for the shot, just a collection of perceived opportunities and risks. And in the gathering of foodstuffs, there is no single, perfect berry or tuber to be chosen and plucked, only a selection of those on either side of optimum size or ripeness. How willing a person might be to eat an overripe apple, and perhaps eat around a spot of brown decay or a maggot on one side, depends on how hungry that person is. Circumstances and perspective are everything.

But it’s hard to get a political movement or a religion—and these days they can often be confused—started with an overlay of rational analysis, the weighing of distinctions and qualifications, the necessary consideration of “maybe” and “it depends.” The political leader wants his followers to shout “Yes!” The pastor wants his congregation to give a big “Amen!” That’s how you build a compliant crowd—by paring away individual thoughts and distinctions and cutting away any middle ground.

As a writer, I do not have the luxury of retreating into absolutes, of indulging the human craving for pure black or pure white. The writer’s job is to see multiple viewpoints, or so I believe. My characters are not—should not be, cannot be—creatures of simple tastes and absolute beliefs. My heroes need to have doubts and failings. My villains must have thought and reason.1

Every human has a story. Most humans—except those who were mentally impaired to begin with, or have lost their minds through fever, dementia, or other illness—have reasons for their choices and actions. The story may be tragic and leave the person bruised and selfish. The reasons may be faulty and the choices hasty. But those are all clues to character. Every human being is a ball of string set in the glue of circumstance and opportunity. And it’s the writer’s job with an imagined fictional character—or the psychiatrist’s, with a real patient—to untangle or cut through the string, to see how the glue seeps into the layers of choice and desire, and to find the main strands of personality. This is the excitement of exploring a character as he or she navigates the obstacles of a plot and action.

The fanatic would dismiss all of this discernment and analysis as irrelevant. The fanatic pushes a single thread, a single viewpoint, to the edge of absurdity—and sometimes goes right over into virtual madness. Any position taken as an absolute will lead to absurdity. In a world of “zero tolerance” about personal weapons, a nail file or a butter knife becomes a verboten possession. In a world of absolute support for a woman’s right to choose, strangling a baby at birth becomes a reasonable option. In a world of absolute justice without compassion or mercy, a man can be hanged for stealing a loaf of bread. Fanatics always end up defending positions that any reasonable person would find questionable, unjustified, and sometimes abhorrent.2

Reason is the human birthright. The ability to think and act as an individual, rather than as part of an arm-waving crowd, is every person’s prerogative. This is why human society is so varied and diverse: we each see a different set of distinctions, shades of gray, opportunities and obligations, benefits and risks.

Let’s keep it that way.

1. This is one reason I don’t write fantasy. It’s too easy to paint the villains as mindless evil for evil’s sake. Even a psychopath or a sociopath has a background story and a reason for his or her choices. In The Lord of the Rings, for example, the titular character and driver of the entire story line, Sauron, is never shown as a viewpoint character or even as any kind of living presence. Sauron is pure evil, malice without reason, power for power’s sake, the dark cloud that stretches in the east. We never get inside Sauron’s mind because there is no human mind there, just the darkness. This is not a failure on Tolkien’s part, but Sauron is the one character in the whole saga who operates without a discernible motivating cause and cannot be mistaken for human. Sauron is the fire, the storm, the mindless force of raw nature, not someone with whom we can identify—or bargain, negotiate, and reach agreement.

2. One of the reasons I am not a serious collector of things—stamps, coins, Hummel figurines—is that the initial attraction and the “Hey, neat!” response of the beginner eventually becomes the feverish pursuit of the rarest, least obtainable, and sometimes the ugliest or most flawed member of the defined group. (This is because ugly or flawed things usually don’t get produced in large numbers. Cf. the U.S. “Inverted Jenny” 24-cent airmail postage stamp.) I want to preserve the beginner’s initial fascination and avoid the fanatic’s ultimate heartbreak.

Sunday, June 21, 2020

The Static Viewpoint

[Image: Perspective]

I sometimes wonder if the average human mind, born and raised into a particular place and time, has actually caught on to the reality that everything in life, the universe, and your own personal patch of ground is changing. All the time. With or without—usually without—your help. We all live in a kinetic universe. So get over it.

The adjective static and the related noun stasis come to English from Greek, by way of Latin, from a root meaning “to stand.” And since this is not standing up or sitting down, rising or falling, coming or going, or moving from place to place, it implies “to stand still.” Everyone is guilty, depending on the subject and the circumstances, of thinking that the world around them hasn’t changed or won’t change, or, when observing the change for themselves, of hoping that it will eventually reverse and things will come back to being the same.

To start with, most parents are by turns amazed and disappointed when their children grow and develop from cute babies to fretful toddlers, to winsome seven-year-olds, to moody, snarling teenagers. We want them to stay at each biddable stage and not progress. Sometimes they grow up to be adults we recognize and with whom we can become friends. But sometimes they grow up into strangers and even enemies. This is the pattern of life. And wouldn’t we hate it if our child failed to develop, through autism or some intellectual or emotional disability, so that he or she remained a charming imbecile, or a ranting child, in an obviously adult body? That would be the tragedy.

You can see this penchant for static thinking in the heritage of the oldest religions. The world of gods and men is fixed and unchanging. Yes, there are storms and earthquakes, wars and plagues, all of which change the fortunes of individuals and families, but the nature of the world itself remains static. You see this most plainly in the creation myths, where the supreme deity creates the land and the animals and plants that live on it, and they are all one-offs. A horse is the representative of a fixed pattern—essentially the Platonic idea of horse-ness—that was set at its creation and named as “the horse.” Yes, there are obviously related animals like donkeys and zebras, but they, too, were supposedly singular creations. And the horse is designed from scratch with teeth and stomach to eat grass and with neat hooves to run over the gentle turf in valley bottomlands. Just as camels were designed from scratch with broad, floppy feet to walk on shifting sands and with fat-storing humps that let them survive many days between drinks.

Of course, the religious writers and theorists of two and three thousand years ago did not have a true appreciation for the age of the Earth. For them, creation was just a few hundred generations ago, at most, or perhaps a few thousand years in the distant past. A world that cooled out of the solar disk four and a half billion years ago would have been a thing beyond their understanding. Usually, they did not even have words or concepts for numbers that large or spans that long.

They also did not have a conception of tectonic forces, as I have noted elsewhere. If your world is just a few thousand years old, you can watch hills slump down in continual landslides, and mountains erode into silt and wash out to sea, without ever wondering why, over time, all the mountains and hills have not become flat. While you can see the agents of erosion and collapse, you never see the forces that built up the mountains and hills in the first place. It’s only when you have an appreciation for the great age of the planet and a working theory of plate tectonics—which wasn’t widely accepted until the 1960s—that you can understand the vast changes that take place over time on any part of the Earth’s surface and the necessity for all of these “created” life forms to have some mechanism for adapting and evolving to fit their current environment.

But antique religious explanations are not the only place where static viewpoints are held and nurtured. Political thinking of many types is bound up in the notion that the world and human consciousness do not or should not change.

Most obviously, there is the “conservative” view. It is a truism that conservatives1 favor tradition, that we are not offended by the vagaries of recent history, by the values we were taught and grew up with, or by the world as it presents itself to us. We are not ready to throw all of that—our parents’ world, or Western Civilization, or political values that pertained before we were born—away on the promise of a brighter, or at least different, and perhaps utopian future.

Conservatives are not always backward looking—or “reactionary” in the words of Marxists and Hegelians, meaning opposed to the just and natural evolution of liberal human thought. Conservatives can view the future with equanimity and a certain amount of hope. But we also know that history is not a just-so story, that it does not always bend in a particular direction, or not always in the direction that certain political parties—followers of those 19th-century Marxists and Hegelians—believe to be natural and inevitable. Conservatives can understand that history, and much of reality, is cyclical: ideas, political movements, economic and political conditions and parties, rise and fall against a background of human nature that has certainly evolved over many generations but is unlikely to change much from one generation to the next.

And then there is the stasis of the political left, the so-called “progressives.” They may believe that human nature is malleable and can be bent in certain directions toward idealistic perfection. But in the economic sphere, they clearly believe that a country’s goods and services, its money supply, and its scale and rate of personal and business transactions are all fixed. They believe the economy is a zero-sum game, where one person wins and prospers only through another person’s loss and misery. If one person grows rich, then he must have made a number of other people poor in proportion. As I have written before,2 the economy is not a pie, where if I take a bigger slice you must then get a smaller one. Instead, the economy is an ecology, where the more life that exists and the more transactions that take place, the more wealth is generated for people to garner and to hold onto or share. The difference between a rainforest and a desert is not simply the amount of rain that falls, but the scale of interactions taking place on the ground and in the forest canopy.
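To make the “not a pie” point concrete, here is a minimal sketch in Python (the goods and utility numbers are invented purely for illustration, not drawn from any economic model): the stock of goods never changes, but a voluntary trade moves each good to the person who values it more, so total perceived wealth rises with every transaction.

```python
# Toy illustration of a positive-sum exchange. The goods and the
# utility numbers below are invented purely for this example.

def total_utility(agents):
    """Sum each agent's subjective value of the goods he or she holds."""
    return sum(agent["values"][good]
               for agent in agents
               for good in agent["holds"])

# Two agents; each values the other's good more than their own.
alice = {"holds": ["wheat"], "values": {"wheat": 3, "cloth": 8}}
bob   = {"holds": ["cloth"], "values": {"wheat": 7, "cloth": 4}}
agents = [alice, bob]

print("Total perceived wealth before trade:", total_utility(agents))  # 3 + 4 = 7

# A voluntary swap: each side gives up the good it values less.
alice["holds"], bob["holds"] = bob["holds"], alice["holds"]

print("Total perceived wealth after trade: ", total_utility(agents))  # 8 + 7 = 15
```

Nothing new was baked, mined, or printed in that exchange, yet both parties are better off by their own reckoning. Multiply that effect across millions of transactions and you get the rainforest rather than the desert.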

And then, in the natural world, the progressives believe that conditions are—or should be—fixed and unchanging. They revile “climate change,” which also runs under the heading of “anthropogenic global warming,” because without humankind’s burning of fossil fuels and other environmental insults, they believe, the world would not change around them. They view a rise in global temperatures by a few degrees centigrade in a hundred years as an absolute catastrophe, missing the point that global temperatures have been rising steadily since the Maunder Minimum—a blank-faced Sun lacking in magnetic disturbances and sunspots—in the 17th century. They dismiss any historic temperature fluctuations—the Roman Warm Period, the Dark Ages Cold Period, the Medieval Warm Period, and the Little Ice Age of the 1600s—as a political fantasy. Similarly, they believe the sea level won’t rise without anthropogenic causes, as if shorelines haven’t been shifting, advancing and retreating, causing the relocation of city centers and populations for centuries, if not millennia.3 They also believe in a fixed amount of resources in relation to human population growth and density—hence the predictions of Malthusian famines and “peak oil.”

And finally, progressives appear to believe that, once they have achieved their goals, once human nature has been adjusted and perfected, once society is ordered according to rational and egalitarian principles, and once the world is led by like-minded professionals, then history will stop. That humankind, having reached the socialist or communist utopia, will cease to change and will simply roll forward, from one five-year plan to the next, in peace and personal safety, forever. For any thinking person, this idea is as narrow and stultifying as the religious conception that when we die we migrate to a place of either perfect bliss or perfect torment and reside there, as a disembodied but functioning and remembering consciousness, forever.4

Disbelief in and fear of change are written into most of the human experience. Change that we do not consciously plan for and initiate engenders the unknown. We believe we can plan for the future, and so a future that is moving ahead without us creates anxiety and dread. This is the product of having an active, thinking, projecting mind—the workings of the brain’s prefrontal cortex—that tries to meet an active, changing environment with an unknowable future.

And that, too, is part of the human condition.

1. Consider the very nature of that word, conservative. It means someone who “conserves” or “preserves.” A conservator is one who enters into another person’s life—when this person’s health or mental condition is failing—and helps them preserve their living situation, their wealth, and their dignity. To conserve something is to protect it and keep it from the ravages of time, the vicissitudes of nature, or careless waste by thoughtless humans.

2. See, for example, It Isn’t a Pie from October 3, 2010.

3. As if, some twenty thousand years ago—well within the experience of modern human beings—the glaciers did not cover the northern reaches of North America in ice a mile deep, and the shoreline did not extend down 400 feet to the edges of the continental shelf, all without human intervention.

4. And isn’t “forever” itself a static concept? Nothing is forever.

Sunday, June 14, 2020

Seeding the Stars

[Image: Starfield]

I have already written about the ubiquitous nature of DNA on this planet,1 how every life form we can find—plant, animal, bacterial, and so on—uses the same basic replication system no matter how distantly isolated or temporally antique that organism may be. DNA is so successful, and so uniform across all these forms, with no surviving competitors, that it may not have evolved on Earth at all. And that raises the question of how an intelligent species goes about adapting its kind of life to the different kinds of planets likely to be found in nearby solar systems.

In James Blish’s The Seedling Stars from 1957, scientists from Earth visit various exoplanets, sample their environments, and then genetically modify human beings to exist and thrive under those conditions. Perhaps they will have to breathe a different atmosphere, or stand up under heavier gravity, or—in one case—live as microorganisms in a bacteria-rich puddle on an otherwise dry and barren planet. It’s an interesting concept: making humans adapt to the environment on the ground, rather than living under domes of tailored atmosphere or trying to terraform the planet to fit human needs.

But Blish’s story requires a lot of hands-on synthesis. The geneticists and environmentalists on the seeding ships have to do a relatively quick study of the planet, figure out all of its advantages and pitfalls, and then redefine the human structure in exquisite detail—adapting the entire respiratory or alimentary system, adding muscle mass, or—in the case of that puddle—reducing the structure to microscopic size. While the concept makes a good story, I can imagine that if the scientists were wrong by a few degrees of deviation or just a couple of genes, the adapted humans would fail to function or die out in a generation or two.

But then, as we learned in the Biosphere 2 experiment, it is extremely difficult to develop a self-sustaining environment with resident human beings that is not continuously replenished from outside. We can maintain a crew aboard a ballistic missile submarine that operates submerged for months at a time only by having it surface, restock, and refit its systems after every mission. Living that way on the Moon or Mars without a continuously operating and available supply line from Earth may be more of a challenge than we think. Setting up to live under domes on a planet light years from our home world will likely be impossible.

The best way to adapt an organism to an alien planet may be to start small and literally from scratch. Send in a hardy bacterium that has only a few basic requirements for temperature, water, and food. Let it adapt, evolve, and spread by relying on the remarkable ability of the DNA-RNA-protein system to mutate and thrive according to available conditions. And then see what kind of life form—perhaps as intelligent as human beings, perhaps only as intelligent as dinosaurs and dogs, perhaps no more intelligent than slime molds—develops after a couple of million or a billion years.

Since you wouldn’t be sending your own life form—with its already developed physical characteristics, intelligence level, and environmental requirements—to the new planet, you wouldn’t be staking much on the “humanitarian” value of the colony and its fate. You might not even know that the planet you were sending your probe microbes to was capable of supporting life in the first place. You probably wouldn’t bother if they were going to land on a frozen gas giant or a hell world too close to its star. But close observation and exact measurement are difficult at interstellar distances. You could be wrong by a few degrees of deviation and still be successful in gaining a foothold for life—maybe not life in your own physical form but something sharing your basic gene structure—on a new world.

The odds of complete success—creating a life form that has recognizable intelligence and the ability one day to communicate back to your home world—would likely be small. And the timeframe would likely be long, on the order of millions or billions of years. So you would not place a great deal of trust or effort in the success of any single seeding mission. That being the case, you would not send just one or two missions to the most obvious planets and hope for the best. You would make a minimal investment in each of hundreds or thousands of missions.

You might not be sending your designated seeds of life inside sealed and monitored environments aboard expensive, guided rockets with their own navigation and braking systems, as well as some method for gently sowing your seeds upon arrival. Instead, you might just reduce your hardy bacteria to a spore state that’s able to endure and protect its DNA-RNA-protein system for a thousand or a million years in space, scatter it on a comet or carbonaceous asteroid, and fling it out of your own solar system in any likely direction. Eventually, it will hit some solid body. Possibly, the spores will find a conducive environment and germinate. And maybe one time in a thousand, those seeded bacteria will mutate, evolve, and spread into something like complex life.
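Taking that one-in-a-thousand figure at face value, a little arithmetic shows why sheer numbers are the whole strategy. If each scattered spore package succeeds independently with probability $p$, the chance that at least one of $N$ packages takes hold is

$$P = 1 - (1 - p)^N.$$

With $p = 1/1000$, a thousand packages give $P \approx 1 - e^{-1} \approx 0.63$, and three thousand give $P \approx 0.95$. Many cheap, unguided missions beat a few expensive ones.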

Do I think that’s what happened here? Do I think that we—the advanced hominids of planet Earth, product of three billion years of evolution—are the long-lost colonists of some ancient race of intelligent beings who were attempting to seed the stars with microbial DNA? Is there any way to know?

Not really. But it’s as likely an explanation as any other.

1. See Could DNA Evolve? from July 16, 2017. This blog describes the ubiquity on Earth of the DNA-RNA-protein system, not just in its basic pattern or structure but also in its finest details—like the choice among the various purines and pyrimidines for its four bases, the three-base codon reading frame, and the limit of just twenty amino acids it uses in protein synthesis. This replication system seems to have evolved here without any significant or lasting competition—or perhaps it didn’t evolve here.
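A bit of arithmetic behind that footnote: with four bases read three at a time, the number of possible codons is

$$4^3 = 64,$$

far more than needed to specify twenty amino acids plus stop signals, which is why the genetic code can afford to be redundant, with several codons mapping to the same amino acid.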

Sunday, June 7, 2020

The Other

[Image: Colored pencils]

I may be naïve, especially when it comes to political theory. But I believe the root problem we face in this country in a time of divided politics is that we tend too often to see people in cohesive and monolithic groups and then identify them as “the other.”

“The other” is scary. It is defined as unknown, also unknowable, not like us, not approachable, not possessing our own core values, not responding to our most basic hopes and fears … not really human. The other is the bogeyman, the monster, the unnamable thing that hides in plain sight yet out of sight, under the bed or in the closet. The other is the dark. The other is danger.

This fear of core difference is so buried in the human consciousness that it might go back to earlier times, when human beings were not all one species but divided by actual differences in their basic genome. Forty or fifty thousand years ago—that is, two thousand generations or more ago1—Homo sapiens wandered up out of the African Rift Valley and met an earlier form, H. neanderthalensis, in the forests of southern Europe. How the older model differed from the newer in terms of bone structure, braincase size, intellectual capabilities, motor skills, and social structure is still open to question. We know that a certain amount of interbreeding took place, because modern humans of European descent have been shown to carry up to about four percent of genes identified as Neanderthal, which are not present in humans whose ancestors remained in Africa.

Back at that meeting in the forest, the Neanderthals really were the other. Maybe they were tougher, or stronger, or clumsier, more or less energetic, or simply less sophisticated in their speaking, grooming, and eating habits. Maybe they just smelled wrong, so that only the most sexually adventurous among the H. sapiens band would try to pin down and mount one of their females—or not resist too strongly when one of their males tried the same thing. But while there may have been some interbreeding, there never was a coalescence. H. sapiens and H. neanderthalensis remained as separate human species and did not disappear into a new and improved—or devolved—human form. We know this because Europeans, even with that admixture of Neanderthal genes, are still thoroughly H. sapiens. Their genetics do not differ from those of human beings that remained in Africa or settled in Asia, other than those unimportant and easily blended genetic variations that all humans carry.2

But that early encounter with a different form of human being, layered on previous encounters up the line with Denisovans, members of H. erectus, and other hominins who were almost but not quite human, may have left a psychic scar. Certainly we don’t feel about other, more distant species—our dogs, cats, and horses—in the same distrustful way. And we fear tigers, bears, snakes, and spiders only because we have found them, or some of their related species, to be innately dangerous. But bears and snakes are not, for us human beings, “the other.” That status is reserved for people who are like us but definitely not us.

To see the other in people is a kind of blindness. It is to overlook the obvious fact that we are all individuals with differences, some good traits and intentions, some bad, but mostly bland and indifferent. If you take people one at a time, as individuals, with a certain amount of basic respect for our commonalities and tolerance for our differences, then you don’t fall into the trap of treating whole groups of people as “the other.”

But these days it’s politically advantageous to divide people into groups, to define them by their difference rather than their commonality. And this tendency applies not just to physical, mental, and emotional differences but also—and sometimes more importantly—to differences in class, status, and sometimes even profession.

For many consumers, the manufacturers and retailers of basic necessities have become “the other.” For many who identify as blue collar or working class, the business owners and financial-services providers who support them—collectively, capitalists—are “the other.” For many people with a chronic medical condition, doctors, hospitals, pharmaceutical and insurance companies become “the other.” In every case, the other is composed of people whose interests and intentions are markedly different from yours, therefore unknowable, therefore dangerous.

I just viewed the film based on John le Carré’s The Constant Gardener, about a British diplomat uncovering the scandal of a European pharmaceutical company testing a flawed tuberculosis drug in Africa and hiding the negative results. The presumption is that the drug makers, the clinical trial operators, and the government diplomats coordinating between them are all so driven by profit and so desensitized to human suffering that bringing a failed drug with deadly side effects to market would seem like a good strategy. That is, the drug companies and their technical supporters are the unknowable and dangerous “other.”

For a number of years I worked in the pharmaceutical business as a documentation specialist. I can tell you that the people in these companies, at every level, from operators in the labs and production suites to executives in the home office, care very much about the health of their potential patients and the safety and efficacy of their products. This is because these people are fully human and have respect for themselves, their endeavors, and their fellow human beings. Bringing out a bad drug that kills people, however profitable it might be in the short term, is a bad business decision in the long term. Consequences catch up with you. No one wants to be ashamed of the company they work for because it carelessly killed people. The scientists and technicians I worked alongside were not “the other” but instead were respectable human beings who cared about human life, as well as the laws and accepted practices of the society in which they operated.

To suppose that pharmaceutical employees are soulless demons driven solely by the profit motive—or that bankers are heartless demons seeking to foreclose on their borrowers, or that food processors and grocers are mindless demons selling poisons disguised as nourishment—is to miss the fact that these are all people endowed with much the same sensibilities and concerns as yourself. If you believe that humans in society are basically corrupt and conscienceless, then you might also believe that respected local museums, regardless of their dedication to art and history, quietly sell off their most valuable artifacts to private collectors for profit and replace them with fakes fabricated by their curators. Or that doctors, despite their Hippocratic oath, routinely treat patients with unnecessary tests, procedures, and medicines simply to inflate their billings. Or that police officers, regardless of their oath to protect and serve, are sociopathic bullies who use their power to mistreat innocent civilians.

To see only the differences in people and fail to grant them provisional respect and tolerance as human beings is to succumb to the fear and loathing of “the other.” To see people as faceless masses with unknown motives and intentions is to succumb to the myth of “the other.” Casual discrimination, blindly lumping people together as types, remaining deaf to individual traits and capabilities, is easy. It’s lazy. It’s the failure to act as a thinking being.

And it’s tearing us apart.

1. Counting a generation as approximately twenty years. Of course, in hunter-gatherer societies, where life was closer to the bone, childbearing and so the start of the next generational cycle might begin soon after puberty. That would put a “generation” at more like twelve to fifteen years. But we’re dealing in approximations here.
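For the record, the arithmetic behind the body text’s figure:

$$\frac{40{,}000 \text{ years}}{20 \text{ years per generation}} = 2{,}000 \text{ generations}, \qquad \frac{40{,}000}{15} \approx 2{,}700, \qquad \frac{40{,}000}{12} \approx 3{,}300.$$

So “two thousand generations or more” holds under any of these assumptions.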

2. Here I’m talking about inherited characteristics like peculiarities in the shape of eyelids, noses, or lips, certain distinctive body types, and certain predispositions to or protections against disease. The most obvious genetic difference, that of skin color, is the most widely variable and pliable characteristic. It has more to do with where a population most recently resided—in the sunlight-rich tropical regions or in the relatively sunlight-poor higher latitudes—than any immutable genetic predisposition. Take a Congolese family to Finland and let them live there for a hundred generations without interbreeding with the locals, and their skins will naturally lighten. Transplant a Finnish family to the Congo for a hundred generations, and their skins will darken. Skin coloration—an increase in melanin distribution—is a protective adaptation against ultraviolet radiation and not a permanent feature of the visible characteristics we tend to associate with “race.”