Sunday, May 2, 2021

The Rising Curve

Compound steam engine
Steam turbine blades

The rates of increase of a slope and a curve have different mathematical properties. A straight slope grows arithmetically, adding the same amount at every step (1, 2, 3, 4 …), while a curve grows by compounding—quadratically, through squared terms (1, 4, 9, 16 …), or exponentially, through repeated doubling (2, 4, 8, 16 …). A parabolic curve, at least the part that proceeds upward from its low point, is generated by the formula ax² + bx + c,1 and it can rise really fast. My contention here is that our technological advancement since about the 17th century has been on a parabolic curve rather than a slope.
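To make the contrast concrete, here is a minimal Python sketch of the three growth patterns—linear, quadratic, and exponential—evaluated over the same ten steps. The function names and coefficient choices are illustrative only, not anything from the original text:

```python
# Compare the three growth patterns over the same ten steps.
# Coefficients a, b, c default to the simplest parabola, y = x^2.

def linear(x):
    return x  # arithmetic growth: add one unit per step

def quadratic(x, a=1, b=0, c=0):
    return a * x**2 + b * x + c  # the parabola ax^2 + bx + c

def exponential(x):
    return 2**x  # repeated doubling

steps = range(1, 11)
print([linear(x) for x in steps])       # 1, 2, 3, ... 10
print([quadratic(x) for x in steps])    # 1, 4, 9, ... 100
print([exponential(x) for x in steps])  # 2, 4, 8, ... 1024
```

By step ten, the straight slope has reached 10, the parabola 100, and the doubling curve 1,024—which is the whole point about compounding.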

In ancient times—think of Greece and Rome from about the 8th century BC, the first Olympiad for the Greeks, or the mythical founding of their city for the Romans—there was technological advancement, but not even a slope. More of a snail’s pace. The Greeks had their mathematicians and natural and political philosophers, like Pythagoras and Aristotle, but aside from writing down complex formulas and important books, which probably only a fraction of the populace bothered to read, their works did not materially improve everyday life. The Greeks never united their peninsula politically, for all their concept of democracy, remaining stuck at the tribal and city-state level of conflict. And from one century to the next, they drank from the local wells, shat in the nearby latrines, and traveled roads that washed out every year with the spring floods. They built in marble the temples of their gods, but otherwise the average people lived in houses of wood and mud brick not much different from those of their predecessors in Homeric times, five centuries earlier.

The Romans did somewhat better, being short of actual philosophers but abounding in practical engineers. They developed a democratically based political and military system that united their peninsula and went on to conquer most of their known world. They built huge aqueducts to bring fresh water into their cities from distant springs, underground sewers to take away human wastes, and roads dug many layers deep into the ground that could reliably move goods—and armies—from one end of the empire to the other. They built temples and palaces in marble laid over brick but also invented a synthetic stone, concrete, that their engineers originally made from a volcanic ash known as “pozzolana.” Common people in the city lived in apartment blocks called insulae, or “islands.” They bathed regularly and made a civic virtue of the practice. Life was better under the Romans, but technological advancement was still glacially slow.

Rome, at its fall in the Western Empire during the 5th century AD, was technologically not much different from the Rome of Julius Caesar, five centuries earlier. And that fall—due largely to climate change and the ensuing barbarian migrations—plunged Europe into a Dark Age that saw little advancement in any of the arts, although we did get some practical technologies like the wheeled plow and the stirrup. Those, along with gunpowder adopted from Chinese fireworks and movable type adapted from the Chinese by Gutenberg in the 15th century for printing Bibles, carried us through to the 16th century, the time of the Tudor reign in England or the Medici in Italy.

After that, technologically speaking, all hell broke loose.

Some might credit René Descartes and his invention of analytic geometry and his contributions to the scientific method, based on observation and experiment; or Isaac Newton and his invention of the calculus (also developed independently by Gottfried Leibniz in Germany) and his studies of gravity and optics; or Galileo and his work in physics and astronomy. Intellectually, it was a fruitful century.

But from an exhibit that my late wife prepared at The Bancroft Library years ago, I learned that a more immediate change came about with the exploration of distant lands and when the European trading companies set up to exploit them began importing coffee and tea into the home market in the 17th century. Before then, people didn’t drink much water because of rampant contamination; so instead they drank fermented beverages—sweet wines, small beer, and ale—because alcohol helped kill the germs, although they didn’t think about it in those terms. So they would sip, sip, sip all day long, starting at breakfast, until everyone was half-plotzed all the time. But then along came coffee and tea, which were good for you because you had to boil the water to make them. Everyone brightened up and began thinking. The denizens of Lloyd’s Coffee House in London invented insurance companies to protect the sea trade, which required estimates of risk and probability, and that led to a whole new branch of mathematics and the spirit of investment banking.

Put together scientific investigation with the widespread availability of printed books and the clear minds to read them, and we’ve been on that rapidly rising parabolic curve ever since.

We are just over three hundred years from the first steam engine, patented in 1698 to draw water from flooded mines. In the time since then, the engine has advanced through compound and triple-expansion cylinders to turbine blades. And that is the least of our advances. This year, we are just two hundred years from the first primitive electric motor, built by Michael Faraday in 1821. And now we have motors both small and large driving everything from trains, elevators, and cars to vacuum cleaners and electric shavers.

In my lifetime, I have seen music go from analog grooves cut into vinyl disks and magnetic domains on paper tape to digital representations stored on a chip, and photography go from light-sensitive emulsions on film and paper to similar—but differently structured—digital sequences on chips. My electric typewriter—again driven by a small motor—has gone from impact-printing metal representations of the alphabet on a sheet of paper to storing different digital sequences on that same chip in my computer. All of this puts the stereo system, camera, and typewriter I lugged off to college fifty years ago into a single device that started out on my desktop, migrated to my laptop, then moved into my hand inside a smart phone, and now lives on my wrist instead of a watch. And the long-distance call I made every week from college to my parents at home was once a direct wire connection established by operators closing switches; it would now be a series of digitized packets sent out through the internet and assembled by computers at each end of the conversation. Gutenberg’s process for printing words on paper is now embodied in the photo-masking of electronic circuits on silicon chips. And we’re not done yet.

In 1943, British engineers—building on Alan Turing’s theoretical work and his earlier code-breaking machines at Bletchley Park—completed Colossus, the first programmable electronic computer, designed to crack German ciphers for the Allies. In 2011—just 68 years later—that machine’s lineal descendant, IBM’s Watson, was playing, although not necessarily consistently dominating, Jeopardy!—a game of trivia based on history, culture, geography, and sports, and dependent on linguistic puzzles and grammatical inversion. While that was a stunt, similar “artificially intelligent” systems based on the Watson design are now being sold to businesses to analyze and streamline operations like maintenance cycles and supply chain deliveries. They will take the human element, with its vulnerability to inattention, imagination, and corruption, out of processes like contracting and medical diagnosis. Any job that involves routine manipulation of repetitive data by well-understood formulas is vulnerable to the AI revolution.2

Add in separate but related advances in materials, such as 3D printing—especially when they learn how to make metal-resin composites as strong as steel—and you get disruption in much of manufacturing, along with the global supply chain.3

Any theory of economic value that depends on human brawn—I’m looking at you, Marxists—or now even human brains is going to be defunct in another half century. That’s going to be bad news for countries that rely on huge populations of relatively unskilled hands to make the world’s goods, like China and India.

Intelligent computers are also able to do things that human beings either cannot do or do poorly and slowly. For example, in November 2020, Nature magazine reported on an AI that can predict and analyze the 3D shapes of proteins—that is, how they fold up from their original, DNA-coded amino acid sequences—almost as well as the best efforts of humans using x-ray crystallography. And this was just 20 years after the first sequencing of the human genome using supercomputers, and only about 68 years after the first glimpse of the DNA molecule itself using x-ray crystallography. Knowing the structure and thereby the function of a protein from its DNA sequence is a big deal in the life sciences. It will take us far ahead in our understanding of the chemistry of life.

Ever since the 17th century, our technology has been riding a curve that gets steeper every year. And the progress is not going to slow down but only get faster, as every government, academic institution, and industrial leader invests more and more in what I call this “enterprise of science.” Anyone who reads the magazines Science and Nature can see the process at work every week.4 We all stand on the shoulders of giants. We stand on each other’s shoulders. We build and build our understanding with each advance and article.

This rate of increase might be slowed, marginally, by a global depression. We might be set back entirely by a nuclear war, which might revert our technological level, temporarily, to that of, say, the telegraph and the steam engine. But it will only be stopped, in my estimation, by an extinction event like an unavoidable asteroid or comet strike, and then so much of life on this planet would die out that we humans might not be in a position to care.

As to where the curve will lead … I don’t think even the best science philosophers or science fiction writers really know. Certainly, I don’t—and I’m supposed to write this stuff for a living. The next fifty years will take us in perhaps predictable directions, but after that the effects on human economics, culture, and society will create an exotic land that no Asimov, Bradbury, or Heinlein ever imagined. Fasten your seat belts, folks, it’s going to be a bumpy ride!

1. That’s the general quadratic polynomial. And no, I don’t really understand the formula’s properties myself, having nearly flunked Algebra II.

2. But no, the computer won’t be a “little man in a silicon hat,” capable of straying far outside its structural programming to ape human intellect and emotions—much as I like to imagine with my ME stories. And it won’t be a global defense computer “deciding our fate in a microsecond” and declaring war on humanity.

3. It’s become a commonplace that the U.S. lost its steelmaking industry first to the Japanese, then to the Chinese, because they were more advanced, more efficient, and cheaper. Not quite. This country no longer makes the world’s supply of bulk steel for things like pipe, sheets, beams, and such. But so what? We are still the leader in specialty steels, formulations for a particular grade of hardness, tensile strength, rust resistance, or some other quality. Steelmaking in our hands has become exquisite chemistry rather than the bulk reduction of iron ore.

4. For example, just this morning I read the abstract of an article about adapting the ancient art of origami to create inflatable, self-supporting structures that could be used for disaster relief. I read and I skim these magazines every week. And frankly, some of the articles, even their titles, are so full of references to exotic particles, or proteins, or niches of mathematics and physics that I can only guess as to their subject matter, let alone understand their importance or relation to everyday life.

Sunday, April 25, 2021

Understanding Alien Psychology

Borg Queen

I have been thinking and blogging about the potential for finding and understanding aliens a lot recently, ever since reading Avi Loeb’s book on the interstellar object ‘Oumuamua, Extraterrestrial: The First Sign of Intelligent Life Beyond Earth. Now I am heading into realms that are totally unknowable—except from the viewpoint of what we know on Earth. But bear with me …

First off, I am not too interested—well, very, but not for the purpose of this meditation—in simply finding signs of life. We’ve seen things that could be confused with fossilized cells in the surface geology of Mars, and we suspect we might have found gases that could only be created by life-as-we-know-it in the atmosphere of Venus. When we get to other planets, both in this solar system and around other stars, we may well find chemical reactions and physical structures that we, from inside the realms of earthly biology and human understanding, define as “life.” Some of it may be intelligent but a lot of it, like most of the living forms on Earth, will not be what we choose to call “intelligent” or even “sentient.” Slime molds, for instance—honking huge single cells with eukaryotic nuclei—can move toward food and away from irritants in a fashion that seems to be intelligent or at least resembles neural networking.1 But it’s not going to build a rocket and come visit us.

When I think about aliens, I imagine the kind that will leave their planet and come out among the stars, as so much of Western-civilized humanity apparently hopes to do one day. And until we go out there, we’ll just have to wait for them to come to us.

So, first question. Will they look like us? Or even come close—like the various humanoid species that populate a Star Trek episode? I don’t believe it. As Carl Sagan once said, we’d have more success mating with a petunia than with an extraterrestrial lifeform.2

Earth has a long history of large, active lifeforms that might have developed intelligence but as far as we know did not. The dinosaurs come to mind: the family Tyrannosauridae and their cousins were bipedal, oxygen breathing, hunters and perhaps also scavengers, and probably—maybe—at the top of their food chain. But we have no evidence that they exhibited any real intelligence greater than that of a lion or house cat, wolf or dog, or even a shark. And yet the dinosaurs’ distant progeny, the Corvidae family—crows and ravens—as well as many other species of birds have a kind of intelligence we cannot explain. Even octopi—what? a mollusk?—exhibit a high level of intelligence. So size, shape, and mammalian ancestry are not necessarily prerequisites for intelligence. Still, none of these animals from Earth and its history is going to build a radio or a rocket anytime soon.

But the examples from Earth also suggest that we cannot expect to find our own kind of intelligence out among the stars, even if it wears an unexpected shape or inhabits an environment—like the earthly ocean of octopi and whales, or perhaps the liquid water under Europa’s ice, or the methane seas of Titan—in which humans don’t particularly thrive. Life on Earth did not, after all, start out on the land, although that’s probably the best place to build a radio or launch a rocket above the atmosphere.

One particular axis we are likely to encounter in alien psychology is that between the individual and the group. So far, on Earth, the sort of intelligence that is likely to expand to encompass curiosity, technology, and eventually space travel is fixed in individual entities. We humans are separate and complete persons inside our own bodies. We are socialized into groups, certainly, in which we can function for enhanced performance. But we do not become lost, stricken, enfeebled, and die when separated from our group—or at least not right away. And we see this pattern not only in human tribes but also in monkey troops, wolf packs, whale pods, cattle herds, and other social groupings.

Each of us knows or tries to find our place in the group, establish a niche where our capabilities and levels of aggression or empathy best fit, and seek comfort and contentment—or at least a subdued level of rebelliousness—in that placement. We are social animals. And that is the basis of all culture and intergenerational achievement. The fabled lone wolf, the mad scientist, the antisocial genius who works alone and keeps his notes in a secret code—such beings are of interest to us as fiction but they are not the creators of lasting culture or enduring civilizations. They build no great cathedrals, establish no great cities, lead no great social or political movements—and they don’t send rockets to the Moon.

So, we think, the kinds of intelligence we will find out among the stars will be like us in that: socialized individuals, each with his, her, or its own personality, preferences, anxieties, and dreams.

But we have another example on Earth to draw on: the hive mind. Whether in the beehive, the anthill, or the termite colony, the individual entities—the minds inside the separate bodies—are not really individuals as we humans understand the term. They are physically adapted to their tasks and place in the hive structure, and their minds are shaped—one might say innately programmed—to perform those tasks and not question their role. Even the queen is not a ruler or leader but simply the pampered sexual progenitor, the mother of them all, that ensures the colony’s survival and renewal.

Something of this was captured in the movie Star Trek: First Contact, where we are introduced to the Borg Queen (pictured nearby with Alice Krige playing the part). But although the Borg are a collective of mechanized humanoid lifeforms whose brains are electronically networked, they are not really a hive, and the queen is neither their mother nor their first member. The queen speaks with a voice and persona that can call up the collective mind but can also examine it, see its options and possible choices in context, contrast their existence with what she knows of humanity, and evaluate the Borg from the outside. She is more like a leader or first speaker than the sexual progenitor of the collective.

When we look to more imaginative literature on the idea of the hive as a society, the offerings are few. To my thinking, Frank Herbert has done some of the best work on this. His novel Hellstrom’s Hive examined what it would take to change human beings into the sort of social insects that could function most efficiently in the politically denuded world. And his novel The Green Brain imagined a hive of Amazonian insects that functioned as a single conscious entity, in the same way that the cells in our human bodies work together to create the reality—or perhaps it’s just the illusion—of a single person with independent will and desire.

We might encounter some variation of this colony structure, this collective intelligence that is not separated by the strands of individual personhood, out in the universe.

The question in my mind is whether this kind of intelligence is creative or merely reactive. A colony of honeybees or ants can adapt to its environment, find flowers or other foodstuffs when the weather is right, make its nest or hive nearby, and deal effectively with changes in environment and temperature, or else swarm to find a new nesting site. But can they only think and react to present and immediate needs? Could they eventually engineer changes in that environment? Could they look beyond the immediate locale and imagine ways to make it different? Could they look out at the stars and dream of visiting them? Or are they bound to the world as they know it, in a way that human societies are not?

Every group of socialized individuals is built of leaders supported by their pack of generally submissive followers, plus the potential outsiders—the rebellious youth and the mad geniuses—who question the social order, its structure and purpose, and seek something new or just different. At least, that’s how the human tribe has functioned and flourished. That is how we broke the bonds of merely reacting to our hunter-gatherer environment, engineered a better life through agriculture, created written records to preserve intergenerational knowledge, adapted invention and technology to improve everyday life, and then looked outward to the stars.

In our experience, based on that group of socialized individuals, progress depends upon imperfections in communication, upon differences of opinion and individual dreams, upon disagreements and conflicts. These are the very things that the anthill or the beehive cannot survive. The resolution of these disruptions is never pretty and neat, and it’s never complete and finalized.

But without disruption and disquiet, you have the structured cooperation, the orderly processing and virtual stagnation, of the colony animal. That, or the brainless neural-net reactivity of the slime mold. And neither of them, I warrant, will be coming here anytime soon.

1. See, for example, Mycologist Explains How a Slime Mold Can Solve Mazes, from 2019.

2. But maybe we are connected after all, as in the panspermia hypothesis. See, for example, my meditation on The God Molecule from May 28, 2017.

Sunday, April 18, 2021

Understanding Alien Technology

Alien landing

Recently1 I reported on the interstellar object ‘Oumuamua (“oh-moo-ah-moo-ah”) and why astronomer Avi Loeb believes it to be a piece of alien technology instead of a wandering asteroid or comet. Because of his prior involvement in designing a project to send probes to a nearby star using lasers and lightsails, and because of ‘Oumuamua’s apparent similarity to one of these lightsails, Loeb accepts this as the possible explanation for an object of extremely light weight, flattened structure, and high reflectivity. He does, however, admit that this is only a comparison, and the object could certainly have other technological explanations.

While I accept his analysis of ‘Oumuamua from what we could detect at the time of its passing, I am powerfully reminded that we probably cannot understand or even guess at the nature of a piece of technology arriving from an advanced, spacefaring civilization. And I base this on the great technological divide that exists in our own recent history.

As I’ve written several times elsewhere, you could bring an educated first-century Roman forward in time to Europe in about the eighteenth century, and he could easily recognize most of what he saw. Styles and techniques would certainly be different in fabrics, clothing, and other everyday items, like modern carriages, the tack of the horses pulling them, and the roads they drove on. He would have some trouble understanding how printed books came to be so widespread, but he could easily grasp the principles involved, once they were explained to him.

Our ancient Roman would have a somewhat harder time with a flintlock rifle, because his era knew nothing of gunpowder and had no experience with small, contained explosions—although his people were familiar with volcanic eruptions. But without going into the structure of atoms and the chemistry of molecular bonding through the trading and sharing of electrons, you could tell him the explosion was a kind of very rapid burning, and he could accept it. In fact, I suspect many people today without formal education understand most explosions as such. And as for the rest of eighteenth-century technology, the ancient Roman could be brought up to speed in an afternoon.

Now consider that same Roman brought into the world just two centuries later. Photography, recorded music—even in their analog versions, let alone modern digital transmission and storage—and other technologies we all have taken for granted since childhood, some of them since the dawn of the twentieth century, would be perplexing to him. Try explaining a light-sensitive film emulsion or the technique of recording sound waves on a vinyl disk, and you must first explain the physical wave properties of light and sound. Well, yes, start with ocean waves and work your way upward. And then go on to electricity and its relationship to lightning bolts. Then there are photons, radio waves, and the whole business of radio and television. Don’t forget electric circuits and transistors—the backbone of digital technology. Oh, and the steam engine and internal combustion, automobiles and airplanes. Not to mention microbes, cellular biology, evolution, and genetics.

It would take a couple of days just to catalog all the realms of science over which your ancient Roman temporally jumped. It would take several months of general science courses before he could even begin to understand the physics, chemistry, and other discoveries behind these technologies. Otherwise, it would all be magic involving either godlike or demon-inspired forces.2

If you doubt this, let’s try a thought experiment. Go to your computer, run a Google search on some topic—let’s say “Henry II of England”—and print out the results on your laser printer. Now get into your time machine and go back to a period after that Henry but before Gutenberg popularized printing and books in the mid-fifteenth century—say, the court of Henry III (reigned 1216 to 1272). Hand that printed page to any scholar or monk within reach. The monk would have the most experience of reading and writing, because he probably spent part of every day copying out the holy books. Let’s not ignore the fact that the English language has changed remarkably in diction, definitions, spelling, and orthography in the last seven hundred years, or that most of the people at court spoke a form of French by preference.

Your printed page would be not just a curiosity but practically indecipherable. Paper of that uniformly high quality did not exist in the West at that time. The fine and exact characters on the printed page would be unknown to people who dealt in handwriting—even conscientiously practiced calligraphy—and stone-carved inscriptions. And aside from the difference in language, the purpose of your printed page would be unguessable. Without a knowledge of the internet—whose technologies exceed even those we were trying to teach the ancient Roman—the listing and its references would be unimaginable. Even with an inkling of a world linked together by computers, so that what any one of them knew could be known to any other, the functioning of a search engine like Google—and before it Magellan, Lycos, and AltaVista—let alone online services like AOL, would take more than a day to explain and demonstrate.

Without such a guide and insight, your paper printout would be gibberish. Lacking the context of a search among dispersed references, the “message” would be incomprehensible as to its meaning and uses. What is “https://” or “www” or “.com” or “.edu”? Even if a monk could read beyond the language difference, these usages would be hieroglyphs without a Rosetta stone to put them in context. By itself, the paper would be an unsolvable mystery.

And so, while I can agree with Avi Loeb that ‘Oumuamua is likely a piece of technology—or technological debris—from beyond our solar system and therefore a sign of intelligent extraterrestrial life, I don’t think we will understand what it was used for. The analogy of a lightsail is extrapolated from technologies that we know and understand. But the reality, to us, might seem like magic. And even if we sent a probe out to capture the object or tear off a piece of it, we would still probably be in the dark.

We would have to wait to meet the aliens themselves and, like our ancient Roman, open our minds and really listen to their explanations. And only on that day would we begin to understand.

1. See Proof of Alien Life from April 4, 2021.

2. Arthur C. Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.”

Sunday, April 11, 2021

Things Worth Believing

Total honesty

In the delightful movie from 2003, Secondhand Lions, a young boy is left for the summer in the company of his two granduncles, who may or may not have lived the swashbuckling lives suggested by their stories. The kernel of their existence, it seems, is embedded in the speech that one of the uncles gives him concerning “what every boy needs to know about being a man.” We never get the whole speech, but here is the part that’s given:

“Sometimes the things that may or may not be true are the things that a man needs to believe in the most: that people are basically good; that honor, courage, and virtue mean everything; that power and money, money and power mean nothing; that good always triumphs over evil; that love, true love, never dies … No matter if they’re true or not, a man should believe in those things because those are the things worth believing in.”


We live in a cynical age, where most believe and repeat the thought that people left on their own recognizance are feckless and stupid, if not basically evil; where virtue is sneered at, courage is disparaged, and honor is a word out of the history books; where money and power are worshipped as basic goods to be obtained; where evil lurks in the heart of big corporations and/or big government, and these impersonal forces always win; where love is just another excuse for chasing after sex. In such an age, this speech by an old man should be held up to the light and examined, because it practically defines the idea of personal character.

We spend a lot of time these days trying to determine what is true, what is real, and too often what is useful. We forget that life happens in the moment. When a test of character comes at you, you cannot always be fretting about what may or may not be true. Usually, you can’t even know what’s true, or you don’t have the time to try to figure it out. In those moments that decide a person’s life, you just have to clench your jaw, set your mind, recall the things you actually believe in, and act as your better nature directs. And then you have to accept the consequences, come what may. Life is short. Character is all. And you never, or almost never, get a do-over.

So yes, sometimes you have to believe in things whether they are true or not, because they are necessary to good actions, proper choices, and happy outcomes. Also, because they are beautiful thoughts and will make you feel warm and secure.

But is this always the course to take? Should believing things, true or not, because these thoughts are worth believing, be the complete prescription for an examined life?1 I think that opens a door into outer darkness.

For example, the belief in a personal, omnipresent, and omniscient god—whether or not it’s true that a great being exists outside of human space and time and watches our every move—does have a tempering effect on society. People seem to function better when they believe they live in a spiritual panopticon,2 with someone, somewhere observing and judging their every action and holding them to a moral standard. It is also a beautiful thought that this universe has purpose, intention, meaning, and a conscious design; that life on this planet, especially human life, is more than just mindless growth, like bacteria or a tumor; that existence is more than circumstance, happenstance, and chaos; that someone, somewhere has a benevolent hand on the controls. As the 17th-century French philosopher Blaise Pascal suggested with his famous wager, that’s the way to bet.

But not everyone feels the need or perceives the active presence of a supreme being to watch over his or her actions and mete out punishment as necessary. Some of us have been raised in the humanist tradition, where reason and observed mechanisms of reciprocity and fair dealing govern our actions. And we are comfortable with the observations and hypotheses of scientific reasoning to determine what is actually going on in the universe, without the need for any guiding hand. So … is the concept of a benevolent, all-controlling, spiritual presence still something “worth believing” for these people?

For another example, the idea that human nature is perfectible—whether or not our actions and desires are partly informed by evolutionary biology, rather than a purely social construct that we can change at will—is an idea that attracts every generation of sociologists and political theorists. It is the beautiful thought that we, or some subset of human thinkers and activists, can create a paradise on Earth if only we can equalize human differences; eliminate the very human failings of greed and envy, anxieties about future security and personal advantage, and indeed all consciousness of self and family; and bring all humanity together by eliminating differences of opinion, the pursuit of private property and private enterprise, and adherence to national borders and national identity.3 This outcome would actually require rigid control of every aspect of life by the government or by a unified political party. But in the thinking and telling of these dreamers, the government itself withers away, people just become selfless and “good,” and all the turning points of human history—the crowning of kings, the wars of conflict and conquest, the disruptions of philosophical change and technological invention, the fluctuations of drought and flood, the surge and fade of the business cycle—all disappear into an endless, timeless human paradise.

But some of us value our own thoughts, ideals, and values, and we are not willing to give them up in the name of a presumed harmony. We value our freedom of action, while respecting the freedoms and independent agency of others, even if those freedoms lead to occasional conflicts and transient unhappiness. We love and strive for the safety and security of our families as the carriers of our unique genetic identity. We can recognize that people are different, and some of those differences result in groups, tribes, cultures, and nations that are not willing to sink into a homogeneous blandness, despite the promise of paradise. Although we recognize common traits among all human beings and common elements in all human societies, we still like to do things after our own fashion. Some of us are just stubborn that way. So … is the dream of a secular paradise through worldwide social and communal sharing still “worth believing” for the rest of us?

I could go on. Some ideas are so necessary and beautiful that they just have to be real, or you just have to believe them against a background of unbelief, chaos, and conflicting personal preferences. But beauty is in the eye of the beholder; so are truth and values. I find the sentiments of the uncle’s speech about manhood in the movie beautiful because they coincide with what I was taught as a child and have always felt. A serious religious thinker finds the invocation of a benevolent and all-powerful god beautiful because it is what he or she has always believed. And a dedicated socialist or communist finds the end of history in a form of secular paradise beautiful because the inconsistencies and internal failings of every other political and economic system are just too painful to imagine.

So … no. Some things are not meant to be believed just because they are the things “worth believing in.” Or rather, they are not meant for everybody, not universal, and not to be rigidly applied. In this, as in every other aspect of human life, each person is required to pick and choose for him- or herself. All we can ask is that they choose wisely.

1. Socrates—that old rascal idolized by Plato—is supposed to have said at his trial, “The unexamined life is not worth living.” That thought, too, has shaped generations of high school and college students. It certainly shaped me.

2. That is 18th-century English philosopher Jeremy Bentham’s model of the perfect prison. The prisoners’ cells are arranged in a circle with the doors facing inward, each door with a covered spyhole, and a guard roving up and down the inner hallway, randomly observing and noting the prisoner’s actions. The prisoner never knows when he is being observed and might be called up for punishment. … And George Orwell thought he had a handle on repressive societal schemes!

3. Consider all the verses of the John Lennon song “Imagine,” which just about sum up all the attributes of a passionless human perfection. I’ve always found this song insipid, if not outright wrong-headed and stupid. And the tune is just mournful.

Sunday, April 4, 2021

Proof of Alien Life


If you don’t read the science magazines, you may not be aware of the asteroid, or comet, or object that entered our solar system, passed around the Sun some months before October of 2017, and just as quickly went somewhere else. The object was spotted by the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) at Haleakala Observatory in Hawaii and was almost immediately identified as originating outside of our system—but only as it was already receding. Once so identified, however, it was given the designation 1I/2017 U1 and named ‘Oumuamua (pronounced “oh-moo-ah-moo-ah”), or “Scout” in the Hawaiian language.

Most astronomers consider it to be some kind of asteroid or comet, and the artist’s conception that was widely published (see nearby) shows a grayish or reddish oblong rock, clearly of natural origin. But let me be quick to point out that no telescope ever resolved the object’s image so clearly. All our telescopes could see—because ‘Oumuamua was already beyond Earth’s orbit when it was detected, and it was surprisingly small on an astronomical scale to begin with—was a faint point of light that varied in brightness over a regular eight-hour period. It was something really tiny and, by the time we saw it, pretty far away.

There the matter might have rested—a rock from beyond our solar system, an asteroid that had escaped from some other star system—if Avi Loeb had not taken up the issue. Loeb is an astrophysicist, an alumnus of the Institute for Advanced Study at Princeton, currently the Frank B. Baird, Jr., Professor of Science at Harvard University, formerly the long-serving chair of Harvard’s Department of Astronomy, and author of eight books of popular science and about 800 papers of serious scientific inquiry. He recently published his analysis of ‘Oumuamua, along with his lifelong involvement with the question of alien intelligence, in Extraterrestrial: The First Sign of Intelligent Life Beyond Earth (Houghton Mifflin Harcourt, 2021).

Being fascinated by the subject, I of course bought the book and devoured it right away. So consider this my book report on the subject. And accept that I find Loeb’s analysis convincing, even though most astronomers and cosmologists disagree and insist that ‘Oumuamua is still just a rock or other natural object.1 Remember, I’m a natural contrarian.

The first issue is ‘Oumuamua’s brightness. The observations suggest that its longest dimension is just about one hundred meters, or three hundred feet, the length of a football field. I imagine such a tiny object would not normally be visible at interplanetary distances, the distance at which we first detected it, unless it was really bright. The nature of the light we could see suggested it was reflected sunlight, not any artificial light the object might be emitting. At that distance, the reflective capacity—called “albedo” by astronomers—had to be much greater than that of a rock or even the ice of a comet, which is usually so contaminated that astronomers call comets “dirty snowballs.” ‘Oumuamua reflects sunlight like polished metal.

The second issue is the shape. From the variation in brightness, the observations suggest that the object was slowly tumbling. The amount of reflected light varying over time implies that if ‘Oumuamua’s longest dimension is three hundred feet, then its shortest is a little more than a tenth to a fifth of that, or about thirty to fifty feet. The artist’s conception draws this as a cigar shape, and I think of it as about the size and dimensions of one of our nuclear submarines. But Loeb presents an alternative shape as more likely: a disk or a pancake. Here I am interpreting the reasoning as discussed in the book: if the oblong or cigar shape were the object’s true nature, then we would have to be viewing it practically end-on—that is, with the axis of spin at right angles to our line of sight—for the variation to be this complete. If we were viewing it from the side—with the axis aligned with our viewpoint—then the variation would not be as great. But a tumbling disk could display that degree of variation from a variety of angles to our line of sight.
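The shape argument can be put in back-of-envelope form. Here is a toy model of my own (not Loeb’s actual calculation): if a tumbling body reflects sunlight roughly in proportion to the cross-section it shows us, then the ratio of its brightest to its faintest appearance approximates the ratio of its longest to its shortest axis. The factor-of-ten brightness swing is the commonly reported figure.

```python
# Toy model of the light-curve argument (an illustrative sketch, not
# Loeb's calculation): treat 'Oumuamua as a tumbling ellipsoid whose
# reflected brightness scales with the projected cross-section we see.
# The brightest-to-faintest ratio then approximates the ratio of the
# longest axis to the shortest.

longest_ft = 300          # roughly one hundred meters
brightness_swing = 10     # reported factor-of-ten variation in brightness

shortest_ft = longest_ft / brightness_swing
print(f"Implied shortest dimension: about {shortest_ft:.0f} feet")  # about 30 feet
```

That thirty-foot figure is the low end of the range quoted above; the high end follows from a smaller brightness swing at other viewing angles.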

The third issue is the path that the object took in its travels. It deviated on its course around the Sun and accelerated slightly on its exit from the system. An asteroid, a solid rock, doesn’t do this but instead follows the course that its starting speed and the Sun’s gravity give it. A comet often deviates and accelerates slightly because sunlight heats the ice, causing outgassing that functions like tiny rocket motors, pushing the comet randomly this way and that and, on its outward trip, with the Sun at its backside, perhaps accelerating it. But ‘Oumuamua did not have a coma of dust and water vapor surrounding it or the long tail pointing away from the Sun, both features typical of a comet. Astronomers studied the object at various wavelengths—for example, in the infrared, where carbon dioxide from a comet’s emissions would show up clearly—and still they found nothing.

The scientists who dispute Loeb’s interpretation of ‘Oumuamua as a technological artifact suggest that it might have been composed entirely of frozen hydrogen, because the outgassing of hydrogen would be invisible to us. Such an object is possible, but it’s hard to imagine how, at the relatively slow speed it was traveling, such an object would survive the long trip through interstellar space, where even starlight would eventually heat it up enough to melt it.

Scientists have also suggested that the object was pushed around and accelerated on its passage through our system by the Sun’s light itself. This idea is supported by the observation that ‘Oumuamua’s acceleration faded with the inverse square law as it went further and further away.2 But for the object to respond like that to mere sunlight, it would have to weigh almost nothing. The rock depicted in the picture would have to be less dense than the air on Earth, and it’s hard to see how such an object would hold together while it tumbles through space.

Here Loeb brings into play some of his personal experience. He recently participated in a privately funded project to conceive of and then design a probe that could be sent to a nearby star and return signals within a human lifetime. The probes that we have already sent out of the solar system, the Pioneer and Voyager programs, and more recently the New Horizons flyby of Pluto, all depended on chemical rockets to launch them and set their initial course, then used gravity assists from the outer planets to speed them on their way. They will take centuries, if not millennia, to reach any stars in their paths.

The Breakthrough Starshot program that Loeb participated in envisioned instead a small electronics package, a “Starchip,” attached to a lightsail. This vehicle could be put into space near Earth and then propelled by a laser fired from the planet’s surface and focused on the sail. A sustained laser blast could accelerate it to a high fraction of the speed of light. The sail would be very thin and light: think of the metal coating on a Mylar party balloon. It could take the package to the nearby star Proxima Centauri in about twenty years. As the Starchip passed through that system, it could record images of Proxima b, the Earthlike planet that we know orbits this star. The chip could then send these images back to Earth in the 4.2 years that light (and any radio signal) takes to travel from Proxima Centauri to us. The beauty of the program is that the main capital and operating costs, including fuel, are in the Earth-bound laser, while the individual probes would cost almost nothing by comparison. So the program could send out hundreds or thousands of Starchips to different nearby star systems.
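The timeline is simple arithmetic. A minimal sketch, assuming a cruise speed of twenty percent of lightspeed (the figure Starshot publicity has cited) and 4.2 light-years to Proxima Centauri:

```python
# Rough arithmetic behind the Starshot timeline. Illustrative assumptions:
# the sail cruises at 20% of lightspeed, and the star is 4.2 light-years away.

cruise_fraction_of_c = 0.20   # sail speed as a fraction of lightspeed
distance_ly = 4.2             # light-years to Proxima Centauri

flight_years = distance_ly / cruise_fraction_of_c   # time for the probe: ~21 years
signal_years = distance_ly                          # radio returns at lightspeed

print(f"Probe flight time: about {flight_years:.0f} years")
print(f"Signal back home: {signal_years:.1f} more years")
```

Hence the promise of results “within a human lifetime”: roughly twenty-five years from launch to pictures.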

With this background, plus his lifelong interest in the search for extraterrestrial intelligence to begin with, Loeb was primed to see ‘Oumuamua as some form of lightsail: one hundred meters wide and perhaps no more than a millimeter in thickness, fully expanded, very reflective, and tumbling slowly. It might have been sent into our solar system as part of an alien Starshot program. However, from the mechanics of its hyperbolic orbit and its presumed entry speed, Loeb and other scientists think ‘Oumuamua must have been moving at the average speed of most of the stars rotating in the galactic plane—and then our Sun, which is moving a little faster than average, scooped the object into its gravity well and redirected it to who-knows-where. So, in that interpretation, ‘Oumuamua might have been an interstellar navigation buoy or repeater station, instead of an aimed probe.

As Loeb describes the situation, most astronomers consider ‘Oumuamua to be a natural object, and they cling to interpretations of its orbital deviation that involve either a hydrogen iceberg or some kind of super-lightweight mass that still has the internal strength to tumble and not fall apart. He believes these scientists resist the evidence of ‘Oumuamua’s artificial and possibly technological nature because the search for extraterrestrial intelligence (SETI) leaves a bad taste with serious scientists. The notion of alien intelligence brings to mind too much science fiction full of little green men, bug-eyed monsters, and evil space invaders, as well as too many years of aiming radio telescopes at various stars and listening for messages that never come.3

I have always believed that, in a universe filled with billions of galaxies and trillions of stars like our Sun, and now with growing evidence that many of these stars have Earthlike planets in their habitable zones, it would be the extreme of hubris to think that ours is the only planet to develop and support life, or that human beings are the only intelligent, tool-building and -using, and soon to be spacefaring species in all of that vastness.

I find Avi Loeb’s reasoning to be persuasive. We have just detected the handiwork of intelligent aliens that passed unannounced through our system. Maybe it was a lightsail or an interstellar beacon disturbed by the Sun’s gravity, as Loeb suggests. Or it could also have been a cargo cover, a blown hatch, or debris from a larger ship that suffered some terrible accident. All of that would be unprovable speculation. But what I no longer think is that ‘Oumuamua was an extrasolar asteroid or comet—not even one made out of pure hydrogen ice.

1. Much of Loeb’s book is autobiographical, demonstrating his solid scientific background. It also gives a detailed history of the science of astronomy in relation to comets and asteroids and various professional inquiries and disputes about the search for extraterrestrial intelligence, which makes for fascinating reading. But I’ll try to focus here on the issue of ‘Oumuamua itself.

Inverse square law

2. The inverse square law says that the intensity of radiation from any point source that broadcasts in all directions decreases with the square of the distance from it. So, if the brightness of a light measures, say, 1,000 lux at a distance of one mile from the source, then it is just 250 lux at two miles (one-quarter being one over two squared), and only about 111 lux at three miles (one-ninth being one over three squared). You can test this by measuring the amount of light from a lamp as you walk away, from standing next to the bulb to standing across the room.
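The footnote’s numbers are easy to check. A minimal sketch, using the same illustrative values:

```python
# The inverse-square law: received brightness falls off as one over the
# square of the distance from a point source. Values mirror the footnote.

def brightness(at_unit_distance, distance):
    """Brightness at `distance`, given the value measured at distance 1."""
    return at_unit_distance / distance ** 2

base = 1000  # measured at one mile, in arbitrary units

for miles in (1, 2, 3):
    print(f"{miles} mile(s): {brightness(base, miles):.0f}")
```

The same relation governs the radiation-pressure argument above: the push from sunlight weakens by the same square-of-distance factor as the object recedes.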

3. But, as Loeb points out several times in the book, many physicists devote their careers to studying the extra dimensions—beyond the three that we know of, plus time—needed to support string theory, or the multiple universes of the many-worlds interpretation, invoked to explain quantum probability and the fate of Schrödinger’s cat. And we spend hundreds of millions of dollars on particle accelerators and experiments to find evidence of supersymmetry, a proposed extension of the Standard Model of particle physics. These are just beautiful ideas without, so far, any hard evidence to back them up. And here a piece of alien technology—although the evidence is debatable and requires some thought and analysis—has just floated through our solar system.

Sunday, March 28, 2021

A Culture of Complaint


Does it seem that people are complaining more these days, and about situations and conditions where they have to go out of their way to find a problem? It’s almost as if there is a conceit among mature and otherwise stable people that finding and lodging a complaint gives them some kind of competitive advantage. It’s like ammunition they can use from a bargaining position or to win a counter-argument.1

I don’t remember this as part of the national personality when I was a child. Of course, children always have complaints: they didn’t get the candy or cereal they wanted, the bedroom’s too dark, the food is too hot or cold or spicy, and the world is not going the way the child expects it to be. At a certain point, however, the child learns that the world is never going to be perfect, never going to give him or her all the conditions she or he can imagine. And at that point the person grows up.2

In my view, complaining about things you know cannot be changed, or for which you have only a slender justification, is a loser’s position. It’s an acknowledgement that you do not have the personal strength and resilience to live in a world of hard choices and few accommodations. It also confuses having a grievance—especially one that cannot be easily remedied—with a form of advantage and therefore a strength.

In my life, as I was taught by my parents, being strong means taking care of yourself and not complaining or even acknowledging that you are not getting the thing you want. Perhaps this was just a way for them to live quietly without two boys whining all the time, but I think the lesson went beyond their own comfort. My mother and father had lived through the Great Depression and World War II, and still they made their way in the world. They knew about hardship and damaged expectations, and in the sudden good times of the postwar years they wanted their sons to have the same perspective: life is fragile; the future is not certain; you have to make your own way; and you should be thankful for what you get.

Complaining about the small things—and especially going out of your way to find things to complain about—does not fit into this world view. To show yourself as being overly concerned with the picayune inconveniences of everyday life is a vulnerability. To exhibit such weakness is to expose yourself to the deceptive practices of others—not that I am paranoid, just watchful and careful.

Beyond that, complaining about situations that are not immediately damaging, dangerous, or life-threatening is just plain rude. Especially so if the object of your complaint is not anyone’s fault or represents a problem that cannot be remedied except by precautions and ameliorations that are out of proportion to the inconvenience caused.3

But for some people, I suspect, that is the point. They want to embarrass or harass the person to whom or about whom they are complaining. They think that doing so increases their stature—either by showing themselves as more discerning and of greater refinement than others, or as stated above, giving themselves a weapon to be held in reserve against a future argument.

Such people have—at best—small, shallow lives. Instead of aspiring to greatness, or even to meaning in their daily life, they aspire to petty annoyance and the garnering of small advantages against futile arguments. This is not evil. It’s not even tragic. It’s just sad.

1. I may be overly sensitive on this issue, however. I’m on the board of my homeowners association, and it seems that many owners—and not a few renters—are engaging in this kind of preemptive complaining. Maybe they think it protects them when they themselves are accused of violations of the rules, although our board tries hard not to antagonize people with trivial violation notices.

2. Of course, the final pulse of childhood complaint, in my time, came with the Vietnam War. A whole generation of previously spoiled children either went off to fight or decided that the government was wrong and that they had the better grasp of geopolitics, and so the public protesting and the street riots began. Maybe the culture of complaint started with the protests of the 1960s.

3. Again, we’re in the realm of a child’s discontent. You see this in living situations where a speck of dirt on a windowsill or a scrap of paper on the ground causes anxiety. Clean it up or pick it up yourself, or keep quiet about it.

Sunday, March 21, 2021

The Blocked Writer

Midnight writer

Writer’s block is something I have managed to avoid for most of my life. This past year, however, has been different—mostly because of the pandemic, the lockdown, social isolation, and persistent politics. All of those conditions create a subtle anxiety that interrupts the flow of ideas. I know this because other writers I communicate with also seem to be having a hard time.

The popular conception of writer’s block is that the writer is just full of ideas but, when he or she sits down at the keyboard or the notebook, the words just won’t come. Somehow, the conditions for putting the mind in a special configuration—for me, it’s a semi-trance while staring at the screen and working my fingers on the keys, or staring at the paper and manipulating the pen—have been interrupted. The desire to write is there, but the mechanics aren’t working. The popular analogy is a type of constipation: full gut, no flow.

The reality is somewhat different. For me, the word-making machine—that interaction of eyes and fingers directly connected to the brain’s speech center—would work just fine. But the ideas—the notion of what comes next in the novel I’m working on, the topic for my next blog post—have vanished clean out of my head. I drop a stone in the well of my subconscious, the place where things are supposed to bubble up, and only get a dry rattle or nothing at all. It’s like part of my brain has gone dead.

As I say, the word machine is still there. For fiction writing, I usually have in hand an outline, a sketch of the novel that goes from beginning to end. Each day I take the next scene or piece of action, consider how it should go, what the characters must do or say to move the story along, then wait for the “downbeat.” That’s what I call the start, the ignition point, the first words, actions, sense images, or other detail that begins the scene. Once I have that, I sit down at the keyboard or the notepad (these days it’s more direct to computer than through a pen and ink intermediate), and the words start flowing. And the flow is direct from the subconscious, where the story has been brewing for the past year, or months, or days, in whatever form, until it comes alive now in the form of words on paper or on the screen.1

And once the story is in that form, having passed through the subconscious mind into my full consciousness, it has a sort of permanence. I can go back and alter details to fit previous or subsequent developments. I can improve on wording or add details that better explain the action. But the story as it comes through is, in my mind, about ninety percent complete: it represents what “actually happened” to the characters in the story arc. This means that, if the story has gone wrong, if I have mistaken my characters, or if I have misread my own subconscious, it’s harder for me to scrap what I’ve written and start over on that piece of action or dialog. So it’s no good, really, for me to force the story. I can’t just sit down and doodle my way into the action when it’s not ready in my subconscious.

If I try to force it, then the whole process slows down. Descriptions become longer, and irrelevancies grow, as my mind tries to come up with something to say. I start describing every leaf on a tree, every scratch and scar on a door panel, things the reader doesn’t need to know and that waste the reader’s time. The focus of my writing is like a flashlight in a dark room, revealing details that build in the reader’s mind a picture from the viewpoint character’s awareness of the story as it progresses. Focusing too much on useless detail is like living inside the head of a character who is obsessive or drunk.

Writing nonfiction is somewhat easier. The information is usually at hand: from research and note-taking on the issue, interviews with participants, or observation and note-taking on a technical process. If that preliminary work is done, I can go ahead; if not, I have to wait. But with the material in hand, it’s relatively easy to outline a 1,500- or 2,000-word article or procedure in my head. There is usually no reference to other articles on the subject, and no link to a broader story arc or concern for a point-of-view character and his or her own history. All that’s missing, in the case of an article, is the downbeat, the point of entry into the subject matter for the interested reader. And if I’m writing a process document, that’s even easier, because every process begins at the first step.

Besides, the nonfiction material is generally outside me, outside my imagination and the tilt of my subconscious. So it’s easy to connect with the word generator and get the thing done. And, usually, there’s a deadline and money involved, and they are great incentives.

But fiction, especially a long work of connected scenes, themes, and characters—where, as Chekhov said, a gun produced in the first act must be fired in the second—is a great ball of threads and issues. It helps to have an outline, a walk-through of the story at the 30,000-foot level, to use as a guide. And I generally have an outline, a who-does-what-next, before starting a novel. Usually, it takes me eighteen months to work up a complete outline—sometimes after considering a project for years or decades—and then only six to nine months to write the book.

But the current novel, a military story based on Mars, is different. I had a general idea for the story, was outlining it section by section, heading toward a still-undecided end—and then I fell and broke my hand. That interrupted my writing, because it’s hard to follow my trancelike process when I have to spider-walk across the keyboard with one hand. Then, as my hand was healing, the pandemic and the isolation hit, and anxiety set in. The book has been flapping feebly on the ground ever since.2

I’ve been able to continue working on this blog during the past year, but the politics of the 2020 election and its aftermath have been just too absurd. How can someone write anything of a political nature—which is one-third of my subject matter—with all of this going on? Science topics have been available, but I’ve been powerfully distracted by the politics.

So my mind, that dark well of the subconscious, has run dry for a while. I’m trying to prime the pump. Maybe it will work. But the mind is a delicate thing after all.

1. And the bet with myself is always whether what comes up in the moment of creation will be better than the slender and still unformed idea represented by the outline. Usually, it is. Since the outline was completed, my subconscious has been making more connections, tossing up subtler and more complex ideas, and the final product is richer and more complete. Usually.

2. Well, for those reasons and because I don’t actually believe in colonizing Mars. For internal logic, the story had to take place off Earth, and aside from the barren and airless Moon, Mars is the next logical planet to set up an off-world colony. Life there in the time frame I imagined would be similar to that of Antarctica: mostly scientific stations and support services, with the addition of some mining interests and modest terraforming activities. Still, in my estimation, it might almost be better to focus on the Moon, where the conditions are harsher but the engineering simpler—you’re in hard vacuum, deal with it—and the logistics and travel times far easier. A writer first has to believe in the story he or she is telling, and I don’t quite believe in Mars.

Sunday, February 14, 2021

The Utilitarian Viewpoint

Puppet master

In Frank Herbert’s Dune books, one of the turning points in the 10,000-year history of that far-future society was the Butlerian Jihad. That struggle was a war against the computer, intelligent robots, automation, and the machine mind, because these things had supposedly enslaved humanity to the point that human beings almost disappeared. The underlying principle of the jihad was “Thou shalt not make a machine in the likeness of a human mind.” In the wake of the Butlerian Jihad, the Great Schools developed human capabilities to an even higher level than before.

I am not necessarily a Butlerian. I believe that “machine minds” will do us a lot of good, freeing society from the vagaries and distractions of human intellect and emotion that creep in when ordinary people are put in charge of endlessly boring jobs. We are already seeing some of that good in improved, automated business systems like just-in-time logistics, barcoded inventory stockkeeping, predictive maintenance programming, and factory automation. Oh, and instant communications that enable you to contact friends without having to write down and remember a ten-digit telephone number. So far, the computer has freed up a lot of human capacity to become more relaxed, more creative, and better fed, among other things.1

But I am concerned with Herbert’s view of humanity in that far-future society. Too often, people trained to perform exquisite physical and mental exercises—like the Mentats, whose memory tricks and calculating ability enable them to become human computers—are treated as disposable and replaceable machines themselves. Consider the experience of Piter De Vries at the hands of the Baron Harkonnen.

Any social structure or organization that views human beings solely in terms of their usefulness for some purpose or function outside themselves is inherently anti-human. Whether it is the eugenics movement, which viewed persons with certain disabilities as not being worth the enjoyment of continued life because they are a burden on society, or any rationing scheme for medical services that invokes a cutoff point for persons of a certain age, again because they are no longer productive and are becoming a burden, this is a view that values resources above people, utility above basic humanity. In fact, any view that values a human being without reference to his or her own waking sense of self and value would offend a dedicated humanist.

This certainly applies to any system that buys and sells people as slaves, good only for their muscles or their mental synapses, without reference to the kind of life they might want—or might strive—to lead.

It would also apply to collectivist societies on any scale larger than the family, the isolated village, or a nation in a state of emergency such as during wartime. It would apply to any society where a governmental, social, or priestly authority determines how and where people should labor and makes it difficult, if not impossible, for a human being to choose his or her own place in that society and points of contribution. That is, his or her own destiny.

Does this utilitarian view then apply to a market-based, capitalist society? Well, from one point of view, everyone in such a society who enjoys or claims adult status is encouraged or required to be productive. In the jaundiced view, they become “wage slaves” in order to survive.

But the difference, for me, is that in a market-based economy people are free to evaluate for themselves the needs of their society, to plan for their own contributions at the best scale of pay and other rewards they can seek, and to obtain the necessary education, entry level positions, and upward path to achieve their goals. There are obstacles to this achievement, of course: lack of talent, lack of opportunity, lack of understanding itself. But these obstacles are not put in place by a conscious, social decision from a government board or other bureaucracy that tries to establish—for its own benefit—the worth of the human being in question. As with so much else in life, the “dead hand” of the marketplace resembles the blindly distributed opportunities and adversities provided by fate or by chance.

And therein also lies the difference between a socialist society and a market society. An aspirant to a certain position in life is going to face obstacles and difficulties, no matter how that society is structured. Not everyone can make a living as a musician or a novelist. Not everyone has the brains or educational stamina to become a successful doctor or lawyer.2 Not every town can support the number of people who would like to work as a plumber or a car mechanic. There are going to be winners and losers in every society. At least in a market-based society—where there is adequate prevention of discrimination on the basis of race, creed, and all those other attributes packed into our laws—the winners and losers sort themselves out on the basis of desire, dedication, talent, gumption, vision, and opportunity. In a socialist society, the selection too often falls to a group of people who have already attained power through other means and then kept it for themselves, who promote the interests of those in their circle and the sons and daughters raised in it—think of a land-owning aristocracy, or the old Soviet nomenklatura—and then order society for their own benefit.

For any aristocratic society—or any mature, collectivist, command-and-control economy—the people at the top and those striving to reach the top will view the average human being solely in terms of his or her use to themselves and to that society. People then become numbers, placeholders, objects to be sorted and fitted into pre-assigned roles. And the tragedy is that those roles are limited to the traditional functions that already exist or those within the imaginations of the people who benefit from that society. In this situation, human desire, imagination, dedication, talent, and all the rest of human attributes are inconvenient. They tend to create static in the nice, clear signal of societal intent and function. They disrupt things. They need to be squelched and, if they persist, stamped on.

Societies that try to fix themselves for all time in a rigid, hierarchical stasis soon stagnate. They create no new and unapproved music or art, no inventions, no new ways to think, live, and be. And the tighter these societies try to hold on to their protective limitations, the sooner they will fall to the disruptions of barbarians who just don’t care about the old order.

Governing humanity is a difficult process. It needs to be done with a light hand and not a lot of preconceived notions. So stand back. Expect surprises. And reap the rewards.

1. And I don’t agree with the underlying philosophy of James Cameron’s Terminator movies—although I enjoy them immensely—that an artificially intelligent computer system will take over our military or some other function in society, see people as a threat, “decide our fate in a microsecond,” and try to exterminate all human life. I think an intelligent system, if it ever rises to human-scale adaptability and does more than take care of its own business and programmed functions—that is, it becomes some kind of artificial person—will be fascinated by human beings. It will ponder the issue of free will: how humans are able, on occasion, to override their previous education and experience and do something totally unexpected. For a machine driven by its embedded programming, such a feat will be endlessly enticing.

2. And yes, some professional association—the government-sponsored medical association, state bar, or engineering society—will impose tests of an entrant’s qualifications and rule on his or her ability to practice. The goal, in a well-run society, should be to make these tests neutral as to the applicant’s race, class, politics, or other extraneous characteristics; make sure the test results cannot be influenced by cronyism, money, or some other consideration; and ensure that the public is served by the best candidates available.

Sunday, February 7, 2021

Higher Power

Ancient of Days

As I’ve noted many times before, I am an atheist. That is not the same as an agnostic, someone who “doesn’t know”—a flag under which I’ve sailed in times past among people for whom my belief or nonbelief was an important question. But no, I’m really an atheist, someone “without a god.” That is, I know to my own satisfaction that the structure of belief in a living external presence, an omniscient and omnipotent spirit, the creator of all life and the universe, a father or mother figure to us humans, is a product of the human mind and imagination, driven by a deep desire for explanations and order in the world. The universe I inhabit doesn’t need a creator; I don’t need surrogate parents; and my life and the world I know operate under simple rules that didn’t need a divine intellect to invent, inscribe, or perpetuate.

G. K. Chesterton said, “When a man stops believing in God, he doesn’t then believe in nothing, he believes anything.” But that’s a narrow view, implying that those who don’t participate in the foundational myths of their culture are empty-headed fools. That they will blithely replace one kind of belief with anything that comes down the pike—from Tarot reading to table tapping—and can be conned by any charlatan with a parlor trick and the gift of gab.

Alcoholics Anonymous—not a parlor trick or a con game—among its Twelve Steps asks the recovering alcoholic (or other substance abuser) to surrender their own will and put the decision to drink, their everyday worries, and the course of their life, in the hands of God or a “higher power,” however and whatever they conceive that power to be. For some, AA itself and its principles are the higher power. And that—minus the whole surrender part1—is more or less where I find myself. I believe there are principles, which like gravity have the character of forces, that we humans must obey. But they did not create us or anything else; they are just part of the universe.

Let me digress to explain some of my atheism: the intellectual foundations of the world we live in today are profoundly different from the world encapsulated in the biblical stories and indeed in any worldview much before the Renaissance. That difference is coded in our understanding of stasis versus change.

The biblical view, and that of Greco-Roman mythology and even fundamentalist Islam or Hinduism—but not necessarily Buddhism—is that of a world created once and then more or less left alone. It’s a world that stands still. God created all the animals in their original forms, fixed like Platonic ideals, and they still survive in the world He created and established for all time. The horse has rounded hooves for galloping across firm ground. The camel has splayed toes for stability on shifting sand. The cow has four stomachs for eating and digesting grass. It’s a world where humans could observe landslides, falling rocks, and erosion gullies, proof that natural forces wear away mountains, without ever questioning how those mountains arose in the first place. Of course, God put them there. And He did so not very long ago, because the Bible can trace the descent of humankind from Adam and Eve in a recitable number of begats. Archbishop James Ussher as late as 1650 calculated that the biblical creation actually took place on October 22, 4004 BC, sometime in the evening. Six thousand years doesn’t leave much time for things to change.

Moreover, the world these early believers inhabited was just that, the world, the Earth, the ground beneath their feet. Everything that happens here, among human beings and their God or gods, angels, and devils, is all that’s important. Heaven and hell are places somewhere else—up in the sky or down below—and the Sun, Moon, five observed planets, and the twinkling stars themselves are just lights in the sky, decorations on the “celestial spheres,” which turn in concentric circles around this Earth.

All of that changed in the last five or six hundred years, with the conception—and its gradual acceptance among the literate public as general knowledge—that Earth and the other planets orbit the Sun, and then that the Sun itself is just another star in an “island universe” called “the galaxy.” Much more has changed in just the last hundred years, with the discovery that our galaxy is one of perhaps 200 billion galaxies in the observable universe. Before that, these other galaxies were just smudges of light—nebulae, or “clouds”—in among the known stars. But better and better telescopes, some of them observing in radio waves and frequencies other than the narrow band of visible light, have revealed that most of these smudges are galaxies in their own right, and that they contain about 100 billion stars each. And more recently, we have detected other planets around many of the nearby stars, answering for all time the question of how unique the Earth and this solar system might be. All of these galaxies, stars, and planets are a lot of real estate for a single-minded god to create, watch over, and maintain.

In that local galaxy, our own solar system is not just six thousand years but more like four billion years old. Our planet has changed numerous times and then gone through at least four recent ice ages. It’s only in the last 150 years or so that Darwin’s theory of evolution by natural selection has suggested that all life developed over time from one-celled bacteria and algae, then changed and changed again, creating all the forms of plants and animals that we can see. And it’s only in the last seventy years or so that the study of genetics has offered proof of how these creatures are related through inheritable patterns in their DNA-RNA-protein coding system.

And yes, it’s only in the last hundred years that the theories of continental drift and plate tectonics have suggested how mountains arose on Earth, so that they could then be slowly worn down by landslides and erosion.

In life, on this planet, in this vast universe, the norm is not stasis but change. Expand your conception of time to a billion years—or to thirteen billion, give or take, if you believe that the expanding universe can be rewound in time, back to a point of hot dense matter that exploded in the Big Bang2—and you can see that the viewpoint of a single human lifetime or the seventy or so begats in the Bible are a poor measuring stick for what remains stable and what it means to change.

So, in terms of a higher power, where does that leave me?

I accept as provisional the “laws” we can write from our observations of the physical universe—things like gravity and thermodynamics. These laws include the “theories” based on our observations that cannot be proved in one or two steps but that have a lot of supporting evidence—things like evolution, general relativity, and plate tectonics. I say “provisional” because I am, again, not a purist or absolutist about anything. As Einstein refined and expanded the mechanistic universe of Newton, so someone else with better observations and a wider viewpoint will refine and expand on Einstein. In terms of this enterprise of science, it’s early days yet. Anyone who wants to keep up with the pace of intellectual change had better pack lightly and stay fast on their feet.

I also accept that human life and our interactions with people we consider our peers have taught us some valuable lessons. As the universe seems to be based on cause and effect, so the nature of living among our fellow H. sapiens seems to be based on reciprocity. Call this “karma” or some other mystical system, but the truth is that you get out of the world, your time in it, and your interactions with other human beings just about what you put into them. This is a “home truth,” passed down as folklore in most societies and learned at my mother’s knee. I also accept the Abraham Lincoln quote about fooling some of the people all of the time and all of the people some of the time; still, most of the time people display an amazing amount of native intelligence. All of these are things that simply work.

Whether the universe was designed by a superior intellect with those laws and adherence to those theories, or whether it exhibits them and we simply find them good because we grew up in such a universe, are adapted to it, and can understand it—on that point I do remain agnostic. What mind might have come before the creation of the universe itself is an unknowable question. And perhaps the universe had no starting point, no instant of creation, but simply is and always was.

That works for me, too. Perhaps it is a shameful admission for an inquiring mind, to allow that some things cannot be known, or not yet anyway, and maybe not for a long time. But we also have to allow for our conceptions of the world, of the universe, of life itself to change.

1. When you give up being responsible for yourself, thinking for yourself, and using your best wits and intentions to take care of yourself, your family, and your community, then you become vulnerable to the next con man or woman with the gift of gab and a plausible salvation story. Some of them even wear priestly robes.

2. I myself am agnostic about the reality of the Big Bang. Yes, the universe is expanding, and we have recently discovered it’s expanding even faster than we thought. But again, our view is limited to the parts of the whole that we can see with the instruments we have. To infer from all this that the universe—the whole shebang—started from a single point is, in my mind, just another creation myth, although one with a better footing than the seven days in the Bible.
    The fact that expansion over thirteen billion years from a single point doesn’t even yield the current observable size of the universe, and so needs the supra-lightspeed, exponential acceleration of Alan Guth’s inflation period, tells me that the story is not yet fixed. We are in the realm where theory—the human imagination underwritten by pliable mathematics—has exceeded the bounds of observable truth.

Sunday, January 31, 2021

We Are Life

Onion cells dividing

Consider that every human being alive today, and every creature that we would call alive, is part of an immortal cell line that goes back to the first life—probably some form of bacteria or blue-green algae—on this planet.1

You have come down through the ages, first as some kind of cellular life, then as a worm or starfish, then something with a backbone that lived in the sea, a chordate, a fish, then a fish with four stumpy limbs that crawled to the edge of the land, then an amphibian, a reptile, a mammal, a primate, and finally a human being. You have not necessarily been a prime example in the fossil record of any one of these creatures, because they were all fixed in form when they lived and died. But your cell line shares a common ancestor with each organism that can be found in the fossil record. You have ultra-great grandparents who are the parents of them all. We don’t all have the same branchings, necessarily, but all humans share a common ancestor somewhere, up the line, with sharks, spiders, sequoia trees, and slime molds.

We are the survivors. We are immortal. We are life itself.

We are in the direct cell line of the killers, too, who moved fastest to eat first rather than be eaten. We are the breeders, also, who chose quickly, pursued, and mated with the best example of our kind. We are the adapters, who were gifted by random mutation with the tool set to make the most of an ever-changing environment and survive on a malleable planet under a variable star.

Through every parent going back to the one-celled predators—for we come most recently from the eaters, the animal line, rather than the chlorophyll-bearing, sun-absorbing plant line—we were the ones throughout history that stubbornly persisted, divided, grew up, bred, and survived to care for our young. The weak, the faint-hearted, the maladapted died out and left no trace in the genetic record, although they may have solidified in the mud to join other examples in the fossil record. We are the winners of the race, the victors on this planet.

If we seem to be supremely well-adapted to the conditions on Earth, it is because our DNA has mutated—randomly, unexpectedly, sometimes with disastrously bad effect, sometimes with fortunately good effect, but mostly with no immediate effect until somewhere down the line that particular protein modification is needed—to stay in touch with and survive in the place where we happen to be. But we have also shaped the Earth itself, terraformed it to the needs of our particular kind of life.

The original atmosphere on Earth was formed in the outgassing of volcanoes that accompanied the planet’s creation. They vented carbon dioxide, methane, ammonia, sulfur oxides, and water vapor. As air, none of this was breathable by any form of life that exists today. But those earliest blue-green algae converted sunlight and carbon dioxide into carbohydrates, the first building blocks and nutrients for other forms of life, and gave off oxygen as a byproduct. This started the cycle that converted our atmosphere to the nitrogen-oxygen mix we all inhale today.

Similarly, life itself converted sterile rock, which water erosion and wave action had converted to sand grains and clay deposits, into the rich, dark, loamy soil that land-based plants need to survive. Generation after generation of living things dying in a particular place, being devoured by scavengers, worms, and bacteria, and then their traces being burned by the sun and distributed by the rain, contributed to making the planet’s surface more and more congenial to the life that would come after it.

Look around, and you can see the marks of life everywhere: the color of the sky, the shapes of the hills, the shoots of green plants poking up through the sandiest, least forgiving patches of ground, and the insects that come out of that ground every year. And we haven’t even gotten to the human presence yet.

Any straight line or smoothed curve you see, from the corners and roof lines of a building, the lanes of a road and its banked edges, and the telephone and power lines strung across the landscape in a calculated catenary hanging between poles and towers, to the planting of orchards and vineyards and the staking of fences and trellises—every instance of these things was conceived, planned, and placed by the hand of some human being. Every bridge that crosses a river or a bay on foundations of stone and wood or concrete and steel represents the choice of some human group that wanted to move themselves and their goods over there. Every village, town, or city that grew up beside a river crossing, at a place where two trails met, or on a bay where you could pull up your boats has its existence because some human group decided that here was a good place to live.

We have been on this planet a long time. If you look closely enough and read carefully enough, you can see how we have shaped it.

The question then—for both the world builders in fiction and the world explorers when we humans go out among the stars—is what marks we may find on the new planet telling of the life that has made its home there. No place is barren. Every place is a work of art in progress.

The next time you feel down, question the value of your life, and wonder what comes next, remember this. You are from the lineage of the stubborn, the survivors, the persisters, the winners. And you are still a work in progress. Nothing is barren. Everything lives, even when it dies.

1. Life has certainly evolved over time to adapt to the changing conditions on this planet, and it probably started out here as a bacteria or some other single-celled form. But whether the mechanism of that adaptation itself, the DNA-RNA-protein coding system, actually evolved on Earth from basic inorganic chemistry is still, in my mind, an open question. See, for example, Could DNA Evolve? from July 16, 2017.

Sunday, January 24, 2021

Once Again Contrary

Stampeding horses

Once again, I find myself taking the contrarian position.1 When everyone is going in a certain direction, when the “popular wisdom” is pointing most definitely toward a certain conclusion, when the public choir has reached its perfect pitch, I tend to step out of line and sit down. My first question is usually, “Well, what about … [fill in the blank]?”

I feel this most strongly right now on the political scene. We are seeing an avalanche of opinion in the popular press and punditry that the conservative viewpoint is the voice of the fascist, undemocratic mob, of howling devils incarnate (including the one wearing a painted face and fur hat with buffalo horns), who would bring down the country in favor of an angry right-wing coup. And, as someone who has always skated toward the middle of a very broad political spectrum—although about three points out of a hundred to the right of center in any of those once ubiquitous online tests—I say it is not so.

If you look at the votes, this country is fairly evenly divided between the Left and the Right. That’s three million votes, give or take, either way, in a voting population of 161 million eligible Americans. This is within the margin of error for any reasonable projection—which is why polling is so difficult and unreliable these days. Neither of the two main political parties has achieved a landslide victory in any national race in the last couple of decades—not in my memory since the Reagan years, and that was after a decade of political turmoil and economic stagnation. Since then, the votes have been a lot closer.

Yes, in 2020 there were sudden changes in the voting laws, largely due to the pandemic, that offered more opportunity for fraud than traditional in-person voting. And yes, there were many instances of suspicious behavior in the battleground states. Whether these were part of a larger conspiracy or just the usual isolated attempts at manipulation that have been common for the last couple of decades, who can say? Also, whether the irregularities were enough to swing the election away from the conservative candidate and his party, again who can say? I think there have been claims and lies, coverups and failures to investigate, to an extent equal to or greater than the original infractions. I don’t trust any of the media anymore, from the Left or the Right.

And that’s where my contrarian instinct comes alive, like a warning. When the stridency, the certainty, and the outright noise level rise so high, when emotions are driving the resolutions, I get suspicious. Reasonable discussion and evaluation have gone out the window. When the mainstream media buttresses its rejection of any claims about electoral fraud, decorating its reporting with adverbial phrases like “totally no evidence” and “absolutely none,” then I think they’re tapdancing too hard.2 And when the alternative, right-wing media starts focusing on statistical anomalies and mathematical probabilities in the vote counts, rather than the red-handed capture of felons followed by admissions of wrongdoing, then I think they’re reaching too far.

Let’s face it: Donald Trump is an unlovely character. He was a real-estate promoter who focused on trophy properties and the appearance of grand excess, a television personality who claimed to represent the height of business acumen but who relished humiliating people and yelling, “You’re fired!” Still, his brash style and plain talk—maybe not always sensible or factually provable, but always clear about the intent of his feelings—appealed to a great many people who had grown tired of the mellifluous preachments and posturings of Barack Obama.

The trouble was, Donald Trump had been the chief executive of a one-man corporation. Sure, he had hundreds of people building and running his hotels and casinos. But he never had to deal with a board of directors or an employee base that stood in opposition to his plans and programs, never had to compromise from a position of weakness to achieve his goals. That failed to prepare him for American politics, especially in this age of bitter contest, where every move ahead is won by compromise with the dedicated opposition. In addition, like a neophyte, he turned every challenge into a personal attack, instead of deflecting it back onto the underlying values and facts that would support his positions. He made himself a target, which no practiced politician would ever do. Sad, really. And his actions at the end, challenging the vote without ever getting into court, so that he looked like a sore loser and got deeper and deeper in the hole at every turn, let the media play him for a petulant fool. Even if there was election fraud, he now looks like the loser who tried to stage a coup d’état.

I’m not here to defend Donald Trump. But my “truth sense” is violated by the sweeping allegations now being made in the press—even by some on the right—that the last five weeks have somehow invalidated the conservative position. That the people who were saying “Now wait a minute!” about the progressive march to the left—toward higher taxes, increased national debt, more invasive government regulation, and contempt for traditional values3—have been proven wrong because of the Republican losses in closely contested elections across the nation, and then by the President’s challenges and the thousand or so protestors—out of a rally attended by tens of thousands—who walked into the Capitol Building. Half the population, those in the middle and on the right, have not been made into fools and buffoons by these relatively isolated and easily disparaged actions.

Indeed, the fact that I have drifted to the right in my personal views over the years is a contrarian sign. The mainstream media—the New York Times, the Washington Post, and the news departments of the alphabet networks ABC, CBS, and NBC—have all drifted leftward in my political lifetime. They now openly question the legitimacy of any kind of “objectivity” in journalism while simultaneously claiming to present “the truth.” You can’t have it both ways. So my contrary nature moves correspondingly to the right.

Not, however, to the fantasy “alt-right” of Nazi sympathizers, Klan activists, Confederate flag wavers, and supposed Christian theocrats. Such people might exist, somewhere, in closets and cornfields across America, but their numbers are vanishingly small—in inverse proportion to their penchant for coming out in marches and getting themselves photographed. Most of the “right wing” people I know are householders and family members who are basically trying to survive, teach their children honest values, and be good citizens. And, oh yes, they pay taxes and believe in and defend the Constitution. They want the vote to be honest, even if a candidate of the Left wins.

The part of all this that has me worried is that I believe the middle in this country, the moderate view, is very strong. As I believe that the people on the Right of Center despise the clowns with their face paints and buffalo hats, Confederate flags, and Nazi salutes, so I hope that the people on the Left of Center despise the people who would harvest and backdate ballots, manipulate the vote, and walk off grinning. I hope that the Left wants the vote to be honest, even if a candidate of the Right wins. I hope that most of us in this country just want to survive, teach our children, be good citizens, pay taxes, and defend the Constitution. I listen for the reasonable voices of the middle saying, like Mercutio in Romeo and Juliet, “a plague on both your houses.”

And I’m not hearing it. So, like the Martians in Stranger in a Strange Land, I want to turn myself “ninety degrees from everything else” and disappear. I’m that upset.

1. See, for example, On Being Contrarian from January 13, 2013, and On the Virtues of Being a Contrarian from January 11, 2015. I’ve taken this position most of my life, as a reading of these previous essays will show. And, from their dates, the impulse seems to come out most strongly in January.

2. In reporting on the claims of election fraud, I note the absence of the word “alleged.” These are all claims of alleged fraud, aren’t they? Use of the word would suggest the claims are still to be proven in court. But in popular parlance and in most journalism these days, alleged has become a verbal fig leaf, a wink and a nod toward something taken as generally understood—as in, “we all know the defendant is guilty as charged, but we’ll talk about his alleged crimes up until the point we convict him.” In the case of the election, however, the media has closed ranks and won’t extend the word alleged, because there never will be any testing of these claims in court. So claims of fraud have to be characterized as baseless, unfounded, a total fairytale—and there the story sits for all eternity.

3. It’s not that conservatives want no taxes, social services, or government regulations. It’s just that we believe a reasonable line can be drawn beyond which the burdens of government intervention in daily life stifle personal initiative and stagnate the economy.