Sunday, May 2, 2021

The Rising Curve

Compound steam engine
Steam turbine blades

The rates of increase of a slope and a curve have different mathematical properties. A steady slope grows arithmetically, adding one unit at a time (1, 2, 3, 4 …), while a curve grows by squaring or cubing (1, 4, 9, 16 …) or by repeated doubling (2, 4, 8, 16 …). A parabolic curve, at least the part that proceeds upward from its low point, is generated by the formula ax² + bx + c,1 and it can rise really fast. My contention here is that our technological advancement since about the 17th century has been on a parabolic curve rather than a slope.
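To see how differently these patterns grow, here is a minimal Python sketch (my own illustration, with arbitrary coefficients, not anything from the argument above) that prints the first few values of a linear, a quadratic, and a doubling sequence side by side:

    # A minimal illustration: linear growth (a steady slope), quadratic growth
    # (a parabola, a*x**2 + b*x + c), and exponential growth (repeated doubling).
    # The coefficients are arbitrary choices for the example.

    def linear(x):
        return x                      # 1, 2, 3, 4, ...

    def quadratic(x, a=1, b=0, c=0):
        return a * x**2 + b * x + c   # 1, 4, 9, 16, ... with a=1, b=c=0

    def exponential(x):
        return 2 ** x                 # 2, 4, 8, 16, ...

    for x in range(1, 6):
        print(x, linear(x), quadratic(x), exponential(x))

Run out far enough, the doubling sequence overtakes even the parabola, but both leave the steady slope far behind.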

In ancient times—think of Greece and Rome from about the 8th century BC, the date of the first Olympiad for the Greeks and of the mythical founding of their city for the Romans—there was technological advancement, but not even a slope. More of a snail’s pace. The Greeks had their mathematicians and natural and political philosophers, like Pythagoras and Aristotle, but aside from writing down complex formulas and important books, which probably only a fraction of the populace bothered to read, their works did not materially improve everyday life. The Greeks never united their peninsula politically, for all their concept of democracy, remaining stuck at the tribal and city-state level of conflict. And from one century to the next, they drank from the local wells, shat in the nearby latrines, and traveled roads that washed out every year with the spring floods. They built in marble the temples of their gods, but otherwise the average people lived in houses of wood and mud brick not much different from those of their predecessors in Homeric times, five centuries earlier.

The Romans did somewhat better, being short of actual philosophers but abounding in practical engineers. They developed a democratically based political and military system that united their peninsula and went on to conquer most of their known world. They built huge aqueducts to bring fresh water into their cities from distant springs, underground sewers to take away human wastes, and roads dug many layers deep into the ground that could reliably move goods—and armies—from one end of the empire to the other. They built temples and palaces in marble laid over brick but also invented a synthetic stone, concrete, that their engineers originally made from a volcanic ash known as “pozzolana.” Common people in the city lived in apartment blocks called insulae, or “islands.” They bathed regularly and made a civic virtue of the practice. Life was better under the Romans, but technological advancement was still glacially slow.

Rome at the fall of the Western Empire in the 5th century AD was technologically not much different from the Rome of Julius Caesar, five centuries earlier. And that fall—due largely to climate change and the ensuing barbarian migrations—plunged Europe into a Dark Age that saw little advancement in any of the arts, although we did get some practical technologies like the wheeled plow and the stirrup. Those, along with gunpowder adapted from Chinese fireworks and movable type, known earlier in China and Korea and developed independently by Gutenberg in the 15th century for printing Bibles, carried us through to the 16th century, the time of the Tudor reign in England or the Medici in Italy.

After that, technologically speaking, all hell broke loose.

Some might credit René Descartes and his inventions of analytic geometry and a systematic method of reasoning; or Isaac Newton and his invention of the calculus (also developed independently by Gottfried Leibniz in Germany) and his studies of gravity and optics; or Galileo and his work in physics and astronomy, grounded in observation and experiment. Intellectually, it was a fruitful century.

But from an exhibit that my late wife prepared at The Bancroft Library years ago, I learned that a more immediate change came about with the exploration of distant lands and when the European trading companies set up to exploit them began importing coffee and tea into the home market in the 17th century. Before then, people didn’t drink much water because of rampant contamination; so instead they drank fermented beverages—sweet wines, small beer, and ale—because alcohol helped kill the germs, although they didn’t think about it in those terms. So they would sip, sip, sip all day long, starting at breakfast, until everyone was half-plotzed all the time. But then along came coffee and tea, which were good for you because you had to boil the water to make them. Everyone brightened up and began thinking. The denizens of Lloyd’s Coffee House in London invented insurance companies to protect the sea trade, which required estimates of risk and probability, and that led to a whole new branch of mathematics and the spirit of investment banking.

Put together scientific investigation with the widespread availability of printed books and the clear minds to read them, and we’ve been on that rapidly rising parabolic curve ever since.

We are just over three hundred years from the first steam engine, patented in 1698 to draw water from flooded mines. In the time since then, the engine has gone from triple-expansion cylinders to turbine blades. And that is the least of our advances. This year, we are just two hundred years from the first primitive electric motor, built by Michael Faraday in 1821. And now we have motors both small and large driving everything from trains, elevators, and cars to vacuum cleaners and electric shavers.

In my lifetime, I have seen music go from analog grooves cut into vinyl disks and magnetic domains on paper tape to digital representations stored on a chip, and photography go from light-sensitive emulsions on film and paper to similar—but differently structured—digital sequences on chips. My electric typewriter—again driven by a small motor—has gone from impact-printing metal representations of the alphabet on a sheet of paper to storing different digital sequences on that same chip in my computer. All of this puts the stereo system, camera, and typewriter I lugged off to college fifty years ago into a single device that started out on my desktop, migrated to my laptop, then moved into my hand inside a smart phone, and now lives on my wrist instead of a watch. And the long-distance call I made every week from college to my parents at home was once a direct wire connection established by operators closing switches; it would now be a series of digitized packets sent out through the internet and assembled by computers at each end of the conversation. Gutenberg’s process for printing words on paper is now embodied in the photo-masking of electronic circuits on silicon chips. And we’re not done yet.

In 1943, the first programmable electronic computer, Colossus, went to work at Bletchley Park breaking German ciphers, an outgrowth of the codebreaking effort in which Alan Turing designed the machines that cracked the Enigma code for the Allies. In 2011—just 68 years later—that machine’s lineal descendant, IBM’s Watson, was playing, though not consistently dominating, Jeopardy!, the trivia game based on history, culture, geography, and sports and dependent on linguistic puzzles and grammatical inversions. While that was a stunt, similar “artificially intelligent” systems based on the Watson design are now being sold to businesses to analyze and streamline operations like maintenance cycles and supply chain deliveries. They will take the human element, with its vulnerability to inattention, imagination, and corruption, out of processes like contracting and medical diagnosis. Any job that involves routine manipulation of repetitive data by well-understood formulas is vulnerable to the AI revolution.2

Add in separate but related advances in materials, such as 3D printing—especially when they learn how to make metal-resin composites as strong as steel—and you get disruption in much of manufacturing, along with the global supply chain.3

Any theory of economic value that depends on human brawn—I’m looking at you, Marxists—or now even human brains is going to be defunct in another half century. That’s going to be bad news for countries that rely on huge populations of relatively unskilled hands to make the world’s goods, like China and India.

Intelligent computers are also able to do things that human beings either cannot do or do poorly and slowly. For example, in November 2020, Nature magazine reported on an AI that can predict and analyze the 3D shapes of proteins—that is, how they fold up from their original, DNA-coded amino acid sequences—almost as well as the best efforts of humans using x-ray crystallography. And this was just 20 years after the first sequencing of the human genome using supercomputers, and only 66 years after the first glimpse of the DNA molecule itself using x-ray crystallography. Knowing the structure and thereby the function of a protein from its DNA sequence is a big deal in the life sciences. It will take us far ahead in our understanding of the chemistry of life.

Ever since the 17th century, our technology has been riding a curve that gets steeper every year. And the progress is not going to slow down but only get faster, as every government, academic institution, and industrial leader invests more and more in what I call this “enterprise of science.” Anyone who reads the magazines Science and Nature can see the process at work every week.4 We all stand on the shoulders of giants. We stand on each other’s shoulders. We build and build our understanding with each advance and article.

This rate of increase might be slowed, marginally, by a global depression. We might be set back entirely by a nuclear war, which might revert our technological level, temporarily, to that of, say, the telegraph and the steam engine. But it will only be stopped, in my estimation, by an extinction event like an unavoidable asteroid or comet strike, and then so much of life on this planet would die out that we humans might not be in a position to care.

As to where the curve will lead … I don’t think even the best science philosophers or science fiction writers really know. Certainly, I don’t—and I’m supposed to write this stuff for a living. The next fifty years will take us in perhaps predictable directions, but after that the effects on human economics, culture, and society will create an exotic land that no Asimov, Bradbury, or Heinlein ever imagined. Fasten your seat belts, folks, it’s going to be a bumpy ride!

1. That’s a quadratic equation. And no, I don’t really understand the formula’s properties myself, having nearly flunked Algebra II.

2. But no, the computer won’t be a “little man in a silicon hat,” capable of straying far outside its structural programming to ape human intellect and emotions—much as I like to imagine with my ME stories. And it won’t be a global defense computer “deciding our fate in a microsecond” and declaring war on humanity.

3. It’s become a commonplace that the U.S. lost its steelmaking industry first to the Japanese, then to the Chinese, because they were more advanced, more efficient, and cheaper. Not quite. This country no longer makes the world’s supply of bulk steel for things like pipe, sheets, beams, and such. But so what? We are still the leader in specialty steels, formulations for a particular grade of hardness, tensile strength, rust resistance, or some other quality. Steelmaking in our hands has become exquisite chemistry rather than the bulk reduction of iron ore.

4. For example, just this morning I read the abstract of an article about adapting the ancient art of origami to create inflatable, self-supporting structures that could be used for disaster relief. I read and I skim these magazines every week. And frankly, some of the articles, even their titles, are so full of references to exotic particles, or proteins, or niches of mathematics and physics that I can only guess as to their subject matter, let alone understand their importance or relation to everyday life.

Sunday, April 25, 2021

Understanding Alien Psychology

Borg Queen

I have been thinking and blogging about the potential for finding and understanding aliens a lot recently, ever since reading Avi Loeb’s book on the interstellar object ‘Oumuamua, Extraterrestrial: The First Sign of Intelligent Life Beyond Earth. Now I am heading into realms that are totally unknowable—except from the viewpoint of what we know on Earth. But bear with me …

First off, I am not too interested—well, very, but not for the purpose of this meditation—in simply finding signs of life. We’ve seen things that could be confused with fossilized cells in the surface geology of Mars, and we suspect we might have found gases that could only be created by life-as-we-know-it in the atmosphere of Venus. When we get to other planets, both in this solar system and around other stars, we may well find chemical reactions and physical structures that we, from inside the realms of earthly biology and human understanding, define as “life.” Some of it may be intelligent but a lot of it, like most of the living forms on Earth, will not be what we choose to call “intelligent” or even “sentient.” Slime molds, for instance—honking huge single cells with eukaryotic nuclei—can move toward food and away from irritants in a fashion that seems to be intelligent or at least resembles neural networking.1 But it’s not going to build a rocket and come visit us.

When I think about aliens, I imagine the kind that will leave their planet and come out among the stars, as so much of Western-civilized humanity apparently hopes to do one day. And until we go out there, we’ll just have to wait for them to come to us.

So, first question. Will they look like us? Or even come close—like the various humanoid species that populate a Star Trek episode? I don’t believe it. As Carl Sagan once said, we’d have more success mating with a petunia than with an extraterrestrial lifeform.2

Earth has a long history of large, active lifeforms that might have developed intelligence but as far as we know did not. The dinosaurs come to mind: the family Tyrannosauridae and their cousins were bipedal, oxygen-breathing hunters and perhaps also scavengers, and probably—maybe—at the top of their food chain. But we have no evidence that they exhibited any real intelligence greater than that of a lion or house cat, wolf or dog, or even a shark. And yet the dinosaurs’ distant progeny, the Corvidae family—crows and ravens—as well as many other species of birds have a kind of intelligence we cannot explain. Even octopi—what? a mollusk?—exhibit a high level of intelligence. So size, shape, and mammalian ancestry are not necessarily prerequisites for intelligence. Still, none of these animals from Earth and its history is going to build a radio or a rocket anytime soon.

But the examples from Earth also suggest that we cannot expect to find our own kind of intelligence out among the stars, even if it wears an unexpected shape or inhabits an environment—like the earthly ocean of octopi and whales, or perhaps the liquid water under Europa’s ice, or the methane seas of Titan—in which humans don’t particularly thrive. Life on Earth did not, after all, start out on the land, although that’s probably the best place to build a radio or launch a rocket above the atmosphere.

One particular axis we are likely to encounter in alien psychology is that between the individual and the group. So far, on Earth, the sort of intelligence that is likely to expand to encompass curiosity, technology, and eventually space travel is fixed in individual entities. We humans are separate and complete persons inside our own bodies. We are socialized into groups, certainly, in which we can function for enhanced performance. But we do not become lost, stricken, enfeebled, and die when separated from our group—or at least not right away. And we see this pattern not only in human tribes but also in monkey troops, wolf packs, whale pods, cattle herds, and other social groupings.

Each of us knows or tries to find our place in the group, establish a niche where our capabilities and levels of aggression or empathy best fit, and seek comfort and contentment—or at least a subdued level of rebelliousness—in that placement. We are social animals. And that is the basis of all culture and intergenerational achievement. The fabled lone wolf, the mad scientist, the antisocial genius who works alone and keeps his notes in a secret code—such beings are of interest to us as fiction but they are not the creators of lasting culture or enduring civilizations. They build no great cathedrals, establish no great cities, lead no great social or political movements—and they don’t send rockets to the Moon.

So, we think, the kinds of intelligence we will find out among the stars will be like us in that: socialized individuals, each with his, her, or its own personality, preferences, anxieties, and dreams.

But we have another example on Earth to draw on: the hive mind. Whether in the beehive, the anthill, or the termite colony, the individual entities—the minds inside the separate bodies—are not really individuals as we humans understand the term. They are physically adapted to their tasks and place in the hive structure, and their minds are shaped—one might say innately programmed—to perform those tasks and not question their role. Even the queen is not a ruler or leader but simply the pampered sexual progenitor, the mother of them all, that ensures the colony’s survival and renewal.

Something of this was captured in the movie Star Trek: First Contact, where we are introduced to the Borg Queen (pictured nearby with Alice Krige playing the part). But although the Borg are a collective of mechanized humanoid lifeforms whose brains are electronically networked, they are not really a hive, and the queen is neither their mother nor their first member. The queen speaks with a voice and persona that can call up the collective mind but can also examine it, see its options and possible choices in context, contrast their existence with what she knows of humanity, and evaluate the Borg from the outside. She is more like a leader or first speaker than the sexual progenitor of the collective.

When we look to more imaginative literature on the idea of the hive as a society, the offerings are few. To my thinking, Frank Herbert has done some of the best work on this. His novel Hellstrom’s Hive examined what it would take to change human beings into the sort of social insects that could function most efficiently in the politically denuded world. And his novel The Green Brain imagined a hive of Amazonian insects that functioned as a single conscious entity, in the same way that the cells in our human bodies work together to create the reality—or perhaps it’s just the illusion—of a single person with independent will and desire.

We might encounter some variation of this colony structure, this collective intelligence that is not separated by the strands of individual personhood, out in the universe.

The question in my mind is whether this kind of intelligence is creative or merely reactive. A colony of honeybees or ants can adapt to its environment, find flowers or other foodstuffs when the weather is right, make its nest or hive nearby, and deal effectively with changes in environment and temperature, or else swarm to find a new nesting site. But can they only think and react to present and immediate needs? Could they eventually engineer changes in that environment? Could they look beyond the immediate locale and imagine ways to make it different? Could they look out at the stars and dream of visiting them? Or are they bound to the world as they know it, in a way that human societies are not?

Every group of socialized individuals is built from two kinds of members: the leaders, supported by their pack of generally submissive followers, and the potential outsiders, the rebellious youths and mad geniuses, who question the social order, its structure and purpose, and seek something new or just different. At least, that’s how the human tribe has functioned and flourished. That is how we broke the bonds of merely reacting to our hunter-gatherer environment, engineered a better life through agriculture, created written records to preserve intergenerational knowledge, adapted invention and technology to improve everyday life, and then looked outward to the stars.

In our experience, based on that group of socialized individuals, progress depends upon imperfections in communication, upon differences of opinion and individual dreams, upon disagreements and conflicts. These are the very things that the anthill or the beehive cannot survive. The resolution of these disruptions is never pretty and neat, and it’s never complete and finalized.

But without disruption and disquiet, you have the structured cooperation, the orderly processing and virtual stagnation, of the colony animal. That, or the brainless neural-net reactivity of the slime mold. And neither of them, I warrant, will be coming here anytime soon.

1. See, for example, Mycologist Explains How a Slime Mold Can Solve Mazes, from Wired.com in 2019.

2. But maybe we are connected after all, as in the panspermia hypothesis. See, for example, my meditation on The God Molecule from May 28, 2017.

Sunday, April 18, 2021

Understanding Alien Technology

Alien landing

Recently1 I reported on the interstellar object ‘Oumuamua (“oh-moo-ah-moo-ah”) and why astronomer Avi Loeb believes it to be a piece of alien technology instead of a wandering asteroid or comet. Because of his prior involvement in designing a project to send probes to a nearby star using lasers and lightsails, and because of ‘Oumuamua’s apparent similarity to one of these lightsails, Loeb accepts this as the possible explanation for an object of extremely light weight, flattened structure, and high reflectivity. He does, however, admit that this is only a comparison, and the object could certainly have other technological explanations.

While I accept his analysis of ‘Oumuamua from what we could detect at the time of its passing, I am powerfully reminded that we probably cannot understand or even guess at the nature of a piece of technology arriving from an advanced, spacefaring civilization. And I base this on the great technological divide that exists in our own recent history.

As I’ve written several times elsewhere, you could bring an educated first-century Roman forward in time to Europe in about the eighteenth century, and he could easily recognize most of what he saw. Styles and techniques would certainly be different in fabrics, clothing, and other everyday items like modern carriages, the tack of the horses pulling them, and the roads they drove on. He would have some trouble understanding the widespread availability of printed books but could easily grasp the principles involved, once they were explained to him.

Our ancient Roman would have a somewhat harder time with a flintlock rifle, because his era knew nothing of gunpowder and had no experience with small, contained explosions—although they were familiar with volcanic eruptions. But without going into the structure of atoms and the chemistry of molecular bonding through the trading and sharing of electrons, you could tell him the explosion was a kind of very rapid burning, and he could accept it. In fact, I suspect many people today without formal education understand most explosions as such. And as for the rest of eighteenth-century technology, the ancient Roman could be brought up to speed in an afternoon.

Now consider that same Roman brought into the world just two centuries later. Photography, recorded music—even in their analog versions, let alone modern digital transmission and storage—and other technologies we all have taken for granted since childhood, some of them since the dawn of the twentieth century, would be perplexing to him. Try explaining a light-sensitive film emulsion or the technique of recording sound waves on a vinyl disk, and you must first explain the physical wave properties of light and sound. Well, yes, start with ocean waves and work your way upward. And then go on to electricity and its relationship to lightning bolts. Then there are photons, radio waves, and the whole business of radio and television. Don’t forget electric circuits and transistors—the backbone of digital technology. Oh, and the steam engine and internal combustion, automobiles and airplanes. Not to mention microbes, cellular biology, evolution, and genetics.

It would take a couple of days just to catalog all the realms of science over which your ancient Roman temporally jumped. It would take several months of general science courses before he could even begin to understand the physics, chemistry, and other discoveries behind these technologies. Otherwise, it would all be magic involving either godlike or demon-inspired forces.2

If you doubt this, let’s try a thought experiment. Go to your computer, run a Google search on some topic—let’s say “Henry II of England”—and print out the results on your laser printer. Now get into your time machine and go back to a period after that Henry but before Gutenberg popularized printing and books in the mid-fifteenth century—say, the court of Henry III (reigned 1216 to 1272). Hand that printed page to any scholar or monk within reach. The monk would have the most experience of reading and writing, because he probably spent part of a day copying out the holy books. Let’s not ignore the fact that the English language has changed remarkably in diction, definitions, spelling, and orthography in the last seven hundred years, or that most of the people at court spoke a form of French by preference.

Your printed page would not be just a curiosity but practically indecipherable. The paper—at least of that uniformly high quality—did not exist in the West of that time. The fine and exact characters on the printed page would be unknown to people who dealt with handwriting, even with conscientiously practiced calligraphy and stone-carved inscriptions. And aside from the difference in language, the purpose of your printed page would be unguessable. Without a knowledge of the internet—whose technologies exceed even those we were trying to teach the ancient Roman—the listing and its references would be unimaginable. Even with an inkling of a world linked together with computers so that what any one of them knew could be known to any other, the functioning of a search engine like Google, and before it Magellan, Lycos, and AltaVista, let alone online services like AOL, would take more than a day to explain and demonstrate.

Without such a guide and insight, your paper printout would be gibberish. Lacking the context of a search among dispersed references, the “message” would be incomprehensible as to its meaning and uses. What is “https://” or “www” or “.com” or “.edu”? Even if a monk could read beyond the language difference, these usages would be hieroglyphs without a Rosetta stone to put them in context. By itself, the paper would be an unsolvable mystery.

And so, while I can agree with Avi Loeb that ‘Oumuamua is likely a piece of technology—or technological debris—from beyond our solar system and therefore a sign of intelligent extraterrestrial life, I don’t think we will understand what it was used for. The analogy of a lightsail is extrapolated from technologies that we know and understand. But the reality, to us, might seem like magic. And even if we sent a probe out to capture the object or tear off a piece of it, we would still probably be in the dark.

We would have to wait to meet the aliens themselves and, like our ancient Roman, open our minds and really listen to their explanations. And only on that day would we begin to understand.

1. See Proof of Alien Life from April 4, 2021.

2. Arthur C. Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.”

Sunday, April 11, 2021

Things Worth Believing

Total honesty

In the delightful movie from 2003, Secondhand Lions, a young boy is left for the summer in the company of his two granduncles, who may or may not have lived the swashbuckling lives suggested by their stories. The kernel of their existence, it seems, is embedded in the speech that one of the uncles gives him concerning “what every boy needs to know about being a man.” We never get the whole speech, but here is the part that’s given:

“Sometimes the things that may or may not be true are the things that a man needs to believe in the most: that people are basically good; that honor, courage, and virtue mean everything; that power and money, money and power mean nothing; that good always triumphs over evil; that love, true love, never dies … No matter if they’re true or not, a man should believe in those things because those are the things worth believing in.”

Wow!

We live in a cynical age, where most believe and repeat the thought that people left on their own recognizance are feckless and stupid, if not basically evil; where virtue is sneered at, courage is disparaged, and honor is a word out of the history books; where money and power are worshipped as basic goods to be obtained; where evil lurks in the heart of big corporations and/or big government, and these impersonal forces always win; where love is just another excuse for chasing after sex. In such an age, this speech by an old man should be held up to the light and examined, because it practically defines the idea of personal character.

We spend a lot of time these days trying to determine what is true, what is real, and too often what is useful. We forget that life happens in the moment. When a test of character comes at you, you cannot always be fretting about what may or may not be true. Usually, you can’t even know what’s true, or you don’t have the time to try to figure it out. In those moments that decide a person’s life, you just have to clench your jaw, set your mind, recall the things you actually believe in, and act as your better nature directs. And then you have to accept the consequences, come what may. Life is short. Character is all. And you never, or almost never, get a do-over.

So yes, sometimes you have to believe in things whether they are true or not, because they are necessary to good actions, proper choices, and happy outcomes. Also, because they are beautiful thoughts and will make you feel warm and secure.

But is this always the course to take? Should believing things, true or not, because these thoughts are worth believing, be the complete prescription for an examined life?1 I think that opens a door into outer darkness.

For example, the belief in a personal, omnipresent, and omniscient god—whether or not it’s true that a great being exists outside of human space and time and watches our every move—does have a tempering effect on society. People seem to function better when they believe they live in a spiritual panopticon,2 with someone, somewhere observing and judging their every action and holding them to a moral standard. It is also a beautiful thought that this universe has purpose, intention, meaning, and a conscious design; that life on this planet, especially human life, is more than just mindless growth, like bacteria or a tumor; that existence is more than circumstance, happenstance, and chaos; that someone, somewhere has a benevolent hand on the controls. As the 17th-century French philosopher Blaise Pascal suggested with his famous wager, that’s the way to bet.

But not everyone feels the need or perceives the active presence of a supreme being to watch over his or her actions and mete out punishment as necessary. Some of us have been raised in the humanist tradition, where reason and observed mechanisms of reciprocity and fair dealing govern our actions. And we are comfortable with the observations and hypotheses of scientific reasoning to determine what is actually going on in the universe, without the need for any guiding hand. So … is the concept of a benevolent, all-controlling, spiritual presence still something “worth believing” for these people?

For another example, the idea that human nature is perfectible—whether or not our actions and desires are partly informed by evolutionary biology, rather than a purely social construct that we can change at will—is an idea that attracts every generation of sociologists and political theorists. It is the beautiful thought that we, or some subset of human thinkers and activists, can create a paradise on Earth if only we can equalize human differences; eliminate the very human failings of greed and envy, anxieties about future security and personal advantage, and indeed all consciousness of self and family; and bring all humanity together by eliminating differences of opinion, the pursuit of private property and private enterprise, and adherence to national borders and national identity.3 This outcome would actually require rigid control of every aspect of life by the government or by a unified political party. But in the thinking and telling of these dreamers, the government itself withers away, people just become selfless and “good,” and all the turning points of human history—the crowning of kings, the wars of conflict and conquest, the disruptions of philosophical change and technological invention, the fluctuations of drought and flood, the surge and fade of the business cycle—all disappear into an endless, timeless human paradise.

But some of us value our own thoughts, ideals, and values, and we are not willing to give them up in the name of a presumed harmony. We value our freedom of action, while respecting the freedoms and independent agency of others, even if those freedoms lead to occasional conflicts and transient unhappiness. We love and strive for the safety and security of our families as the carriers of our unique genetic identity. We can recognize that people are different, and some of those differences result in groups, tribes, cultures, and nations that are not willing to sink into a homogeneous blandness, despite the promise of paradise. Although we recognize common traits among all human beings and common elements in all human societies, we still like to do things after our own fashion. Some of us are just stubborn that way. So … is the dream of a secular paradise through worldwide social and communal sharing still “worth believing” for the rest of us?

I could go on. Some ideas are so necessary and beautiful that they just have to be real, or you just have to believe them against a background of unbelief, chaos, and conflicting personal preferences. But beauty is in the eye of the beholder; so are truth and values. I find the sentiments of the uncle’s speech about manhood in the movie beautiful because they coincide with what I was taught as a child and have always felt. A serious religious thinker finds the invocation of a benevolent and all-powerful god beautiful because it is what he or she has always believed. And a dedicated socialist or communist finds the end of history in a form of secular paradise beautiful because the inconsistencies and internal failings of every other political and economic system are just too painful to imagine.

So … no. Some things are not meant to be believed just because they are the things “worth believing in.” Or rather, they are not meant for everybody, not universal, and not to be rigidly applied. In this, as in every other aspect of human life, each person is required to pick and choose for him- or herself. All we can ask is that they choose wisely.

1. Socrates—that old rascal idolized by Plato—is supposed to have said at his trial, “The unexamined life is not worth living.” That thought, too, has shaped generations of high school and college students. It certainly shaped me.

2. That is 18th-century English philosopher Jeremy Bentham’s model of the perfect prison. The prisoners’ cells are arranged in a circle with the doors facing inward, each door with a covered spyhole, and a guard roving up and down the inner hallway, randomly observing and noting the prisoner’s actions. The prisoner never knows when he is being observed and might be called up for punishment. … And George Orwell thought he had a handle on repressive societal schemes!

3. Consider all the verses of the John Lennon song Imagine, which just about sum up all the attributes of a passionless human perfection. I’ve always found this song insipid, if not outright wrong-headed and stupid. And the tune is just mournful.

Sunday, April 4, 2021

Proof of Alien Life

‘Oumuamua

If you don’t read the science magazines, you may not be aware of the asteroid, or comet, or object that entered our solar system, passed around the Sun a few weeks before its discovery in October of 2017, and just as quickly went somewhere else. The object was spotted by the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) at Haleakala Observatory in Hawaii and was almost immediately identified as originating outside of our system—but only as it was already receding. Once so identified, however, it was given the designation 1I/2017 U1 and named ‘Oumuamua (pronounced “oh-moo-ah-moo-ah”), or “Scout” in the Hawaiian language.

Most astronomers consider it to be some kind of asteroid or comet, and the artist’s conception that was widely published (see nearby) shows a grayish or reddish oblong rock, clearly of natural origin. But let me be quick to point out that no telescope ever resolved the object’s image so clearly. All our telescopes could see—because ‘Oumuamua was already beyond Earth’s orbit when it was detected, and it was surprisingly small on an astronomical scale to begin with—was a faint point of light that varied in brightness over a regular eight-hour period. It was something really tiny and, by the time we saw it, pretty far away.

There the matter might have rested—a rock from beyond our solar system, an asteroid that had escaped from some other star system—if Avi Loeb had not taken up the issue. Loeb is an astrophysicist, an alumnus of the Institute for Advanced Study at Princeton, currently the Frank B. Baird, Jr., Professor of Science at Harvard University, formerly the long-serving chair of Harvard’s Department of Astronomy, and author of eight books of popular science and about 800 papers of serious scientific inquiry. He recently published his analysis of ‘Oumuamua, along with his lifelong involvement with the question of alien intelligence, in Extraterrestrial: The First Sign of Intelligent Life Beyond Earth (Houghton Mifflin Harcourt, 2021).

Being fascinated by the subject, I of course bought the book and devoured it right away. So consider this my book report on the subject. And accept that I find Loeb’s analysis convincing, even though most astronomers and cosmologists disagree and insist that ‘Oumuamua is still just a rock or other natural object.1 Remember, I’m a natural contrarian.

The first issue is ‘Oumuamua’s brightness. The observations suggest that its longest dimension is just about one hundred meters, or three hundred feet, the length of a football field. I imagine such a tiny object would not normally be visible at interplanetary distances, the distance at which we first detected it, unless it was really bright. The nature of the light we could see suggested it was reflected sunlight, not any artificial light the object might be emitting. At that distance, the reflective capacity—called “albedo” by astronomers—had to be much greater than that of a rock or even the ice of a comet, which is usually so contaminated that we call them “dirty snowballs.” ‘Oumuamua reflects sunlight like polished metal.

The second issue is the shape. From the variation in brightness, the observations suggest that the object was slowly tumbling. The amount of reflected light varying over time implies that if ‘Oumuamua’s longest dimension is three hundred feet, then its shortest is a little more than a tenth to a fifth of that, or about thirty to fifty feet. The artist’s conception draws this as a cigar shape, and I think of it as about the size and dimensions of one of our nuclear submarines. But Loeb presents an alternative shape as more likely: a disk or a pancake. Here I am interpreting the reasoning as discussed in the book: if the oblong or cigar shape were the object’s true nature, then we would have to be viewing its tumble edge-on—that is, with the axis of spin at right angles to our line of sight—for the variation to be this complete. If we were viewing it from the side—with the spin axis aligned with our line of sight—then the variation would not be as great. But a tumbling disk could display that degree of variation from a wider variety of angles to our line of sight.
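The logic behind those numbers can be sketched crudely. Reflected brightness scales roughly with the projected area facing us, so the ratio of the object’s brightest to faintest appearance sets a rough lower bound on the ratio of its longest to shortest dimension. A toy calculation in Python (my own simplification, using the roughly factor-of-ten brightness swing that was reported, and assuming the most favorable, edge-on viewing geometry):

    # Toy estimate of the axis ratio from the light curve. Assumes an elongated
    # body tumbling with its spin axis at right angles to our line of sight, so
    # that brightness varies with the projected area we see.

    longest_m = 100.0      # assumed longest dimension, about one hundred meters
    flux_ratio = 10.0      # brightness reportedly varied by roughly a factor of ten

    aspect_ratio = flux_ratio               # crude: area ratio ~ ratio of long to short axis
    shortest_m = longest_m / aspect_ratio   # about 10 meters, or roughly 30 feet

    print(f"implied shortest dimension: ~{shortest_m:.0f} m")

A less favorable viewing angle would hide some of the variation, which is why the true shape could be even more extreme than the light curve alone suggests.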

The third issue is the path that the object took in its travels. It deviated on its course around the Sun and accelerated slightly on its exit from the system. An asteroid, a solid rock, doesn’t do this but instead follows the course that its starting speed and the Sun’s gravity give it. A comet often deviates and accelerates slightly because sunlight heats the ice, causing outgassing that functions like tiny rocket motors, pushing the comet randomly this way and that and, on its outward trip, with the Sun at its backside, perhaps accelerating it. But ‘Oumuamua did not have a coma of dust and water vapor surrounding it or the long tail pointing away from the Sun, both features typical of a comet. Astronomers studied the object at various wavelengths—for example, in the infrared, where carbon dioxide from a comet’s emissions would show up clearly—and still they found nothing.

The scientists who dispute Loeb’s interpretation of ‘Oumuamua as a technological artifact suggest that it might have been composed entirely of frozen hydrogen, because the outgassing of hydrogen would be invisible to us. Such an object is possible, but it’s hard to imagine how, at the relatively slow speed it was traveling, it would survive the long trip through interstellar space, where even starlight would eventually heat it up enough to melt it.

Scientists have also suggested that the object was pushed around and accelerated on its passage through our system by the Sun’s light itself. This idea is supported by the observation that ‘Oumuamua’s acceleration faded with the inverse square law as it went further and further away.2 But for the object to respond like that to mere sunlight, it would have to weigh almost nothing. The rock depicted in the picture would have to be less dense than the air on Earth, and it’s hard to see how such an object would hold together while it tumbles through space.
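To see why the object would have to be so light, consider the push sunlight can deliver. Here is a back-of-the-envelope Python sketch (the thicknesses and densities are round numbers I have assumed for illustration, not figures from Loeb’s analysis):

    # Acceleration from solar radiation pressure depends on how much mass sits
    # behind each square meter of sunlit, reflective surface.

    SOLAR_FLUX = 1361.0              # watts per square meter at Earth's distance from the Sun
    C = 3.0e8                        # speed of light, meters per second
    pressure = 2 * SOLAR_FLUX / C    # ~9e-6 N/m^2 on a perfectly reflecting, face-on surface

    def acceleration(mass_per_area):
        """Acceleration in m/s^2 for a given mass per square meter of surface."""
        return pressure / mass_per_area

    thin_sheet = 0.001 * 1000.0      # a 1-mm sheet of density 1000 kg/m^3 -> 1 kg/m^2
    solid_rock = 20.0 * 2000.0       # a 20-m-thick rock of density 2000 kg/m^3 -> 40,000 kg/m^2

    print(f"thin sheet: {acceleration(thin_sheet):.1e} m/s^2")   # about 9e-6 m/s^2
    print(f"solid rock: {acceleration(solid_rock):.1e} m/s^2")   # about 2e-10 m/s^2, negligible

The thin sheet feels a small but measurable nudge; a rock of the same outline feels essentially nothing, which is the heart of the argument that anything visibly pushed around by sunlight must be extraordinarily thin or porous.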

Here Loeb brings into play some of his personal experience. He recently participated in a privately funded project to conceive of and then design a probe that could be sent to a nearby star and return signals within a human lifetime. The probes that we have already sent out of the solar system, the Pioneer and Voyager programs, and more recently the New Horizons flyby of Pluto, all depended on chemical rockets to launch them and set their initial course, then used gravity assists from the outer planets to speed them on their way. They will take centuries, if not millennia, to reach any stars in their paths.

The Breakthrough Starshot program that Loeb participated in envisioned instead a small electronics package, a “Starchip,” attached to a lightsail. This vehicle could be put into space near Earth and then propelled by a laser fired from the planet’s surface and focused on the sail. A sustained laser blast could accelerate it to a high fraction of the speed of light. The sail would be very thin and light: think of the metal coating on a Mylar party balloon. It could take the package to the nearby star Proxima Centauri in about twenty years. As the Starchip passed through that system it could record images of the Earthlike planet, Proxima b, that we know orbits this star. The chip could then send these images back to Earth in the 4.2 years that light (and any radio signal) from Proxima Centauri takes to reach us. The beauty of the program is that the main capital and operating costs, including fuel, are in the Earth-bound laser, while the individual probes would cost almost nothing by comparison. So the program could send out hundreds or thousands of Starchips to different nearby star systems.
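The mission arithmetic is simple enough to check. A quick sketch (the cruise speed of twenty percent of light speed is my assumed round number for “a high fraction of the speed of light”):

    # Rough flight-time arithmetic for a laser-driven lightsail probe.
    # The 20-percent-of-light-speed cruise figure is an assumed round number.

    DISTANCE_LY = 4.2          # distance to Proxima Centauri in light-years
    CRUISE_FRACTION_C = 0.2    # assumed cruise speed as a fraction of the speed of light

    travel_years = DISTANCE_LY / CRUISE_FRACTION_C   # about 21 years outbound
    signal_years = DISTANCE_LY                       # the radio signal comes back at light speed

    print(f"outbound flight: about {travel_years:.0f} years")
    print(f"first images received: about {travel_years + signal_years:.0f} years after launch")

Call it a quarter century from launch to pictures, which is what makes the scheme attractive within a single human career, let alone a lifetime.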

With this background, plus his lifelong interest in the search for extraterrestrial intelligence to begin with, Loeb was primed to see ‘Oumuamua as some form of lightsail: one hundred meters wide and perhaps no more than a millimeter in thickness, fully expanded, very reflective, and tumbling slowly. It might have been sent into our solar system as part of an alien Starshot program. However, from the mechanics of its hyperbolic orbit and its presumed entry speed, Loeb and other scientists think ‘Oumuamua must have been moving at the average speed of most of the stars rotating in the galactic plane—and then our Sun, which is moving a little faster than average, scooped the object into its gravity well and redirected it to who-knows-where. So, in that interpretation, ‘Oumuamua might have been an interstellar navigation buoy or repeater station, instead of an aimed probe.

As Loeb describes the situation, most astronomers consider ‘Oumuamua to be a natural object, and they cling to interpretations of its orbital deviation that involve either a hydrogen iceberg or some kind of super-lightweight mass that still has the internal strength to tumble and not fall apart. He believes these scientists resist the evidence of ‘Oumuamua’s artificial and possibly technological nature because the search for extraterrestrial intelligence (SETI) leaves a bad taste with true scientists. The notion of alien intelligence brings to mind too much science fiction full of little green men, bug-eyed monsters, and evil space invaders, as well as too many years of aiming radio telescopes at various stars and listening for messages that never come.3

I have always believed that, in a universe filled with billions of galaxies and trillions of stars like our Sun, and now with growing evidence that many of these stars have Earthlike planets in their habitable zones, it would be the extreme of hubris to think that ours is the only planet to develop and support life, or that human beings are the only intelligent, tool-building and -using, and soon to be spacefaring species in all of that vastness.

I find Avi Loeb’s reasoning to be persuasive. We have just detected the handiwork of intelligent aliens that passed unannounced through our system. Maybe it was a lightsail or an interstellar beacon disturbed by the Sun’s gravity, as Loeb suggests. Or it could also have been a cargo cover, a blown hatch, or debris from a larger ship that suffered some terrible accident. All of that would be unprovable speculation. But what I no longer think is that ‘Oumuamua was an extrasolar asteroid or comet—not even one made out of pure hydrogen ice.

1. Much of Loeb’s book is autobiographical, demonstrating his solid scientific background. It also gives a detailed history of the science of astronomy in relation to comets and asteroids and various professional inquiries and disputes about the search for extraterrestrial intelligence, which makes for fascinating reading. But I’ll try to focus here on the issue of ‘Oumuamua itself.

Inverse square law

2. The inverse square law says that the amount of radiation from any point source that broadcasts in all directions decreases proportionally with the square of the distance from it. So, if the strength of a light is, say, 1,000 lumens at a distance of one mile from the source, then it is just 250 lumens at two miles (one-quarter being the inverse square of two), and only about 111 lumens at three miles (one-ninth being the inverse square of three). You can test this by measuring the amount of light from a lamp as you walk away, from standing next to the bulb to standing across the room.
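For anyone who wants to check the footnote’s numbers, the calculation is a few lines of Python:

    # Intensity from a point source falls off as one over the square of the distance.

    def intensity(lumens_at_one_mile, distance_miles):
        return lumens_at_one_mile / distance_miles ** 2

    for d in (1, 2, 3):
        print(d, "miles:", round(intensity(1000, d)), "lumens")   # 1000, 250, 111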

3. But, as Loeb points out several times in the book, many physicists devote their careers to studying the extra dimensions—beyond the three that we know of, plus time—needed to support string theory, or the nature of the multiple universes invoked by some interpretations of quantum mechanics and the fate of Schrödinger’s cat. And we spend hundreds of millions of dollars on particle accelerators and experiments to search for the supersymmetry that would extend the Standard Model of particle physics. These are just beautiful ideas without, so far, any hard evidence to back them up. And here a piece of alien technology—although the evidence is debatable and requires some thought and analysis—has just floated through our solar system.

Sunday, March 28, 2021

A Culture of Complaint

Whistleblower

Does it seem that people are complaining more these days, and about situations and conditions where they have to go out of their way to find a problem? It’s almost as if there is a conceit among mature and otherwise stable people that finding and lodging a complaint gives them some kind of competitive advantage. It’s like ammunition they can hold in reserve for a bargaining position or use to win a counter-argument.1

I don’t remember this as part of the national personality when I was a child. Of course, children always have complaints: they didn’t get the candy or cereal they wanted, the bedroom’s too dark, the food is too hot or cold or spicy, and the world is not going the way the child expects it to be. At a certain point, however, the child learns that the world is never going to be perfect, never going to give him or her all the conditions she or he can imagine. And at that point the person grows up.2

In my view, complaining about things you know cannot be changed, or for which you have only a slender justification, is a loser’s position. It’s an acknowledgement that you do not have the personal strength and resilience to live in a world of hard choices and few accommodations. It also confuses having a grievance—especially one that cannot be easily remedied—with a form of advantage and therefore a strength.

In my life, as I was taught by my parents, being strong means taking care of yourself and not complaining or even acknowledging that you are not getting the thing you want. Perhaps this was just a way for them to live quietly without two boys whining all the time, but I think the lesson went beyond their own comfort. My mother and father had lived through the Great Depression and World War II, and still they made their way in the world. They knew about hardship and damaged expectations, and in the sudden good times of the postwar years they wanted their sons to have the same perspective: life is fragile; the future is not certain; you have to make your own way; and you should be thankful for what you get.

Complaining about the small things—and especially going out of your way to find things to complain about—does not fit into this world view. To show yourself as being overly concerned with the picayune inconveniences of everyday life is a vulnerability. To exhibit such weakness is to expose yourself to the deceptive practices of others—not that I am paranoid, just watchful and careful.

Beyond that, complaining about situations that are not immediately damaging, dangerous, or life threatening is just plain rude. Especially so if the object of your complaint is not anyone’s fault or represents a problem that cannot be remedied except by precautions and ameliorations that are out of proportion to the inconvenience caused.3

But for some people, I suspect, that is the point. They want to embarrass or harass the person to whom or about whom they are complaining. They think that doing so increases their stature—either by showing themselves as more discerning and of greater refinement than others, or as stated above, giving themselves a weapon to be held in reserve against a future argument.

Such people have—at best—small, shallow lives. Instead of aspiring to greatness, or even to meaning in their daily life, they aspire to petty annoyance and the garnering of small advantages against futile arguments. This is not evil. It’s not even tragic. It’s just sad.

1. I may be overly sensitive on this issue, however. I’m on the board of my homeowners association, and it seems that many owners—and not a few renters—are engaging in this kind of preemptive complaining. Maybe they think it protects them when they themselves are accused of violations of the rules, although our board tries hard not to antagonize people with trivial violation notices.

2. Of course, the final pulse of childhood complaint, in my time, came with the Vietnam War. A whole generation of previously spoiled children either went off to fight or decided that the government was wrong and that they had the better grasp of geopolitics, and so the public protests and the street riots began. Maybe the culture of complaint started with the protests of the 1960s.

3. Again, we’re in the realm of a child’s discontent. You see this in living situations where a speck of dirt on a windowsill or a scrap of paper on the ground causes anxiety. Clean it up or pick it up yourself, or keep quiet about it.

Sunday, March 21, 2021

The Blocked Writer

Midnight writer

Writer’s block is something I have managed to avoid for most of my life. This past year, however, has been different—mostly because of the pandemic, the lockdown, social isolation, and persistent politics. All of those conditions create a subtle anxiety that interrupts the flow of ideas. I know this because other writers I communicate with also seem to be having a hard time.

The popular conception of writer’s block is that the writer is just full of ideas but, when he or she sits down at the keyboard or the notebook, the words just won’t come. Somehow, the conditions for putting the mind in a special configuration—for me, it’s a semi-trance while staring at the screen and working my fingers on the keys, or staring at the paper and manipulating the pen—have been interrupted. The desire to write is there, but the mechanics aren’t working. The popular analogy is a type of constipation: full gut, no flow.

The reality is somewhat different. For me, the word-making machine—that interaction of eyes and fingers directly connected to the brain’s speech center—would work just fine. But the ideas—the notion of what comes next in the novel I’m working on, the topic for my next blog post—have vanished clean out of my head. I drop a stone in the well of my subconscious, the place where things are supposed to bubble up, and get only a dry rattle or nothing at all. It’s like part of my brain has gone dead.

As I say, the word machine is still there. For fiction writing, I usually have in hand an outline, a sketch of the novel that goes from beginning to end. Each day I take the next scene or piece of action, consider how it should go, what the characters must do or say to move the story along, then wait for the “downbeat.” That’s what I call the start, the ignition point, the first words, actions, sense images, or other detail that begins the scene. Once I have that, I sit down at the keyboard or the notepad (these days it’s more often direct to the computer than through a pen-and-ink intermediary), and the words start flowing. And the flow is direct from the subconscious, where the story has been brewing for the past days, months, or years in whatever form, until it comes alive now as words on paper or on the screen.1

And once the story is in that form, having passed through the subconscious mind into my full consciousness, it has a sort of permanence. I can go back and alter details to fit previous or subsequent developments. I can improve on wording or add details that better explain the action. But the story as it comes through is, in my mind, about ninety percent complete: it represents what “actually happened” to the characters in the story arc. This means that, if the story has gone wrong, if I have mistaken my characters, or if I have misread my own subconscious, it’s harder for me to scrap what I’ve written and start over on that piece of action or dialog. So it’s no good, really, for me to force the story. I can’t just sit down and doodle my way into the action when it’s not ready in my subconscious.

If I try to force it, then the whole process slows down. Descriptions become longer, and irrelevancies grow, as my mind tries to come up with something to say. I start describing every leaf on a tree, every scratch and scar on a door panel, things the reader doesn’t need to know and that waste the reader’s time. The focus of my writing is like a flashlight in a dark room, revealing details that build in the reader’s mind a picture from the viewpoint character’s awareness of the story as it progresses. Focusing too much on useless detail is like living inside the head of a character who is obsessive or drunk.

Writing nonfiction is somewhat easier. The information is usually at hand: from research and note-taking on the issue, interviews with participants, or observation and note-taking on a technical process. If that preliminary work is done, I can go ahead; if not, I have to wait. But with the material in hand, it’s relatively easy to outline a 1,500- or 2,000-word article or procedure in my head. There is usually no reference to other articles on the subject, and no link to a broader story arc or concern for a point-of-view character and his or her own history. All that’s missing, in the case of an article, is the downbeat, the point of entry into the subject matter for the interested reader. And if I’m writing a process document, that’s even easier, because every process begins at the first step.

Besides, the nonfiction material is generally outside me, outside my imagination and the tilt of my subconscious. So it’s easy to connect with the word generator and get the thing done. And, usually, there’s a deadline and money involved, and they are great incentives.

But fiction, especially a long work of connected scenes, themes, and characters—where, as Chekhov said, a gun produced in the first act must be fired in the second—is a great ball of threads and issues. It helps to have an outline, a walk-through of the story at the 30,000-foot level, to use as a guide. And I generally have an outline, a who-does-what-next, before starting a novel. Usually, it takes me eighteen months to work up a complete outline—sometimes after considering a project for years or decades—and then only six to nine months to write the book.

But the current novel, a military story based on Mars, is different. I had a general idea for the story, was outlining it section by section, heading toward a still-undecided end—and then I fell and broke my hand. That interrupted my writing, because it’s hard to follow my trancelike process when I have to spider-walk across the keyboard with one hand. As my hand was healing, then the pandemic and the isolation hit, and anxiety set in. The book has been flapping feebly on the ground ever since.2

I’ve been able to continue working on this blog during the past year, but the politics of the 2020 election and its aftermath have been just too absurd. How can someone write anything of a political nature—which is one-third of my subject matter—with all of this going on? Science topics have been available, but I’ve been powerfully distracted by the politics.

So my mind, that dark well of the subconscious, has run dry for a while. I’m trying to prime the pump. Maybe it will work. But the mind is a delicate thing after all.

1. And the bet with myself is always whether what comes up in the moment of creation will be better than the slender and still unformed idea represented by the outline. Usually, it is. Since the outline was completed, my subconscious has been making more connections, tossing up subtler and more complex ideas, and the final product is richer and more complete. Usually.

2. Well, for those reasons and because I don’t actually believe in colonizing Mars. For internal logic, the story had to take place off Earth, and aside from the barren and airless Moon, Mars is the next logical planet to set up an off-world colony. Life there in the time frame I imagined would be similar to that of Antarctica: mostly scientific stations and support services, with the addition of some mining interests and modest terraforming activities. Still, in my estimation, it might almost be better to focus on the Moon, where the conditions are harsher but the engineering simpler—you’re in hard vacuum, deal with it—and the logistics and travel times far easier. A writer first has to believe in the story he or she is telling, and I don’t quite believe in Mars.