Sunday, March 31, 2019

Personal Selfishness

Mangrove path

I have always been a selfish person. It’s not as if I don’t care for others, extend a hand when it is needed, or give to charity. But my first goal in life has been to protect and defend my own person, my integrity, my … destiny, if you will, and my family. I have always resented the suggestion—sometimes spoken, usually implied—that I did not and should not live for myself but for others. The implication was that if I was not selfless in spirit and did not pursue the goals of others, or of some generalized sense of society, I was not a worthwhile human being.

But I have not exactly lived for my own good times and catered to my lusts and pleasures. Early on, back in high school, I conceived of my life’s purpose as being a writer—that “destiny,” if you will. I was going to capture something mysterious but important, wring it out of random thoughts and thin air, and put it down on paper for all the world to read and understand. What I intended to capture was not going to be some secret of life or instruction for the illuminati, but instead a particular view of humanity, of human possibility, and of personal fulfillment.

To achieve this, I realized that I would have to give up part of my brain to the development and pursuit of stories. I was going to be the prism through which these thoughts and stories entered the world. And novel writing is a full-time activity that extends well beyond the time a person is actually sitting down and marking a piece of paper or configuring pixels on a computer screen. Plots come out in bits and pieces, and good ideas awaken you in the middle of the night. Characters offer suggestions about what they will and will not do, and they try out bits of dialogue while you are soaping in the shower. As I’ve written elsewhere, the writing life is like renting out your head to a traveling theater company and being constantly nagged by the in-house playwright and the actors about the show in production.1 The only selfish part in all of this is that I have pursued my own visions, my own stories, instead of consciously adopting the purposes and narratives of others or working for some generalized benefit to society.

As I politically matured, sometime in college, I began to see this “selfishness” as a key element in the differences between the Left and the Right. A person’s focus of interest, striving, and goals can be placed along a spectrum: from individual self to family, clan, and tribe—or village, neighborhood, city—or guild, profession, class—and then on to state, region, nation—or party and government. From the closer, more personal, and more tangible, to the wider, more social, and more abstract.

The Right would place a person’s natural emphasis on the closer end of the spectrum: self, family, personal beliefs, individual preferences. This is actually a state of diversity within society. “I like chocolate ice cream.” “I like vanilla.” “I want to work as a plumber.” “I’m going to be a lawyer.” “I choose to drive a Mercedes.” “I like Chevrolets.” “I believe in the Christian god.” “I am an atheist.” And a society, economy, and government fashioned according to the ideals of the Right would allow all of this. So long as the individual does not install faulty plumbing fixtures, cheat clients out of inheritances, rob a bank to buy that Mercedes, or burn crosses in anyone’s yard—go for it. The Right has generally been about individual freedoms, self-determination, and going your own way.

The Left would place a person’s entrance on the wider end of that spectrum: class, state, nation, and lately the globe itself as some simulacrum of all humanity. This is actually a state of uniformity for individuals. “I like whatever foods can be most sustainably produced.” “I will work wherever the government requires my service.” “I will ride the bus and take the train if it relieves pressure on the environment.” “I believe in the principles of my party.” And the society, the economy, and the government would be fashioned to make the best use of this willingness to conform, according to the inspirations of the philosophers, social scientists, and technical experts identified and promoted by the party. (I almost wrote “the state” there, but in the Left’s current paradigm, going back to Mao and Lenin if not to Marx himself, the state is the supreme expression of the party—and not the other way around.) The Left has always been about obedience, subservience, and getting in line.

Is all this a bit too extreme? Perhaps a few decades or even a dozen years ago I would have admitted as much. But in the last couple of years the Left has come out of its populist shell. The Democratic Party has veered into Democratic Socialism, if not into traditional Socialism and even Marxism. I would like to believe that the party went through a cooling-off period and a reappraisal of its pro-Soviet heyday of the 1920s and ’30s after the failure of the Ukrainian harvests, the Moscow show trials, and the Molotov–Ribbentrop Pact. But perhaps the party leaders simply devised a mask fashioned out of unionism, suburban populism, and later environmentalism to gain support for itself in the shadow of the Cold War.

Of course, at points along this spectrum and in certain societies, the focus of individual intentions has sometimes become confused.

During my college days in the late 1960s, a lot of young leftists were “turning on, tuning in, and dropping out”—if not dropping out of college directly, they were dropping out of the vision of social order and the nominal good that their parents espoused. Some of this turning away from society had to do with the Vietnam War. There, young leftists refused to comply with the state’s demand for more soldiers—not for their own personal reasons, of course, or because they were scared, but presumably in service to a greater humanity that surpassed the dictates of the state of that time.

There has also been some confusion about just how free and open a rightist, free-market, capitalist economy can be. Sure, everyone is free to find his or her own career and own way of making a living. But some of those choices—novel writing for one example, or almost any art form—don’t necessarily pay a living wage. And while the individual is free to like any flavor of ice cream or drive any brand of car he or she desires, the object of desire is not always within economic reach. Indeed, it sometimes seems that the producers of ice cream, cars, and everything else are promoting choices as limited and directed as if they were promoted by a government technical expert. The only difference would be that if a socialist government decided to produce only plum-flavored ice cream as the best, most healthful, and cheapest, then individuals would have no choice but to eat it and forget about chocolate or vanilla. The government could not be wrong in its decision and would not suffer economically. But if a competitor in a capitalist economy decided to promote only plum ice cream, then even with subscribers willing to fund the venture, it would probably lose their money and go out of business.

The question of emphasis along the spectrum, of self or others, comes down to a basic philosophical view. Does the individual belong to him- or herself or to society as a whole? Does the individual have innate value as an autonomous and conscious being? Or is the individual important only as a cog in a bigger machine, a part of the social whole, a zero stringing out a larger number?

Anyone who knows my writing will know how I answer that question.

1. And so I have always carried a pocket notebook and pen with me—as early as high school—and placed pad and pencil on the nightstand, just to capture these random thoughts. From loose paper, these bits and pieces go into a folder on the hard drive dedicated to the current novel or to one in development. Notes are organized into themes, motifs, characters, plot points, and eventually an outline. And the outline is then translated into a full production draft. The difference between a working writer and everyone else is this system of capture. Everyone gets these random ideas; the writer organizes them toward the work in progress.

Sunday, March 24, 2019

Creation Stories


Humanity loves stories. I love stories. From the time of Homer in Greece of the eighth century BC, the first storyteller of the literate Western tradition; from the time of Gilgamesh, King of Uruk, from sometime between 2800 and 2500 BC—the epic poem about him was the first known work of literature—poets have been putting their thoughts into story form. Not just chants or prayers or tributes to gods and goddesses, but coherent tales of action and dialogue, of sequence and consequence, of cause and effect. This is what sets us apart from the chatter of monkeys, the mindless songs of whales, and the howling of wolves.

The Bible, Old Testament and New, is a series of stories. The creation of the world, where God—the one true god, the only god, the supreme being—divides the light from the darkness, the water from the land, the animals from the new creation, Man. That beginning is a story that starts from nothing but the mind of God and proceeds to build a world.

And that story as much as anything set the human mind—or at least its Western variant, the Judeo-Christian tradition—up to see the world as a created place. It didn’t just exist for all time, as far as anyone knew. It had a start, a point at which something came out of relative nothing. Other traditions have their creation stories. For example, in the American Northwest, the native cultures believe that Great Raven made the world, the mountains, and the tides.

I suppose that, absent the ingrained need to tell stories, any observant and inquisitive mind would ponder how the world came into being. Such a mind would note that the nature of mountains and hills is to break down—through erosion, landslides, rockfalls—but never to build up. Such a mind as Leonardo da Vinci’s, as recounted by paleontologist Stephen Jay Gould,1 noted that mountaintops sometimes contained the fossils of sea creatures, and he wondered about them. In fact, humanity didn’t have an adequate explanation for the rise of mountains until continental drift was proposed in the early 20th century and developed into the theory of plate tectonics, validated by evidence of seafloor spreading in the 1950s and ’60s. But without that theory, mountains can be observed to break down from known causes but must be imagined to rise due to … earthquakes, convulsions, or the Hand of God.

Back when the Bible writers and Renaissance polymaths were wondering about such things, the world was just this one planet. The Earth was the central place of the universe, while everything else—Sun, Moon, other planets, and the stars in the night sky—was just a set of ornaments provided to heat this world, keep track of the months (and cause madness), and aid human beings with navigation. Otherwise, Earth was the place that mattered.

It wasn’t until that same early 20th century that humanity knew there were stars beyond the stars we could see. The “galaxy” was the Milky Way, the concentration of stars like a river in the night sky. But some of those “stars” were distinct points of light, while others were fuzzy patches that astronomers called “nebulae,” or clouds. With better telescopes, they could determine that some of these cloudy objects appeared to be actual backlit clouds of dust and gas, the remnants of exploded stars. But others remained just fuzzy patches. It wasn’t until 1924 that Edwin Hubble announced that the patch everyone called “Andromeda”—after the mythological daughter of Aethiopian King Cepheus and his wife Cassiopeia—was actually another galaxy, like our own Milky Way, but far off.

Just as Nicolaus Copernicus, with his new model for the motions of the Sun and Earth, turned “the world” into a solar system with other planets no less important than our own, so in one step Hubble expanded “the universe” from an island of local stars into a vastness of galaxies, hundreds of them, perhaps thousands, no less important than the stars we can see around us. It would take much larger telescopes, including the orbiting scope named after Hubble himself, to see that there are actually billions, if not a couple of trillion, galaxies expanding in clusters and webs that extend into the space beyond which even our most powerful telescopes can see.

Hubble also noted that the light from these distant objects was “redshifted,” or appeared farther down the spectrum and at lower energy levels than the light from stars in our own galaxy. This suggested to him and to other astronomers that those other galaxies were receding from us—and the farther away they were, the faster they receded—and so the universe itself must be expanding.

By modeling the life cycle of stars based on size and temperature, calculating the age of the oldest stars according to this cycle, and figuring out how stars create the various elements by fusion—from helium out of hydrogen; then the lighter elements and metals through iron; and finally the heaviest extant elements like gold, lead, and uranium from the collapsing pressure of supernovas—astrophysicists could determine the various generations of stars needed to make up the universe we can observe. They came up with a probable age for the universe of about 13 billion years.

If the universe is expanding, it is reasonable to assume that it has always been doing so. And then, if you “roll back” that observed expansion by 13 billion years, you come to a point in primordial space. All the stars we can see, and the dust and gas we can’t directly see, everything in the universe collapses down to a point that’s infinitely small. And because it’s so packed with material, that point or singularity must be infinitely hot and dense and just waiting to explode. That’s what must have happened: this infinitely tiny, infinitely dense, infinitely hot thing exploded and spewed out all the matter in the universe. And this hot stuff then began expanding and cooling and evolving into protons, electrons, neutrons, neutrinos, and all the other subatomic particles, and finally into coherent matter in the form of hydrogen atoms. At the same time, residual energy in the form of fast-moving photons made light waves and all the variable energies we can detect. And after a time of expansion, bits of the local scene began to contract under gravity—as stars still do today—until they could ignite a fusion reaction and begin making the other elements out of those hydrogen atoms.
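One way to make that “roll back” concrete is Hubble’s law, v = H₀d: if every galaxy at distance d recedes at speed H₀d, then running the film backward brings every galaxy to distance zero after the same time, t = 1/H₀, regardless of distance. The essay doesn’t quote a value for the Hubble constant, so the figure below is an assumed modern one, and the whole calculation is a naive constant-rate sketch:

```python
# Naive "roll back the expansion" estimate. If a galaxy at distance d
# recedes at v = H0 * d, then t = d / v = 1 / H0 for every galaxy.
KM_PER_MPC = 3.0857e19      # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

H0 = 70.0                   # assumed Hubble constant, km/s per megaparsec
hubble_time_s = KM_PER_MPC / H0
hubble_time_gyr = hubble_time_s / SECONDS_PER_YEAR / 1e9

print(f"Hubble time: {hubble_time_gyr:.1f} billion years")  # about 14 billion
```

That the naive answer lands near the age quoted in the essay is partly coincidence; the real expansion rate has changed over cosmic history.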

All of this was called the “Big Bang” theory, somewhat derisively, by astronomers who instead assumed that the universe had always existed in a “Steady State.” The two sides might never have resolved their positions, until in 1965 two radio astronomers at Bell Labs in New Jersey, Arno Penzias and Robert Wilson, discovered the Big Bang’s echo. They were trying to fix the radio interference that was plaguing a giant radio antenna that was supposed to pick up satellite communications. Nothing they tried—even sweeping it for physical debris such as twigs and leaves—could clear up the signal. It was a low hum, energy at an almost stone-cold 2.7 degrees Kelvin, or about -454.8 degrees Fahrenheit. This temperature matched theories that predicted those high-energy photons released in the Big Bang would, over a time of 13 billion years, have cooled to a microwave background radiation at just such a temperature.

That was proof of the Big Bang as the creation story of the universe. There was just one problem: if you rolled the apparent size of the observable universe back to that single point, it takes a lot longer than 13 billion years for it to expand. In other words, even if the universe expanded outward from that point at the speed of light, it would be a smaller universe than the one we can see today. This puzzled astronomers until the physicist Alan Guth in 1979 conceived of Inflation Theory. This theory said that in the period between 10⁻³⁶ seconds and perhaps 10⁻³³ or 10⁻³² seconds after the singularity exploded, and for reasons that are not explained, the space containing that outpouring of material expanded exponentially at much greater than light speed. It went from a zero-dimension point to about 0.88 millimeter—about the size of a grain of sand—in virtually no time, and afterwards the universe expanded at a much slower rate. This inflation period accounts not only for the current size of the universe but also for its apparent smoothness.

And all of this story—from the discovery of other galaxies and the expansion of the universe, to the Big Bang, to the inflation that explains the Big Bang—has been conceived from observations and worked out with intense mathematical calculation within the last hundred years. Much of that calculation—if laymen can follow the numbers at all—grapples with issues of general relativity.

According to general relativity, time and space—spacetime—have no fixed or absolute value. Instead, they are true and fixed only for the local observer, based on his or her speed and the gravity well in which the observer finds him- or herself. People traveling faster or living under heavier gravity experience the passage of time at a slower rate and the shape of space at a greater curvature than people living in slower, more open domains.

But also, according to relativity, the speed limit of the universe is fixed at the speed of light, c, or 186,282 miles (299,792 kilometers) per second. So regardless of how compact the Big Bang mass might have been, its speed of expansion according to any observation was pegged at that amount of distance over time … or not.

Doesn’t this all seem to be just a bit too artificial? Massive singularities, rapid expansion, a fixed age for a universe that is continually expanding, with temporary conditions that violate other theories. As I have stated elsewhere,2 we may not yet understand the nature of space, time, and gravity at all. So we may not be equipped to unravel the nature of an expanding universe or roll its scale back to the infinitesimal spitball of hot matter that the Big Bang requires.

We may instead be living in an age of conjecture comparable—but with more sophisticated theories and advanced mathematics—to the years between Copernicus’s modeling of the sun-centered universe in 1543 and Kepler’s working out of planetary motion as ellipses rather than perfect circles in 1609. And now we are waiting for a better theory of space, time, and gravity to account for our developing observations.

But then, on the other hand, why did the universe need to be created at all?

1. See Leonardo’s Mountain of Clams and the Diet of Worms: Essays on Natural History, from 1998.

2. See Fun with Numbers (I) from September 19, 2010, and (II) from September 26, 2010.

Sunday, March 17, 2019

Warp Drive

Enterprise warp bubble

So I get to thinking about things. And a recurrent theme with me is the size of the universe, interstellar distances, and how humanity will one day—and other intelligent beings perhaps sometime sooner—cross them.

According to our current thinking about the nature of space and time—or “spacetime,” if you will—we physical beings cannot travel faster than light.1 Supposedly, the nearer you approach light speed, or c, the more massive you and your ship become and the slower your onboard clocks tick until, finally, at c, you and the ship weigh an infinite amount and time stops for you. That would be a problem, especially since you also have to carry fuel to move that mass—at least with our current propulsion technologies. And if time has stopped, how are you accounting for your speed toward your destination? But I digress …

Popular science fiction tropes to deal with this—so that humanity and other beings can conquer and maintain interstellar empires that don’t quickly become temporally distorted and dissociated—include both wormholes and warp drives. Wormholes presumably punch through the “fabric” of space that appears to be crumpled up like a giant wad of papier-mâché, so that one place and another are not actually separated by vast interstellar distances but actually lie side-by-side through interdimensional space. This presumes, of course, that the entire universe we see around us is crushed up to a thing about the size of a walnut. But I digress …

Warp drives, popularized by the Star Trek television franchise, allow that those two places are indeed far apart, but that you can get from one to the other without violating the light-speed limit by collapsing the “fabric” of space ahead of the ship while simultaneously expanding it behind. You do this by creating a “warp bubble” around the ship. My best analogy for this—since it’s kind of hard to envision space itself collapsing and expanding2—is someone walking on an elastic sidewalk.

From my home in the Bay Area to the state capitol in Sacramento is a distance of about seventy miles. Walking at a steady pace of four miles per hour—my maximum sustainable speed—I could get there in about twenty hours, allowing for one or two rest stops along the way. I have long legs, so my stride is a bit longer than three feet, heel to toe, but I can’t move my legs any faster than my normal, determined pace of about two strides per second, or say, six feet per second, to complete the trip in any less time.

But suppose I could somehow, magically, draw together or compress the sidewalk and the ground beneath it that lies in front of me, so that each of my three-foot strides might cover, say, thirty feet. And as soon as I had placed that forward foot and lifted my rear foot, the sidewalk expanded again to unclench the concrete and soil behind me. My legs wouldn’t be moving any faster; I myself would not be exceeding my walking speed limit of about four miles per hour. But with a ten-to-one advantage in ground coverage, I could make the trip to Sacramento in two hours without getting out of breath. If I could compress the sidewalk by 300 feet per step, and expand it again behind me at the same rate, I could walk to the city in twelve minutes without even breaking a sweat.
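The elastic-sidewalk arithmetic above reduces to one line: walking speed stays fixed at four miles per hour, and only the compression ratio of ground covered per stride changes. A minimal sketch (the function name and parameters are mine, for illustration):

```python
# Elastic-sidewalk arithmetic: walking speed is fixed; each stride
# covers `compression` times its normal ground distance.
def trip_hours(distance_miles, walk_mph=4.0, compression=1.0):
    """Travel time in hours at a fixed walking pace over compressed ground."""
    return distance_miles / (walk_mph * compression)

DISTANCE = 70.0  # Bay Area to Sacramento, miles

print(trip_hours(DISTANCE))                    # 17.5 hours at a normal walk
print(trip_hours(DISTANCE, compression=10))    # 1.75 hours with 30-foot strides
print(trip_hours(DISTANCE, compression=100))   # 0.175 hours, about 10.5 minutes
```

The essay’s twenty hours, two hours, and twelve minutes are these figures rounded up for rest stops and imprecision.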

I would be warping the sidewalk and the ground under it in the same way the starship Enterprise creates a warp bubble to collapse and expand the space around it.

With my walking pace, we have known points of contact with the ground—my heel coming down, my toes pushing off—at a steady pace of three feet per step, six feet per second. So we can easily determine how much the sidewalk has to collapse and expand to achieve a reasonable travel time. If I were moving much slower—say, hobbling with a cane—we would need to collapse larger and larger amounts of sidewalk but do so at a slower rate than once per half-second, to make that twelve-minute trip of seventy miles. Conversely, if I were a seasoned marathoner, running three times as fast as a brisk walking pace, or about twelve miles per hour, we would need to take smaller bites of sidewalk but collapse and expand them at a much higher cycling rate.

What’s left out of the Star Trek story is how fast the Enterprise can travel between the stars without warp effects. The alternative to warp drive in the narrative is “impulse drive,” which is presumably some kind of mass-reaction thrust. But the stories told around the ship’s adventures never exactly correlate the capabilities of either drive with distances traveled. Sometimes a modest speed at warp drive covers light years in a matter of minutes; sometimes much longer. Sometimes a hefty fraction of impulse drive will take them halfway across a solar system in a minute or two; sometimes much shorter distances—say, to close with an enemy vessel a thousand kilometers away—in the same time.

Various official and unofficial “manuals” created either by the show runners or the fans attempt to quantify these fantasy speeds. One reference says that maximum impulse speed is one-quarter of light speed, or 167,000,000 miles per hour. So, without the benefit of warp drive effects, the ship could travel the 93 million miles from Earth to the Sun in about half an hour. Or the 365 million miles, at closest approach, between Earth and Jupiter in about two hours and eleven minutes. So, to maneuver within orbital distances around a planet or to close within range of an enemy a hundred kilometers away, the ship would need to operate at the barest fraction of impulse drive. Not tenths but hundredths or thousandths of the available thrust.
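Those travel times follow directly from the reference manual’s quarter-light-speed figure quoted above, as a quick check shows:

```python
# Travel times at the reference manual's quarter-impulse figure.
IMPULSE_MPH = 167_000_000            # one-quarter of light speed, miles per hour

def hours_to(distance_miles):
    """Hours to cover a straight-line distance at full quarter-c impulse."""
    return distance_miles / IMPULSE_MPH

sun = hours_to(93_000_000)           # Earth to Sun
jup = hours_to(365_000_000)          # Earth to Jupiter at closest approach
print(f"Sun: {sun:.2f} h")           # roughly half an hour
print(f"Jupiter: {int(jup)} h {round(jup % 1 * 60)} min")  # about 2 h 11 min
```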

All of which—and given that these fantasy speeds are extremely slippery—makes me wonder how much of the fabric of space the starship’s warp bubble needs to collapse and expand to appreciably increase its maximum non-warp speed. If the bubble extended for just a hundred feet around the ship, or even a couple of thousand feet, it would have to cycle extremely fast to make any decent headway on an interstellar flight. I mean, it would be collapsing and releasing that much space in terms of microsecond or even nanosecond cycling, over and over again. Otherwise, the ship’s maximum 167-million-mile-per-hour speed would simply overrun the bubble.

Or the ship’s warp bubble would have to take in a lot of space. If the Enterprise is, as reputed, a kilometer long and moving at even a fraction of its top impulse speed—say, fifty percent, or 83.8 million miles per hour, which is 23,285 miles per second (37,474 kilometers per second)—then it would have to collapse a volume of space about 37,500 times its own length each second just to keep pace with itself. To increase this natural, non-stressful impulse speed by a factor of ten, it would have to collapse 374,740 kilometers of space ahead of its bow each second. That’s just a little less than the distance from Earth to the Moon, which is 384,400 kilometers. And even that wouldn’t be a very high “warp factor,” because at ten times half-impulse speed, the ship would travel to the nearest star, Proxima Centauri—a distance of 4.22 light years, or 40 trillion kilometers—in about 3.4 years.

To obtain the warp speeds needed to represent travel times in the Star Trek world, the Enterprise would have to be collapsing volumes of space roughly equivalent to our solar system. Either that, or the ship would be collapsing and expanding smaller volumes at much higher cycling rates—so high that the warp field would probably destabilize any “structure” such a volume of space might have and reduce any interstellar dust and gas captured in that volume to blazing quarks.

Early in the Star Trek narrative, the ship had to travel far outside a planet’s gravity well before engaging its warp drive. That story element seems to have since been dropped from the telling in later series—again, the show’s distances and speeds are slippery things. But still, if a ship that was traveling even close to a near-Earth orbit engaged its warp drive at even the lowest factors described above, it would severely damage the fabric of the planet and play havoc with the Moon’s orbit.

But the question of the ship overrunning the warp bubble presents a conceptual puzzle, doesn’t it? The warp drive serves no purpose as a travel enhancer if the starship merely sits in the middle of a bubble while space pulsates around it: contracting and relaxing ahead of the bow, expanding and then retracting behind the stern. Just as I must step across the wrinkled, compressed concrete of the sidewalk to take advantage of that thirty- or three hundred-foot contraction, so the starship would have to cross the region of collapsed space ahead of it in order to put all that collapsed space behind it. If I’m walking in the open air and only the sidewalk is contracting beneath me, then my body is not affected by the compression. But for the starship, all of space is collapsing around it. Presumably this collapse would also affect the fabric of the ship’s hull and the people inside. So how does the ship survive that ultimate disruption? But perhaps I digress …

As things stand in our real, non-fantasy physics, we can’t begin to imagine grappling directly with the “fabric” of space—if such a thing even exists—or how we might make it collapse by so much as a cubic centimeter. My bet is that doing this kind of roughhouse to so small a volume would still require immense amounts of energy. To collapse a volume stretching from Earth to the Moon would take more energy than you could get from any conceivable matter-antimatter reaction. And to collapse the volume of even a medium-sized solar system would be playing with energies reserved for the gods.

Space is really, really big. Manipulating it in any significant way to travel between the stars will take unheard-of energies and a physics we can’t yet begin to understand. … Maybe it would be simpler just to punch through a wormhole to the other side.

1. Neither can energy elementals or beings of pure thought, according to the theory of relativity. But for now we’ll concentrate on carting our physical, protoplasmic bodies to the stars.

2. As I’ve said numerous times before, I don’t think our physics or mathematics really understands or accurately describes space, time, and gravity. For which see, once again, Fun with Numbers (I) from September 19, 2010, and (II) from September 26, 2010.

Sunday, March 10, 2019

Tom Bombadil

The One Ring

Great literature has the power to show us to ourselves, to give insight into the sort of person we are and might become. This is not a new thought, but it always has surprising implications. You might think of great literature as the product of a bygone age and not all that popular: the timeless—but also endless—novels of Victor Hugo, Leo Tolstoy, and Charles Dickens, or the concise social anatomies of Jane Austen. Books that we read once in school and, unless you are a scholar or a true aficionado, no one willingly reads today. But now I am also thinking of a modern work of fantasy that has the same power to evoke particular characters and personal allegiance.

J.R.R. Tolkien’s The Lord of the Rings is full of characters that strike a chord in the reader’s heart at many stages of life. A young person faced with a challenging task for which he or she has the courage but may doubt they have the strength will find a resemblance in the intentionally androgynous face of Frodo Baggins. A loyal friend or supportive spouse who would pick up a loved one’s burden if only he or she were asked will feel a kinship to the servant and companion Samwise Gamgee.

And the cast extends beyond those main characters. Who has not felt like a true king in hiding, with the skills and a destiny beyond his or her current occupation, like Aragorn? And who has not felt that her or his heart and soul were placed into the wrong body and the wrong social situation, like the shieldmaiden Éowyn? Every grandmother can see in herself Galadriel, still beautiful after all these years and with gifts and wisdom to bestow and a mirror with which to show the young all that might yet come to pass. And any grandfather can see in himself Gandalf, still strong and vigorous despite great age, but with more answers now, and finally starting to put together the great puzzle of his life. What wide-eyed, impulsive child doesn’t share a bit of the reckless exuberance of Merry and Pippin? And what criminal or thug doesn’t share a glint of Saruman or a sliver of orc?

But my character is not among these. He had only a small but crucial part in the first book and was dropped entirely from the Peter Jackson movie, although he appears in some of Tolkien’s early poetry. I identify most strongly with Tom Bombadil.1

Tom is the ageless man, “oldest and fatherless”—perhaps the oldest being in Middle-earth. He has made his home in the Old Forest for centuries of solitude. He lives with an ageless, ever-beautiful young woman.2 He sings to trees3 and decorates his home with flowers.4 He fears nothing that is evil or dead, and he is untouched by the One Ring of power. Tom Bombadil lives as a hermit in the modern world, holding at a distance the cares and worries of everyday life, and is never to be trusted with weighty matters such as deep decisions of policy and the politics of power, because he is inner-directed to the point of distraction and perhaps a kind of autism.

The part of Bombadil’s character that is important to Tolkien’s story is how he reacts to the One Ring, which contains a great measure of Sauron’s personal, magical powers. Others in the story such as Bilbo, Frodo, and first of all Gollum want the ring because it is beautiful and attracts their lust for possession. The more seasoned minds of Gandalf and Galadriel are drawn to it for the power it contains and the good they might do with it, but they also fear it for the way it will warp their will and intentions. But Bombadil alone can pick it up, hold it, admire its beauty, and even put it on—then give it back. The beauty of the metal thing itself cannot match the beauty of the life he already leads. And the power it offers can in no way enhance that life.

At one point in the story, when the Council of Elrond is debating what to do with the One Ring, now that it has come into their possession, it is suggested that they give it to Tom Bombadil for safekeeping. But that is no solution, because he does not understand the hold the ring has on others. He would be like a child asked to keep the Hope Diamond. He might hold it for a while, then forget it, misplace or lose it, or even give it away in a moment of generosity. And unspoken is the question of how durable his personal strength and determination to preserve it might be when the minions of the Dark Lord discover where the ring is kept and come after it.

Tom Bombadil is in this world, but not of this world. And that is both a strength and a weakness.

Several times during my career as an editor and writer I have been offered and accepted positions as a supervisor and manager. Most recently, after several stints as a director on my condominium association board, I was elected president. Each time, I had projects I wanted to complete and situations I wanted to set right. But the impulse never lasted, and eventually I resigned and walked away. It is not that I don’t understand the function of leadership and the uses of personal power,5 but wielding it holds no attraction for me. I lack the gene or the nerve synapse that makes someone want to tell other people what to do. If asked or if the need arises, I will give advice or make suggestions, but to speak from a position of authority, to command obedience because of a position and the rules or laws or customs that back it up—that is foreign to me. In my view, all exchanges between adult humans are personal, based on trust and respect, and not subject to coercion.

I know myself now, and I work best alone. In this, I am like most other artists: striving for a vision that is personal and not subordinate to the insights, directives, or unhelpful efforts of the people around me. Yes, in my work for various corporations I have written articles and posters that carried the company message. But that was always after I had reconciled myself and my vision to the corporation’s culture and its aims. For the period that I worked in the company and took their money, I adopted their goals. But I still maintained enough artistic and professional integrity that, when someone up the chain of command wanted to force a message or a directive that would deflect or do damage to those goals, I would politely but firmly refuse, and always with a respectful explanation of why not.

Tom Bombadil had his own power to make things grow and become beautiful—and make growing things heed his voice and obey the intention of his songs—but that power could not be shared with others or made to serve a purpose that Bombadil himself did not understand. So I have the power to convert thoughts into coherent words, arguments,6 and stories. But I can’t easily share that power with others. I can accept their edits, especially when the changes are backed up with the persuasion of a paycheck. But I can’t easily collaborate on a text that will have some of my thoughts mixed with the thoughts and words from another mind.7 The artistic vision is not easily shared.

Similarly, I once tried to write a book that would capture the market share staked out by some popular bestsellers of the time. Many agents and editors will tell young writers that their work would sell if only they could write like author X or tell a story similar to bestseller Y. My one attempt to school my vision to the thriller market was Trojan Horse, which went absolutely nowhere. Of course, the trap in trying to follow bestsellers in the marketplace is that they already have their audience. By the time a new writer gets the idea, creates a manuscript, finds an agent or a publisher, and then the work goes through the production and marketing effort, the public taste and perception will have moved on to something else that’s new. So it’s better to just do what you can do well and hope that lightning might strike.

The most important aspect of Tom Bombadil’s character is that, although the rest of the world sees him as a private soul, closeted to the point of invisibility, and doesn’t trust him more than it would a child—he is happy. His life is complete. He expects nothing that he cannot obtain for himself.

And that is not a bad way to live.

1. It was perhaps prophetic that, when I was in grade school and still ten years away from first reading Tolkien, classmates who wanted to make fun of me made a shout of my name: “Thomas the Bombus.” It was nonsense at the time, of course. But as the twig is bent …

2. Mine was with me for 41 years—a long time in modern reckoning—and she was never less than beautiful in my eyes.

3. My main occupation, if you count telling stories and writing them down on sheets of paper. The paper has more recently become a complicated engine of electronics and glowing phosphors that records my thoughts.

4. True, if you count shelves full of books as the flowering of other human minds than my own.

5. In fact, for a year or so at the public utility I wrote and edited a newsletter for managers and supervisors that taught the principles of leadership. I defined the role of leader as achieving goals through the cooperation of others. For this newsletter, I had to do a lot of reading and thinking on the subject. But my own goals are small and personal, best achieved by the one person I can command with impunity: myself.

6. Here “argument” is used in its older sense: not an altercation between two hotheads, but the chain of reasoning offered in support of a proposition.

7. Yes, I collaborated on four novels for Baen Books, as listed here under Science Fiction. But these were situations where the publisher had previously acquired an outline or notes toward a novel from a senior, established writer who had no intention of following up on them. It was then my task to absorb the idea—make it part of my artistic vision—and produce a manuscript that the senior writer could then edit and agree to put under his name and mine. It was purely a means of getting my own name more widely known in the marketplace.

Sunday, March 3, 2019

Toward a Rational Monarchism

Double Eagle crest

Yes, I know, we live in a republic with democratically elected representatives and a democratically elected president, all as ordered by the U.S. Constitution. And the system has worked fairly well for the past 230 years—barring one outright civil war and a number of scandals approaching the level of a coup. This is pretty much as Winston Churchill said: “Democracy is the worst form of government, except all those other forms that have been tried from time to time.”

In its ideal form, as we were taught in school sixty-odd years ago, democracy lets the best and brightest in the population step forward as selfless public servants who want to make the best decisions and to seek and determine the greatest good for the greatest number. And I have no doubt that some people are drawn into the public sector for this very reason.

In a slightly more cynical view—one to which I adhere from time to time—democratic elections and government positions invite people who want to make their living by telling other people what they should and should not, can and cannot—and in the most extreme cases—what they must and may not do. This is a form of busybody that we do not admire anywhere else in our lives, except that every two to four years we hold an election and give such people the keys to the government and power over us.1

And in the most jaundiced view of all—one that I hold in my darkest moments—elected officials and the appointees they add to their staffs and administrative positions are only there because government is where the easy money is: the taxing power to take a larger and larger share of the economy for whatever purposes they think appropriate; and the influential power to grant access, favorable legislation, beneficial findings, and grease through the gears of government, all for a “consideration” that may be immediate and monetary or may represent some other form of value, perhaps indefinitely delayed but surely repaid. This is corruption, of course. This is pigs at the trough. But you can’t say it never happens, or else why would K Street in Washington, DC, be known as the center and pivot of the revolving door for lobbyists and public advocacy groups? They all want something from government, know their way around government, and are very well paid to get from government what they and their masters want.

From time to time—and especially when the choice of politicians and parties is not between better and best but between mendicant and mendacious fools, who promise anything to get votes, even when they have no intention or the means of paying up—I decide that the electorate has not the power to pick or even to recognize a good leader or an honest man or woman. The system is rigged toward the lowest common denominator, both among voters and candidates, and nothing good will come of it.2

And so, from time to time, I entertain a cautious enthusiasm for the system of monarchy. If we can’t pick a leader from among the ambitious grabbers and wasters who will say anything to get elected, then let’s settle on the one who climbs to the top of the heap through war, subterfuge, or poison and let him—or more rarely her—have the job for life. At least the king or queen in the first generation will be eager enough to give the populace some semblance of good government. That at least will help ensure a long life and stable tenure.

And then, with the prospect of his or her progeny to follow, the monarch will be induced to think in the long term. That means no stealing the treasury or scouring the land or murdering too many innocents. Such actions would leave the next generation’s prince or princess to inherit a kingdom ripe for overturning. And the reigning monarch will also have an incentive to pass along to that prince or princess whatever principles of good government, of protecting the people and the land, and of trying to ensure their happiness, he or she has learned in the process of ruling. At least then someone in government will have been born and trained from childhood to think beyond him- or herself, beyond personal desires and privileges, to cherish and support the good of the nation.

Or not. The trouble with hereditary monarchy is that the vigor of the current generation may not be passed along to succeeding generations. Certainly, it didn’t always work out for England. There the warrior genius of Henry V passed sovereignty to his infant son, the religious simpleton Henry VI, and that led to decades of war and subterfuge, the Wars of the Roses. Then Henry Tudor—the VIIth of that royal name—finally emerged from the strife and brought forth two fine sons, Arthur and the future Henry VIII. But the ascendant second son, Henry—after Arthur died young—lacked a legitimate male heir for the longest time, causing more political strife. Neither Henry VIII nor his ministers nor the populace counted the rights of mere daughters, and so everyone wanted a son to carry on the Tudor name. Everyone around the throne feared another baronial war if the king eventually died without a legitimate heir. And when the king finally had a sickly son, Edward VI, who was crowned at the age of nine, never prospered, and died six years later, only the daughters were left to rule.3

Such are the problems of a hereditary monarchy. And they eventually led, in England and elsewhere, to a constitutional monarchy. Yes, the king or queen is still head of state and nominally head of the government, but the position is now largely ceremonial and the power merely that of persuasion. The actual decision-making falls to cabinet ministers who, in the parliamentary system, are promoted from the democratically elected legislature to oversee various aspects of government. And one of them, the prime minister, is the effective head of government—so long as he or she keeps winning important votes in parliament. But still, someone in the top level of government has at least been born and trained to think of the good of the country, even if he or she has to work through a system of popularly elected politicians.

Hereditary monarchy relies on tradition and the popular belief in and support of a person and ultimately of a family. It is, for the governed, a matter of respect, obedience, and even love. That is, until a rapacious or imbecilic prince rises to the throne, and everyone is willing to cast the dice in a dynastic war. Representative democracy, even when it operates under the wing of a constitutional monarchy, relies on the tradition of and popular belief in a process: that if the politicians we elect today don’t work for our benefit, we can always toss them out in two or four years’ time (American system), or with a loss of confidence and change of government over some important issue (British system).

You might think that, with the rise of constitutional monarchy, the system of democratically elected representative government has won for all time. But I note that notions of kingship persist in every culture. How many revolutions have passed through their republican phase, of whatever duration, to relapse into the public’s bestowing its respect, obedience—and even love—on a tyrant? The Romans tossed out the last of their seven hereditary kings in 509 BC, maintained a republic through wars of both expansion and civil dissension for almost five hundred years, and finally dissolved into the elevation of a field marshal, or imperator, or emperor (never the hated term “king”) who ruled through his family, his army, and finally through the power of the name “Caesar” alone for almost five hundred years more. The French deposed their Bourbon king in 1792 and guillotined him the next year, declared a republic that devolved into a Reign of Terror, engaged in wars of egalitarian liberation throughout Europe, and finally named their best general as the new emperor in 1804.

You can avoid the term “king,” and you can try to limit the ruler’s seat on the throne and the claim of dynasty. But once you grant someone supreme power in his or her own name, you are forced to either accept that ruler’s offspring to follow in the job—as the Democratic People’s Republic of Korea has done with the Kim family—or you risk dynastic wars among the next most powerful barons, senators, generals, or commissars to claim the throne. And that’s usually the bloodier and more disruptive transition. But whether you call the position “first citizen” or “führer,” the effect and function of a monarch are still the same.4

For anyone who thinks that we, in this modern age, are too smart or sophisticated or enlightened to ever go back to giving our respect, obedience, and even our love to a single person raised to the status of king, emperor, first citizen, or führer … look around. There is nothing about a battleship, a jet fighter, or a battle tank that can’t wear the sigil or coat of arms of a royal house. And nothing about a computer or an artificial intelligence that can’t respond to the political power of one man or family, however it may have been obtained and wielded. We are barely half a millennium away from the tribalism that gave England and Europe—not to mention everywhere else in the world—their kings and emperors.

The sword can easily be laid on our necks once again.5

1. And in the meantime, these same elected officials appoint and budget for even more busybodies who will run the congressional staffs and the executive branch’s departments and commissions that never need to get publicly elected, are protected by Civil Service laws, and lately have acquired public union protection. This is that “Deep State” you now hear about: a tinkertoy perpetual-motion machine that absorbs taxes and grinds out regulations, subpoenas, hearings, judgments, fines, and punishments—all generally outside the public view, often under a gag order, and without effective recourse by the private parties involved.

2. But as Robert A. Heinlein wrote in Time Enough for Love: “Of course the game is rigged. Don’t let that stop you; if you don’t bet you can’t win.”

3. And don’t get me started on later history, where the long-ruling Victoria married off her princess daughters all over Europe and so spread a recessive gene for hemophilia to more than one royal house and helped bring down the 300-year-old Romanov dynasty in Russia.

4. After all, the word “monarch” derives from the Greek, combining mono, or “one,” with archon, or “ruler.”

5. Or, as Benjamin Franklin replied, when asked whether the Constitutional Convention in Philadelphia had brought forth a monarchy or a republic: “A republic, if you can keep it.”