Sunday, June 19, 2016

My Influences

As a writer, you are what you read. And as I am heading into the later years of my sixth decade and working on the outline of my sixteenth complete novel, I have come to realize that you are most strongly influenced—the twig is bent, the compass pointed—by what you read at an early age. I was lucky in that I came from a family of avid readers, and my father’s bookshelves were stocked with popular novels, mostly bestsellers, from the 1930s, ’40s, and ’50s, the decades of his young adulthood and so of the generation that immediately preceded mine. He never protected these books and never made rules about which of them I could read. And then, as I was growing up, he subscribed to the Reader’s Digest Condensed Books to feed his own habit,1 and later to the Time Reading Program to bring even more books into the house.2

What I read were mostly adventure and war stories: Herman Wouk’s The Caine Mutiny, about American sailors in the Pacific in World War II; Nicholas Monsarrat’s The Cruel Sea, about British sailors in the Atlantic in the same war; and Garland Roark’s Wake of the Red Witch, about merchant sailors and Dutch traders in the South Pacific between the wars. I also read Thomas B. Costain’s The Black Rose, about a 13th-century Oxford scholar who travels to Cathay before Marco Polo; Mika Waltari’s The Egyptian, about a young doctor in ancient Egypt during a time of religious and political upheaval; C. S. Forester’s Hornblower novels, about a British officer in the Napoleonic wars; John Cheever’s The Wapshot Chronicle, about the disintegration of an old New England family in the 20th century; Joyce Cary’s The Horse’s Mouth, about dissipated London artist Gulley Jimson; and Robert Graves’s I, Claudius, about the early Roman emperors.

For nonfiction, I read Bertram D. Wolfe’s Three Who Made a Revolution, weaving the lives of Lenin, Trotsky, and Stalin into the Russian Revolution; Fitzroy Maclean’s Eastern Approaches, telling of his adventures in Stalinist Russia in the 1930s, then in North Africa with the Long Range Desert Group, and finally in Yugoslavia with Tito; Commander Edward Ellsberg’s On the Bottom, about hardhat divers raising the S-51, an American submarine rammed and sunk off Block Island in 1925; and Ellsberg’s Hell on Ice, about a ship and crew trying to force the Northwest Passage in reverse, traveling above Siberia from the Bering Sea in 1879.

Of course, to become a science fiction writer, I read heavily in the genre, although this was of my own choosing with almost nothing found in my father’s library. I read Ray Bradbury’s Martian Chronicles and Something Wicked This Way Comes; Edgar Rice Burroughs’s The Land That Time Forgot and its sequels, as well as various of his John Carter of Mars and Carson Napier of Venus books; every one of Robert A. Heinlein’s novels I could get my hands on; and each of Frank Herbert’s Dune books, which took me up to my own young adulthood. I also read J. R. R. Tolkien and loved his The Lord of the Rings, as well as E. R. Eddison’s The Worm Ouroboros and Mervyn Peake’s Gormenghast series—but for some reason fantasy as a genre never took hold with me, or not enough to come out in my writing.

These, and many more books that I’ve probably forgotten, I read on my own, outside my assigned books for school and college. And most of them I’ve gone back and re-read at intervals to see if my childish appraisal was worth anything—and sometimes just because I loved the story. Most of them have held up, in my opinion.3

What did all this early reading give me? To start with, a taste for story itself. I learned to love connected and crossing human lives, people engaged in adventures, or travels, or the trials of combat, sometimes working together, more often on opposing sides. I loved desperate characters, being pushed to their limits, fighting against odds, and risking everything. Were a certain proportion of these books melodramatic and overblown? Of course, but these are the stories that stir us, that take those of us who are average, middle class, workaday, householding readers like my father—and then me, in my later working years—out of ourselves and put us into another world with different needs and values. You may call that “escapist fiction.” I call it entertainment and, for a novelist, pure entertainment is high purpose.

Next, these books gave me a taste for the technical end of things, learning and showing how the world works. Stories and books about fighting navies, about making a revolution, or about hardhat diving operations are all about mechanics and techniques. The how is just as important, and as interesting, as the who, what, and why.4 I learned—or rather, absorbed through my reading—the notion that every ship, every weapon, and every tool or technique available to the character’s hand is a complex mechanism, worthy in itself of study and respect. Ships, especially, are complex because they depend not only on the mechanics of the vessel—operation of the engines, precise manipulation of the sails, aiming and firing of the guns—but also on the organization and cooperation of the crew in order to function and survive. A ship without a properly working crew is a dead thing sitting at the dock or run up on the shore; it quickly deteriorates, disintegrates, and turns into scrap.

And then, people can be strange and mysterious, too. The silent and angry Captain Ralls in Red Witch or the brilliant and driven Captain Nemo in 20,000 Leagues Under the Sea5 showed me that every person has a history, a “backstory,” which shapes his or her present life and aims. People and characters are even more complex mechanisms that are put together as much by time and incident as by their own desires and decisions. Even characters driven almost completely by their passions, like Gulley Jimson in Horse’s Mouth or Leander Wapshot in the Chronicle, are forced to live with the consequences of their choices and desires.

One thing that my early reading did not provide for my writing was much in the way of independent female characters. The books I read were mostly stories about men at war, in conflict, or pursuing their own advantage. Women in these books, like Julie Hallam in Cruel Sea or Angelique Desaix in Red Witch, are either distractions or trophies, not companions, not partners, not equals in battle. This was something lacking, which I believe I understood from an early age. I attribute this feeling to the influence of my mother, who was a capable woman, had been trained as a landscape architect, and worked as a draftsman at Bell Labs during the war. She gave up her professional life in order to have us boys, but she remained strong throughout our adolescence and considered her marriage a partnership with my father. And from her I gained a taste for strong female characters like Isabel Archer in Henry James’s The Portrait of a Lady or the Dragon Lady in the comic strip Terry and the Pirates. Strong women, perhaps because of the frustrations they face in a male-dominated society, have always had a dual nature in my personal mythology: partly good, partly disruptive, and sometimes turned to evil. I’ve created my share of female characters in my books, and they always seem to face moral as well as physical choices.

All of this goes to the question: where do my stories come from? How and where was the twig bent? I believe, from my early reading, that I came to like, and now try to create, old-fashioned stories based on action and decision rather than purely internal psychology. Emotional struggle always accompanies dire choice, random incident, and crossed purpose, rather than preceding or driving them. And, true to the old novels with which I grew up, hard choices and great risk mean that death is always on the line.

1. People—especially in the university courses I took and the literary profession I later followed—used to sneer at the condensed books, but I read them innocently enough as an adolescent. I remember first reading Richard McKenna’s The Sand Pebbles in condensed form before going back a decade later to buy the full novel and re-read it. And I read Alistair MacLean’s The Guns of Navarone first in the unabridged paperback and then, for a greater understanding of what I might have missed, compared it word-for-word and line-by-line with its condensation. From this I discovered that the Reader’s Digest editors had performed a delicate work of reduction: quietly removing excess words, multiple modifiers, and non-critical clauses while maintaining every scene, important line of dialogue, and plot point necessary to understanding the story. I think, in some way, doing this side-by-side comparison helped me become a better writer.

2. The Time Reading Program, a subscription service available between 1962 and 1966, mailed three to five books of both fiction and nonfiction to your house every month. The editors chose works of 20th-century authors—about a hundred in all, over the life of the program—which they believed would endure and belonged in any complete library. Much of my outside reading during my high-school years came from my father’s subscription to this series. He also bought the RCA Victor Red Seal Records subscription service, which brought three to five LPs of classical music into the house every month, and that helped bend my taste in music.
       My father’s method of training up boys was subtle. He didn’t force us to read or listen to anything in particular. He rarely even made suggestions—although he did have opinions about what he considered “garbage” among the modern music and books we brought home. Otherwise, he just set the example by being a reader himself, and then he left good books and music lying around for me and my brother to discover.

3. Well, with some exceptions. I’m just now re-reading the Red Witch, in an original edition I found at Alibris. While the story is still there, the writing is antique, cluttered, and an example of what my English teachers called “fine writing”—using five words where one would do, reaching for awkward similes, and choosing rare and erudite words over plain English. As one of my former coworkers would say, “Look, Ma! I’m writing!”

4. Those books also reflected my father’s taste in stories, and he was trained as a mechanical engineer.

5. Which I came to by way of the Walt Disney movie in a theater, when I was six years old, before I read the Jules Verne novel from my father’s bookshelf some years later. The articulate, precise, and yet deeply flawed captain played by James Mason had a powerful effect on me. So did the wonderfully barbed, bulbous-eyed, pre–Steam Punk submarine Nautilus as designed by Harper Goff.

Sunday, June 12, 2016

The Durability of Monarchy

If there is one form of government that seems to be universal and enduring, it is monarchy. Humankind has dabbled with many different systems but always seems to come back to letting a single person and his or her family rule the country for generation after generation—or at least until society falls apart, the economy goes to hell, and somebody thinks he and his followers can do better.

We in America like to think we’re immune to this monarchical tendency because of the U.S. Constitution and the government structure which it prescribes and which we all accept. And true, the system has worked reasonably well for more than two centuries. But we are not immune.

We also like to think of monarchy as old-fashioned, a throwback to a less-enlightened age, to hundreds of years ago, before the rise of the printing press, instant communications, and the democratizing forces of universal education, free speech, and the machine gun. And true, modern social media does seem to guarantee that everyone gets his or her say, eternally, ad nauseam, in whatever forum he or she can erect or channel. But monarchy does not rely on parchment scrolls, coats of arms, and ermine robes. It is not prevented by popular education and endless chatter. We are not immune.

Consider every civilization that has risen on Earth. Almost all of those that were enlightened enough or lucky enough to pass through a period of democracy eventually have reverted to some form of monarchy through crisis or invasion.

The Greeks, who apparently invented the system of voting for and electing leaders, after a couple of centuries devolved into rule by homegrown tyrants who had garnered extensive political power. Eventually, the Greeks were conquered by the Macedonians, marched with Alexander to the ends of the world, and upon his death broke up his empire into a collection of Hellenistic kingdoms founded by his leading generals. They in turn were conquered and ruled by either the Romans or the Persians. Ultimately, the Greek homeland ended up with its own hereditary monarchy. Basileus is the ancient Greek word for “king.”

The Romans were ruled by kings, ending with the Tarquins, for the first couple of centuries after the founding of the city—until the Roman elite threw the kings out. The Senate and People of Rome then developed a republic run by annually elected consuls, the republic’s chief magistrates, supported by a hierarchy of publicly held offices: the cursus honorum, or “course of honor,” open to all men of senatorial rank. The system worked well enough for a couple of centuries, until men of rank and wealth rose high enough to raise their own armies, fight for control, and take over the state. For centuries more under the Caesars, who held the rank of imperator, or “field marshal,” the form of the old government persisted, with meetings of the senate and, for a while, the nominal authority of elected consuls and other offices. But it was a sham that deceived no one, because the emperor had the army at his back and held the real power.

The French had a government of kings and nobles until 1789, when revolution overthrew the system and tried to install a new and totally enlightened democracy. This system failed for having too many architects with grand ideas and too much residual anger at the old regime. After several years of political strife, the state fell to military adventurism under a Corsican general who pretended to be spreading the benefits of revolution to the rest of Europe. The French themselves named Napoleon their emperor fifteen years after their own revolution. And the style of “emperor,” although not always the power, continued in French government until late in the 19th century. France today is a democratic republic, although it is now rapidly shedding its sovereignty to an unelected European Union run by social architects with grand ideas.

The Russians had their own kings, or tsars, until 1917, when the three-hundred-year-old Romanov Dynasty succumbed to the February Revolution and was replaced by a provisional government centered on the Duma, or “parliament,” and led by liberal aristocrats and centrist Social Democrats. That new government lasted until the Bolsheviks under Vladimir Lenin took control in the October Revolution (November in the western calendar) later that same year and installed the Soviet, or “workers’ council,” system. The Soviets were supposed to be democratically elected, and for decades they paraded under that façade. But after five years of civil war and a slowly congealing “dictatorship of the proletariat,” the party united under a single ruler, Stalin, who was afterward sometimes referred to as “the Red Tsar.”

The Chinese went from an imperial form of government in 1912 to a brief republic that, after years of weak and vacillating administration—not helped by foreign intervention and the Japanese invasion—succumbed to the Communists in 1949. As with the Russians, the Chinese Communists practiced a dictatorship that was effectively no different from rule by a totalitarian emperor and his unelected mandarins.

The Germans, or at least the parts of central Europe that eventually became Germany, were ruled for centuries by a royal family, the Hohenzollerns, which passed through a number of stages—counts, burgraves, electors of the Holy Roman Empire, and finally emperors themselves—until the end of World War I. The Germans then tried a republic, but when it sank under its own weight and that of war reparations imposed by the French and their allies, the Germans turned fourteen years later to the National Socialists and a führer, or “leader,” who was granted absolute control more complete than any king’s.

About the only advanced countries that retain their hereditary monarchies in a modern form are the English and the Japanese. The English had a long line of kings and eventually queens—although not always from the same family, or even from the same country—and early on they adopted a form of elected representation in a parliament, which existed mostly to grant the monarch the taxes needed to fund the government. That worked sometimes well, sometimes badly, until the accession of Scottish-bred Charles I, against whom the English fought their great civil war and ended by beheading him and submitting to the near-absolute rule of Oliver Cromwell as Lord Protector. That worked out so badly that they brought back the monarchy and have kept it ever since, on a shelf and under glass, as a kind of reminder.

The Japanese have had an imperial family since the beginning of their history, in about the sixth century of the Common Era. For a span of more than six hundred years, however, from 1192 to 1867, the real power resided with the military caste of samurai under the shogun, or hereditary “military commander,” with the emperor functioning as a ceremonial puppet. The emperor was returned to authority in the Meiji Restoration of 1868, which some twenty years later established the Imperial Diet, or parliament. However, military influence always remained strong in Japan. In 1947, after World War II and under the American occupation, the country officially became a constitutional monarchy, like the English.

In a constitutional monarchy, parliament is the seat of government under control of a prime minister and his or her cabinet of secretaries—all of whom are elected representatives from the party or coalition holding the greatest number of seats. Prime ministers have ruled in England for more than two hundred years, and the king or queen has functioned in a merely ceremonial capacity. This is rather as if the United States preserved all power in the Speaker of the House and his or her party cohort but maintained a line of national celebrities—say, Henry Fonda, followed by his children and grandchildren—to preside in the White House as masters of ceremony, holding state dinners, giving out medals, and setting the tone for polite society.

The United States swore off monarchy after its own war of independence, ending British rule in 1783. The president of our republic—whose office and forms of address were purposely designed so as not to resemble those of a king—has waxed and waned in power, sometimes taking on near-dictatorial authority, as Lincoln did during the Civil War, or Roosevelt during the Great Depression and World War II. But the forms of the Constitution have held—up until now. Meanwhile, a rising centralized government in Washington, DC, has taken over more and more functions in the name of interstate commerce.1

While legislation originates in Congress, the executive branch is charged with enforcing the laws that Congress enacts. For the past half-century or so, Congress has taken to writing larger and ever more complicated pieces of legislation, trying to cover entire spheres of social interaction—water and air quality, communications, medicine—and provide for every eventuality. But these massive documents leave on-the-ground interpretation to the appropriate departments of the executive branch, whose rulings and edicts are moderated only by legal challenges resolved through the court system. This has pushed more and more power into the hands of appointed—not elected—officials who are loyal only to the president and his appointed cabinet members.

It is questionable how far this tendency toward consolidation will go. Certainly, the United States will never have a king, just as the Romans were so proud of having thrown out the Tarquins that no subsequent ruler, however powerful, dared claim the title and trappings of “king.”2 But as history has shown, monarchy does not depend on a word. “Field marshal,” “leader,” or “chairman” will do just as well. The title “president” will serve, too, with the qualifier “for life” spoken in a whisper. It is the amalgamation of power and the ability to pass it on to your chosen heirs and descendants that makes a kingship.

I have no love for monarchy … although the idea of having a young person raised to think of the nation as his or her special responsibility, properly trained in the ways of good government and stable policies, and guided by generations of his or her family’s administrative retainers—such a fairytale does seem like the way to preserve the best of a society’s traditions. It would certainly be better than a gaggle of clever, power-hungry politicians who can promise the electorate a grab bag of spoils, or national greatness, or whatever else passes for currency with the fickle mob.

One would think that the ability of a democracy to change course quickly, to throw out an administration when it heads in the wrong direction, and to choose a new path when an inspired leader can devise and communicate one—that such attributes of a democracy would make it the ideal form of government. Such a system would be self-correcting, always seeking the best for the majority of its citizens. And such a fairytale does seem like the best way to steer a growing and dynamic society through troubled, turbulent, or technologically innovative times.

But we don’t live in a fairytale. We live in a world where human beings seem to be disproportionately divided between those who want to control their neighbors and the neighbors who want to be left alone. Those who want control do so because they think they have better ideas than anyone else, or because control is an obvious way to obtain money, prestige, and security for themselves and their families. Those who want to be left alone desire a stable social and economic order in which to practice their skills—whether farming, shoemaking, or trading—and raise their children, and they don’t much care who is in control or what the government does, so long as good order is maintained.

For both these types of human being, a monarchy backed by some credible semblance of authority—whether it’s the “divine right of kings” or the loyalty of the legions and a praetorian guard—will satisfy their needs. For the power-hungry, supporting a hereditary monarch means not having to worry oneself over which other politician will climb to the top of the heap and take control, while ensuring that a measure of shared power and delegated authority will flow to those who endear themselves or make themselves useful to the royal personage and his or her household. For the complacent neighbors, a hereditary monarchy means not having to think too much about who is offering what incentives in the clamor for support and which path is the right one to choose in the midst of social and economic confusion. Or, as the rabbi says in Fiddler on the Roof: “May God bless and keep the Tsar … far away from us!”

Monarchy is not a great system of government. But allowing a single man or woman and a single family to take control of the country and share out power within a trusted circle seems to be the most stable … enduring … recurring system of government. It works after every other system has broken down, chaos has ensued, and the dogs have stopped fighting over the scraps.

1. This is actually in the Constitution, Article 1, Section 8, Clause 3: “The Congress shall have power … to regulate commerce with foreign nations, and among the several states, and with the Indian tribes.” Since every aspect of the country’s social and economic organization can, one way or another, be construed as affecting commerce between the states, the Commerce Clause has provided justification for the federal government to involve itself in nearly everything.

2. In times of crisis, the ancient Romans would occasionally name a dictator, from the Latin for “one who dictates.” But while this role was sometimes granted for an indefinite period, even “for life,” it was supposed to be temporary and was never intended to be hereditary. Julius Caesar was assassinated because, in part, he seemed to be aiming at a crown for himself.

Sunday, June 5, 2016

A World Without Numbers

All of our human mathematics, all of our arithmetic, goes back to a simple idea: that the world with which we concern ourselves is made up of near-identical and so countable things. One person, two people, three people … One sheep, two sheep, three sheep … One stone, two stones, three stones …

When you can see the world in such interchangeable units and read them as whole numbers, the integers, you are on your way to the arithmetical actions of addition and subtraction, multiplication and division. Three people stand on a street corner, then two more join them, and now five people stand there. I have ten sheep, but the wolf eats two of them, so now I have eight sheep. Three of my ewes each bear two lambs, and now I have six lambs.

Only after you discover the integers do you come to the fractions. I have two bags of apples, but part of one bag is rotten, so now I have, let me count them … a bag and a half. And from there, it’s just a few more steps to the concept of zero, then the negative numbers, exponents, square roots, algebra, trigonometry, analytical geometry, and calculus. Ah, calculus—which is the art of quantifying smooth curves, continuously varying rates such as acceleration and deceleration, and other measures that can change shape continuously and so do not yield results in easy integers.
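To put that last point in the standard textbook form (nothing unique to this essay), the instantaneous rate of change, say the velocity v(t) of a falling stone whose position is x(t), is defined by a limit taken over ever-smaller slices of time:

\[ v(t) = \lim_{\Delta t \to 0} \frac{x(t + \Delta t) - x(t)}{\Delta t} \]

The interval Δt shrinks below any countable grain, which is exactly why the answers stop coming out in whole numbers.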

But what if our brains, our eyes, and our sense of proportion did not start out by seeing units at all? For in truth, no two things in the universe are exactly alike. One stone is not like another, because the first stone you pick up might weigh 212 grams and be made of quartz, while the next is 310 grams and made of feldspar. One sheep is not like another, because to start with, some are rams, some lambs, some ewes. Or, with a breeder’s eye, one can see that this sheep has the DNA from one genetic line, while the next sheep diverged from that line of inheritance and has brought in other gene sets in every generation.1 True, most of us don’t see such divergence so clearly in our animals, but try telling that to the people who judge dogs at the American Kennel Club!

You might say that we would still count the years, the days, and other such divisions of existence, down to the hours and minutes in a day. But remember, it was not until the invention of clocks that most human beings had any notion of passing daytime or nighttime beyond the “canonical hours” set aside for certain prayers. And in the experience of most early peoples, the division of each year into different seasons appropriate to planting and harvesting, roundup and slaughter, was more important than counting from one year to the next. From a certain point of view, any divisions of time are meaningless except for the all-important “now.”

If we lived in a world—that is, if our brains made such distinctions of precision and granularity—where we saw no two things as being alike enough to count, then human-style mathematics might never have gotten started. Of course, we would eventually have run into indistinguishable units, the groundwork of integers, in studying atomic structure. There, for all we can discover, one proton is exactly like another, one electron like another, one neutron like another, and so on through the catalog of particles in the Standard Model.2 In such a universe, one proton makes a hydrogen atom, two make up helium, three lithium, and so on through the Periodic Table. But if humans had possessed such granularized, distinction-making vision that we never discovered numbers per se, then we probably would never have developed the theories or the instruments needed to study atoms, subatomic particles, or quantum physics. We would live in a completely different conceptual universe.

A world so granularized that it could not identify a likeness of things and group them would also be a world without descriptive nouns. One could not speak generically of a “tree” but only of a “red poplar” or a “white birch.” And not even that, because those are species—a grouping of similar individuals. Instead, one could only speak of the “old, red-colored, vertical-standing life form which my grandfather planted in the north corner of the yard.”

It goes without saying that in this numberless world we would have no collective nouns, no words with which to identify a group of things, because those things would be so particular and unique that they could not be gathered into groups. It would be a world with barely any words at all. Instead, we would refer to the unique shape, composition, heritage, smell, or taste of every object, like a dolphin or a bat trying to describe its environment only through the echo-return of its own pings and squeaks.

It’s hard to imagine any thinking creature that did not or could not examine individual cases, make associations through likeness of dominant or noticeable features, and group these cases—and so create the occasions for counting and other mathematical manipulations. Certainly, a creature or a person with extremely primitive awareness, an incurious nature, or a fixed attention to a particular detail might make comparisons solely through examination of unique features. This tree is not that tree, because this one has a fuller, more rounded shape, while that one is rather spindly—like a family judging Christmas trees on a lot. Or, this tree is not that tree, because this one has green foliage, while that one is colored yellow and red—like a traveler enjoying the turning leaves on a trip through New England in the autumn. An amoeba, with no thinking apparatus at all, might consider every crumb of food that it encounters—or every change in the temperature, salinity, or acidity in the water surrounding it—to be a unique and nonrecurring event.
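The point can even be put in a programmer’s terms. Here is a toy sketch in Python (the flock data are invented purely for illustration) showing that counting only becomes possible once we decide which differences to ignore:

    from collections import Counter

    # Each animal is a unique individual; no two records are alike.
    # (Hypothetical data, invented purely for illustration.)
    flock = [
        {"id": 1, "sex": "ewe",  "weight_kg": 61.2},
        {"id": 2, "sex": "ram",  "weight_kg": 84.7},
        {"id": 3, "sex": "ewe",  "weight_kg": 58.9},
        {"id": 4, "sex": "lamb", "weight_kg": 22.3},
    ]

    # With maximal granularity, every distinguishing feature kept,
    # each animal is its own category, and counting tells us nothing:
    by_individual = Counter(tuple(sorted(s.items())) for s in flock)
    print(len(by_individual))     # 4 categories, each with 1 member

    # Only by discarding differences (here, everything but sex) do
    # groups appear that are worth counting at all:
    by_kind = Counter(s["sex"] for s in flock)
    print(by_kind)                # Counter({'ewe': 2, 'ram': 1, 'lamb': 1})
    print(sum(by_kind.values()))  # 4 ... "I have four sheep" at last

Change the rule for which features count, and the resulting numbers change with it; the integers live in the chosen likeness, not in the sheep themselves.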

You could almost say that the act of noting and comparing differences and likenesses, cataloging traits, generalizing about them, and making associations is the origin not only of numbers but also of words. Numbers and words are the beginning of our thinking processes. They are the basis of perceiving our surroundings and separating ourselves from our environment and experiences.

Without them, a person would be an automaton, reacting to each stimulus, making each decision anew with every event, living in a void absent of time and memory. This is the reality of one-celled organisms, vegetables, filter-feeding mollusks, and insufficiently programmed artificial intelligences. If we had no numbers, no groups, no names for things as classes but only as unique incidences—we would not be human. And if we were to encounter such creatures, say, out among the stars, we would not be able to communicate with them, much less share in their viewpoint and experiences.

We already suspect that life—if it does exist elsewhere in the universe—will come in many strange and wonderful forms. The same must be said for the kinds of intelligence that life spawns.

1. Ah, but if we want things to count, you say, we can count generations! But can we, really, when a mother can have a child every year or so? To which generation, then, does the first child in the family belong, compared to the last who comes a dozen years later? The situation is even more complicated when different fathers may be involved. You can make a distinction that the children are all related genetically in being the immediate offspring of one mother, rather than being among her grandchildren or great-grandchildren. But on an individual level, the concept of “a generation”—the age cohort or time period into which I was born—becomes granularized. I may have been born to the Baby Boom generation, but because my parents delayed the birthing experience longer than most, they were older and more established than the parents of many of my peers, and so they died sooner than did the parents of most of my friends. And if you view an extended family of aunts and uncles, nieces and nephews, and unrelated grandparents and in-laws, generational lines can become quite blurred.

2. And is not the Standard Model becoming so varied and fragmented that we might eventually be left with a universe of strangely unique particles and essences?