Sunday, August 13, 2017

Platonic Forms in Everyday Life

In several of Plato’s Dialogues he has Socrates address the notion of “ideal forms.” This is the theory that we can recognize everyday objects because we hold in our minds—and, in some interpretations, because there separately exists, perhaps somewhere in the stratosphere—a perfect form or prototype of the object. According to this theory, the physical thing before us is just an imperfect copy of the ideal form. Thus, for all the horses on Earth, from the dog-sized “dawn horse” Eohippus up through the race-winning Quarter Horses, wagon-pulling Clydesdales, playground ponies, donkeys, and zebras, there exists somewhere in heaven the perfect Horse, of which these living examples are only pale and imperfect copies. Similarly, for all the oaks, maples, birches, cypresses, and bristlecone pines, there is an ideal Tree somewhere in an imagined forest that all of these specimens are trying to be.1

As I’ve noted before,2 when it comes to living examples, there is no ideal form toward which the various species of a genus or family are striving. Each one is a product of adaptation to a niche in the environment, whether by accidental mutation or selective breeding. Mutations gave rise to the Eohippus, donkey, and zebra. Judicious breeding gave us the Quarter Horse for speed and the Clydesdale for pulling power. Sure, when the average person thinks of a horse—just daydreaming, without context or the prompting of a picture—he or she probably pictures Secretariat, Seabiscuit, or some other famous racehorse. But that no more makes these celebrities an “ideal form” than movie stardom has made Marilyn Monroe or Scarlett Johansson the ideal woman anywhere but in the adolescent imagination.

The same goes for trees and every other living thing. Those we find in the wild have been shaped in every feature and part by adaptation to some aspect of the local environment. Those we find in the barnyard or in our homes have been removed from the wild and selectively bred—not always wisely—for some feature of appearance, intelligence, temperament, strength, or taste. There is no ideal form of a tree, a dog, or a beef cow anywhere.

So does the ideal, Platonic form have any meaning in life, except as a bit of naïve Greek philosophy? I can think of a couple of examples.

The first is in the arts. Michelangelo famously said of his statues that he did not carve them so much as release the figure that already lived in the marble. Well, maybe. And to the extent that a particular piece of recrystallized calcium carbonate may have had streaks, veins, and fissures, or the block itself may have had some critical defect—like the awkward indentation that yielded the bent knee of the David—this may well be true. But other than that, I’m pretty sure Michelangelo’s figures resided first in his own mind, an image of what he was about to carve, and he merely removed stone, first in big chunks, then in grains and flakes, and finally in softly polishing strokes, until the figure he had conceived stood before him.

Thus every artist—painter, writer, musician—pursues an image, a thought, or a sound that he or she carries in the mind and works to reproduce on canvas, paper, or the keystrokes and fingerings of a chosen instrument. However, the image or thought might not always be as clear as Michelangelo’s stone people. I know from experience that my conception of the book I’m writing usually remains hazy—just big chunks somewhere out there in the fog—until I sit down to compose and actually live vicariously through the action and hear in my mind the dialogue as my fingers are flying over the keyboard. Even a fairly extensive outline is, for me, just a suggestion of where the book might go. Many times I have carried a scene in the outline that I thought was fixed, and the writing of which would be practically a job of just finding the opening line and then fleshing out the details—only to discover that, when I sat down to experience the action at first hand, it wanted to go in another direction and cover different ground. And I’ve learned to trust this instinct, because the scene as it gets written is usually richer and more satisfying than whatever thought I had in mind before.

Another, and perhaps better, example of the ideal form is found in karate. The style I practice, Isshinryu, relies heavily on the katas, or forms, as developed and adapted by the master, Tatsuo Shimabuku. These are practice routines for an individual, laid out as a series of punches, blocks, kicks, and other movements in sequence against the imagined attacks of an invisible opponent. In the dojo I attended back in Pennsylvania—now almost fifty years ago—learning and mastering the hand-and-foot and weapons katas were the main course of study. Yes, the practice included sparring, or kumite, with a partner, where blows were pulled up two inches short or delivered as a light tap. Sparring gives the student a feel for the timing, reach, ranging, and reactions of a live human attacker. But the essence of Isshinryu was carried out in the katas.3

As a creation in the mind of Master Shimabuku, each kata is an ideal form, the perfect combination of stance, movement, balance, and rhythm representing a certain aspect of the style or emphasizing a certain pattern of defense. There is only one way to perform the kata. Or is there? When I was in training, we practiced the twist punch, with the hand rotating from a palm-up position at the hip to a palm-down position in the last quarter of the arm’s extension. This was my sensei’s teaching, and he had studied personally with Master Shimabuku. But after the master died, his sons took over the style. They decided that the twist punch was archaic or impractical or something—I don’t know their minds—and so introduced the vertical punch, in which the hand moves like a piston with the knuckles aligned vertically in a single plane from beginning to end. The vertical punch is easier to throw and master, more practical in an actual brawl, and more in keeping with Isshinryu’s “one-heart-way” teaching—short and direct. But it’s not very elegant and, in my opinion, not as good as the twist punch for keeping your wrists flexible and exercising your forearms.

So … were the katas with all those twist punches the “real” mind of the master? And is the vertical punch a later corruption of the ideal form? Who can say? I do know that old movies of Master Shimabuku, taken when he visited this country in the early ’60s (you can see them at the site referenced above, but they are small and blurry after being copied over from eight-millimeter film into digital files) show him throwing punches that are sometimes twisting, sometimes vertical. In the same way his basic stance, the seisan, sometimes has the back foot parallel with the front, sometimes turned out—and our school taught parallel feet as if they were Holy Writ. Maybe the master had gone so deeply into the idea of Isshinryu that it didn’t much matter if his punches and his stances were one thing or the other. Certainly, the kata would then depend on its shape in the mind of the student: what were you taught and how closely are you following it? So the “ideal form” of the kata really is just an expression of the school’s current practice and the student’s understanding.

A further example of ideal forms in everyday life draws on something I have learned from taking music lessons. As a boy, I played—well, attempted to play—the trombone. But I never learned the underlying structure of Western music itself, with its twelve notes, arranged in whole and half steps and laid out in the black and white keys of the piano. I never learned about key signatures and how they affect which notes to play; so my playing was a disaster. I could pick out the notes on the staff and in the positioning of the trombone’s slide, but I didn’t understand their relationships.4 After I retired from the business world, I determined to fix this hole in my education. I bought a keyboard instrument and started taking formal lessons. And one of the things that has come home to me through my teacher is that, although a song might be written down on the page in clear, precise notation, this isn’t always the way you play it.

I’m not talking here about the key signature, because that’s pretty well established in music. But sheet music as written is not always an exact copy of the composer’s original musical thought, his or her ideal form. For instance, the person transcribing the music is just as prone to making errors as someone typing up a manuscript. So my teacher, who has edited music scores professionally, is constantly correcting chords called out in my music book: “That’s not a dominant chord, it should be a major.”

And then, every piece of sheet music—especially those lead sheets in which most popular music is published—shows both the melody and the harmony, and the harmony can be further broken down into the root note and the accompanying chord (third, fifth, and seventh notes). A person playing the piece as a solo might play the melody in one hand and the chords in the other, or he or she might “voice lead” the song—stacking the harmonics of the chord’s root below the melody note in the right hand, and then playing the root note in some rhythmic variation or a “bass walk” for timekeeping. And when playing as part of a group, the keyboardist might perform just the bass walk and chords, letting a singer or lead guitarist carry the melody. Or if the group already has a bass player, the keyboardist might not even bother with the root at all. So the song itself, that ideal piece of music written on the sheet, might change according to where and when it’s played. And we’re not even talking yet about changes in tempo and jazz improvisation.

Chords themselves are subject to much variation, too. For example, the harmonics around the root can be played on the keyboard in the order third-fifth-seventh or inverted as seventh-third-fifth, creating the correct notes but with a different sound and feel. And the player might have to move quickly between two chords, or adapt the harmonics when voice leading. So it’s always acceptable, my teacher tells me, to drop the fifth note. The chord may also be marked to play with a ninth, a sixth, or some other note included—and then usually dropping the fifth—which creates a wholly different sound. And finally, pairs of chords that are commonly associated in music often shift from one to the other through the movement of just a single finger from note to note, without changing the whole hand. So the “ideal form” of every piece of music really is just an expression of the song’s setting and the player’s immediate needs.
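For readers who like to see the arithmetic, the chord manipulations described above can be sketched in a few lines of Python. The note names and the dominant-seventh intervals are standard music theory; the helper functions and their names are my own invention, just for illustration:

```python
# Note names for the twelve semitones, starting from C.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Semitone offsets from the root for a dominant-seventh chord:
# root = 0, major third = 4, perfect fifth = 7, minor seventh = 10.
DOM7 = [0, 4, 7, 10]

def chord_notes(root_name, intervals):
    """Build the note names of a chord on the given root."""
    root = NOTE_NAMES.index(root_name)
    return [NOTE_NAMES[(root + i) % 12] for i in intervals]

def drop_fifth(intervals):
    """Omit the perfect fifth (7 semitones), as my teacher allows."""
    return [i for i in intervals if i != 7]

def invert(notes, n=1):
    """Rotate the voicing: move the bottom n notes to the top."""
    return notes[n:] + notes[:n]

print(chord_notes("G", DOM7))              # G7: ['G', 'B', 'D', 'F']
print(invert(chord_notes("G", DOM7)))      # inverted: ['B', 'D', 'F', 'G']
print(chord_notes("G", drop_fifth(DOM7)))  # fifth dropped: ['G', 'B', 'F']
```

The same four pitches, rotated or thinned out, give the “correct notes but with a different sound and feel” that the paragraph above describes.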

Of course, karate katas and popular songs are not physical objects, such as Plato was describing. These are sequences and ideas that start out and live in the human imagination and travel from one head to another by means of crude copies: physically demonstrating the movement, or humming and playing the tune, or making abstract notations on paper. But even there, in the mind of the karate master or the music composer, the process of evolution—yielding subtle changes in structure, timing, and sequence—works against any fixed, immutable form that might live in the stratosphere or in heaven forever.

1. Of course, on an individual and personal level, this is a perfectly valid—if somewhat obvious—point of psychology. Each of us does build up, in our own minds, based on our varied experiences, an idealized image of a horse or any other object of which the world has offered us repeated examples with minor variations. When we think of a horse without a living specimen before us, we picture this mental composite. And the image is less specific and more fanciful the farther a person is removed from the world of the barnyard and the paddock. It is this sort of mental extrapolation that lets woodcarvers and painters create the horses that children delight to ride on carousels.

2. See, for example, The Point of Evolution from April 27, 2014.

3. The forms are so complete a statement of the style that I can still use them to practice Isshinryu karate fifty years later, at least for their benefit in cardio exercise, balance, and coordination, if not for actual fighting skill. Although I haven’t stepped inside a dojo in all that time, I can still throw punches, blocks, and kicks with relative speed—although probably not to any modern teacher’s satisfaction.

4. That those sharps and flats at the left end of the staff in the first line of music might affect how you were supposed to play all the other notes on the staff further down in the piece—this was a mystery to me as a boy. My teachers had either assumed I understood the relationship of the different keys in the Circle of Fifths—a bit of arcanum, like the Rosetta Stone, that they never actually discussed—or else they taught the key signature as a kind of just-so story. And as a rational young man, I tended to ignore anything I didn’t understand.

Sunday, August 6, 2017

Blooms in Season

Our condominium has lower-level elevator lobbies adjoining a garage structure that has an open-air plaza with swimming pool and tennis courts built on top of it. This structure connects the three groups of buildings in the complex. Ramps between the actual garage floor and each of the lobby floors—about a four-foot height difference—wind around large planter areas. The architectural plan calls these planters “atriums.” Each one is a lined concrete box, about twenty by thirty feet in area, with its own irrigation and drainage systems and glass walls, but is open to the sky at the plaza level. Think of the atriums as life-size terrariums.

As a starting point for landscaping, the atriums are a blank slate. They are not visually or organically connected to the rest of the property, where the grounds are a mix of planned flowerbeds and potted trees on the garage-top plaza level; stretches of ivy and cultivated rockscapes along the driveway and around the outer perimeter of the garage base and buildings; and at the back of the property, large unplanned areas of rock cliff, eucalyptus trees, grasses, thistles, and weeds. All of the landscaping, or lack of it, has been turned over to a commercial contractor for monthly maintenance. The grounds contractor will do whatever the Architecture or Landscaping committees or—failing any clear directive from these resident committees—the complex’s general manager and the city fire marshal tell them to do.

Given these conditions, we could choose to grow wheat or a Christmas tree farm in the atriums—and indeed, several of them support sizable and long-standing trees chosen from among the decorative varieties. But since these adjuncts to the lobbies are the only part of the complex that every resident sees every day, people pay them special attention. Almost everyone believes that, because these planter boxes are essentially on our front doorstep, they should reflect the community’s artistic standards, our property values, our status as a “luxury condominium,” and our collective taste in horticulture.

In the past, we’ve had various professional and semi-professional landscape architects step in to create artistic designs for the atriums. The last was a noted professional, active in the local area, who created a “river” theme for these enclosed spaces. The main feature is an abstract French curve filled with jagged pieces of electric-blue glass, intended to suggest a jungle or forest stream. This pattern is bordered on one side or the other with reciprocal curves holding rounded, gray pebbles, meant to look like banks or shoals. Along these visual streams, the design originally called for green, vaguely tropical shrubs1 and, in one atrium, a stand of bamboo.

Of course, as with everything else open to discussion in a condominium association, a large and vocal group immediately hated the design, especially the bright-blue strips. Another focal point of discussion was that the green plants weren’t pretty enough. And when one of the shrubs suddenly broke out in slender stalks with clusters of tiny yellow blooms for about a week, the criticism increased. Even the flowers weren’t very pretty!

As I said, the grounds contractor will do whatever they’re told. The landscape architect who designed, sketched, and painstakingly specified the plantings around each of these faux Amazons was long gone from the site, and the condo association had made no contractual arrangements to maintain the plantings with the design for which they were intended. So the tropical shrubs were soon tossed out and a collection of colorful azaleas, hydrangeas, and other flowering plants was installed.

And for the first month or two in spring when they bloomed, everyone said how nice the atriums looked. But spring fades, and now we have stone rivers with not-so-tropical, not-so-pretty—in fact, kind of lonely and spindly—plants growing beside them. In another couple of months, when the rains come back and people are spending more time indoors, the agitation will begin for more “color” in the atriums. And then soon enough it will be Christmas, and the Great Poinsettia Debate will begin again.

As a lapsed libertarian, I generally consider myself a “little-D democrat.” I’m not an active party person, but I believe that the mass of people are pretty sensible and, if allowed to converse and find consensus among themselves, will usually come up with a workable solution. That is, I generally trust the wisdom of crowds2—at least when they are not in an agitated state.

As evidence, I present the paths that generations of walkers have scoured through the woods. If a hundred or a thousand people walking across a hillside are left to find their own way, flattening the grass, the new green shoots, and the dirt as they go, they will most likely tread out a line that combines the shortest possible distance with the gentlest possible slopes and the fewest necessary switchbacks. Compare this to an artfully designed park, where some architect has laid out concrete paths across the grass. Architects like geometry, so they create right angles and pleasing diagonals. But come back in a year or two, and you will find bare paths in the dirt where the people actually doing the walking have taken shortcuts and found their own least resistance.

As further evidence, consider the free-market system, where the wisdom—or at least the fickle tastes—of the public decides what gets produced and put onto store shelves. Yes, there are glitches: sometimes public tastes change immediately after a product has been conceived, researched, designed, produced, and distributed. This sometimes results in waste going into a landfill somewhere. More often, though, the changing tastes that have orphaned a product line will result in lower prices that eventually attract somebody, anybody, who doesn’t care about taste and can still use the underlying product. And yes, popular products often cost more than we would like, or go out of stock sooner than we would expect, because people flock to these products rather than to the less desirable brands and designs. And finally, yes, a lot of products get made for which no one has a rational excuse—for example, see Bernie Sanders’s famous “twenty-three brands of deodorant.” But somebody must be buying each one of those brands, or else they wouldn’t get shelf space for long.

On the surface, it might seem that the capitalist system pushes all these brands and taste choices because the rich white men behind it are either evil or stupid. These men must be evil because they create unworthy desires that foment in the public mind a consumerist lust and run the average American buyer around in blind circles following the latest fads. These men must also be stupid because they lack the foresight to design that single, most serviceable product which everyone will want at a price everyone can afford and then supply it to the satisfaction of all. This current confusion and profusion of product choices must be a bad thing, right? Especially, as Sanders said, “when children are going hungry.” A command-and-control economy run by wise and benevolent men in the employ of the state always seems like the antidote to this waste and confusion—until you examine the store shelves in the old Soviet Russia or in today’s Cuba and Venezuela.

Socialists will say that the lapses and shortages resulting from their system are attributable to the stubbornness of non-government producers. Socialists believe that recalcitrant farmers, lazy factory workers, and negligent store clerks simply refuse to follow government dictates about how much food and other necessities to produce, at what cost, where to sell them, and at what price. But Margaret Thatcher was wrong: Socialism doesn’t fail because “sooner or later you run out of other people’s money.” Socialism fails because resources are concrete and finite, while desires are illusory and infinite. Sooner or later you run out of people willing to provide goods and services in the quantities and at the prices that some government middleman—who has no actual responsibility for matching production to consumption, and who pays no penalty for being wrong—decides constitute a “reasonable” amount of stock to put on the shelves (that is, enough to satisfy everybody) at a “fair” price (that is, low enough for everyone to buy as much as they like). Sooner or later, the producers get tired of being the goat and go out of business. In all the societies that try socialism, the producers and distributors who survive are doing business at the point of a gun.

But in the matter of flowering plants, I’m not so sure little-D democracy works. We end up with the stub-ends of floral designs and with flowers that go dormant for most of the year. But this might be the failure, not of democracy itself, but of a landscaping system that listens to a few loud voices who want “color” in their gardens year-round and don’t understand growing seasons and blooming cycles. They don’t realize that most plants have flowers, not just to be pretty, but in order to sustain reproduction as part of a complete life cycle that includes gestation and dormancy for the plant’s own benefit. And these wiser heads screaming for more “color” take no responsibility and pay no penalty when the atriums look like a mess.

But the situation is really not such a tragedy. In the off-season, the rest of the residents suffer only from a lack of exciting and vibrant color—which is a situation most of us seem able to endure. After all, it’s not as if we had to eat the flowers.

1. Although I am the son of a landscape architect, I confess that I can’t tell one decorative plant from another, no matter how many trips my mother took me on to the Arnold Arboretum in Boston, or the photography shoots I have made with my brother to the Strybing Arboretum in San Francisco.

2. See People Ain’t Stupid from September 2, 2012.

Sunday, July 30, 2017

Rules or Values?

Is humanity, taken as a whole, good or bad? To my mind, this is a nonsense question. It cannot be answered as an either/or proposition. It’s like asking if life itself is good or bad. These two questions—and many others besides—can only be proposed as both/and, then followed up by stating under what circumstances and to what degree each term applies.

We all know of good people in the world: open, honest, unassuming souls, ready to help others, respecting the norms of their culture, and otherwise minding their own business. We know of bad people, too: narrow, conniving, battered souls, always looking out for themselves, defiant of any norms or values, and ever ready to catch others in a trap. And we know people who are a mix of both qualities: trying to be good but with occasional lapses, generally bad but sometimes engaging in acts of kindness, or at least personal tenderness.1

None of us has x-ray vision into the souls of other men and women, and generally we can know a saint or a villain only after that person has acted—because everyone lies at times, with big fabrications and with small untruths, told out of shame, modesty, or vanity. Since this is the case, it is only prudent to offer everyone you meet a measure of courtesy, provisional respect, and the trust that they may have good intentions until they prove otherwise. This is not just good manners but a recommendation for good health: paranoia drains the soul and keeps you from sleeping well at night. What you’ll gain in a positive outlook and a life free of confrontation will more than make up for whatever momentary hurts, stolen goods, and lost opportunities you might occasionally suffer at the hands of the villains you meet.

All of this is preamble, I think, to the larger question of how people in a group of whatever size should be ruled, governed, or led upon some mutual venture. The choice seems to be—and here again it’s a question of both/and, under what circumstances, and to what degree—whether you want to give a lot of specific orders and write a great number of rules detailing all possible situations and potential crimes, or you want to evoke and demonstrate a set of values that will guide the individual behaviors of your citizens or followers.

Frank Herbert had the flavor of this in the Dune novels, with Duke Leto’s training of his son Paul: “Give as few orders as possible. Once you’ve given orders on a subject, you must always give orders on that subject.” In other words, it’s easier to lead by example and through the transmission of values than by giving specific instructions. This approach trusts in the intelligence and good will of your followers.2

Still, rules are sometimes necessary. Even “men of good will” need to know the local speed limits and approved parking spaces so as not to harm or inconvenience others; learn the guidelines about socially acceptable limits to choices like public exposure and personal practices; and have the norms regarding economic transactions, business practices, and other public acts spelled out, in case there’s any dispute. But rules should represent those limits beyond which mere personal choice becomes a public infraction that the local populace will not tolerate. This is why most laws incorporate sanctions and penalties: These rules are just too serious to be left up to gentle reminders from well-intentioned passersby and public shaming in the town square.

Orders are necessary, too, especially in emergencies and under special circumstances that the local magistrate, governor, or previously selected group leader could not anticipate and whose outcome he or she cannot predict. When catastrophe looms and chaos descends, when the peculiarities of the situation transcend the rules and regulations established to guide people in normal times, then everyone involved looks to a leader—whether formally elected, newly appointed, or naturally emerging—to tell them what to do, what to expect, and how to survive.

But, as Duke Leto reminds us, orders given merely for the sake of establishing one’s authority can quickly become burdensome. Worse, they can destroy personal initiative and hinder the creativity and responsiveness of subordinates and supporters who might be better informed or closer to the problem than the leader him- or herself. Besides, a leader who is always giving orders misses the opportunity to be pleasantly surprised by the native genius of his or her followers, or to encounter areas of failure or lack of direction that can be used as “teachable moments.”

The leader’s actions and directions in those moments, added to his or her thoughts expressed in speeches, private conversations which are meant to become public knowledge, and published writings, as well as the examples given through his or her own behavior, become the basis for values taught, learned, and transmitted. If the leader has gained the respect—and that’s another whole discussion!—of his or her subordinates, supporters, and followers, then they will watch and listen closely to see what actions are now appropriate, what behaviors will be rewarded—if only with a smile or a kind word—and what activities will receive the leader’s censure and punishment.

Of rules, orders, and values, the values that a member of the group or the public learns and adopts are the strongest governor of present and future actions.

It’s one thing to give a child the rule “Don’t hit your sister.” Spoken with sufficient parental sternness, this rule can keep a boy from physically abusing her with his fists. Yet he still might taunt her cruelly, damage her toys, or fail to protect her from bullies. But the value “We take care of each other as a family,” once learned—and seen demonstrated by and between his loving parents—will guide the boy correctly through all future situations.

Similarly, it’s one thing to post warnings and impose harsh fines against littering. That might keep people from dropping candy wrappers and soda bottles on the ground—at least where someone else might see and call them out, especially with a police officer or park ranger nearby. Yet the littering rule does nothing to prevent vandalism or theft of public property. But the value “We take pride in maintaining public spaces,” once transmitted and accepted, will preserve the parks and plazas, and keep the lawns and roadsides clear of trash.

Rules can be gamed: People can think of a hundred exceptions and a thousand excuses. Orders can be ignored: Subordinates can insist they misunderstood the subject or the context, or claim they never heard them in the first place. But values, once accepted as a personal guide and interpreted into a belief system, work at all times and in all circumstances. A person has to wiggle pretty hard on a point of logic to subvert the dictates of conscience or to explain a failure to act in terms that sit well with his or her soul. Values become part of the person and guide behavior.

At least, that’s the way it works with good and even halfway-good people. However, for the scoffers and those who think their own will and desires are superior to any situation involving “other people,” such imparted and evoked values may never work as guides to belief and action. But then, such people don’t do too well with strict rules and direct orders, either.

Rules and orders will work well enough if you’re in a hurry or operating in shifting times or under dubious conditions. But for long-term effect, a leader, a government, or a society should work to instill values and make supporters and citizens self-compliant. That’s what civilizations do.

1. In the same way, life as a whole for most of humanity, and in the instance of a single person’s existence, can be both good and bad. Some have lives filled with poverty and physical misery, but they still experience moments of sweetness and love. Some have lives of ease and pleasure, but they still experience boredom and depression. It’s like the light and the dark. If there were never night or any shadows, how would you know the quality of light? And if it was always night, with never a gleam of daylight, how would you know the depth and texture of darkness? Pain and terror exist to remind us of the sweetness of peace and calm.

2. For more, see Writing a Good Commandment from June 4, 2011.

Sunday, July 23, 2017

Degrees of Freedom

The subject of freedom is much on my mind these days. As I’m now approaching late middle age—on the cusp of seventy years old—I realize that avenues of potential are continuously closing down for me.

Of all the things I might have become as a young man fifty years ago—doctor, lawyer, soldier, politician—none remains available today. Those are occupations for which you must train for long years, meet special physical requirements, or establish a steady track record of participation, and I no longer have the time or the stamina even to try. Of all the exotic places I might have gone—Machu Picchu, the top of Mount Everest, or even diving in the Caribbean—none remains within my physical energy to attempt. Given that I no longer do well on long airplane flights, with their cramped seating conditions and my big frame, I probably will never see Europe again, unless I’m willing to pay the treble fare to fly first class. And given the amount of political uncertainty and violence that seems to be endemic in the rest of the world, I probably will never get farther east than Greece or farther west than Japan in the travels of my remaining lifetime.

So freedom as a practical issue of choice and possibility, rather than an abstract matter of statute or moral law, is always part of the human condition. It may technically be true that every boy—and now every girl—born in the United States might one day grow up to be President. But that destiny will probably be decided sometime before he or she gets out of high school, based on whether that person has the inclination or the aptitude to put in the time and energy, enter the American cursus honorum,1 and make the sacrifices required. And then, by about the age of forty, he or she will know how high his or her personal career arc is likely to reach—and for a great many it will stop in some local or state office without ever attaining national prominence.

Freedom comes in many forms and at many levels, depending on personal and public constraints, as well as personal interests and desires.

At the most basic level are those freedoms assigned to bodily function: the freedom to decide when and what you will eat; when and where you sleep and for how long; when and how you use the bathroom; and trivial choices such as whether you want coffee, tea, or something stronger to drink. One would think that we are all perfectly free to make these choices, but not everyone is, and not all the time. Some jobs have assigned eating and sleeping times, limit the kinds of foods served or allowed in the cafeteria or mess hall, and limit or prescribe bathroom breaks. We accept these restrictions in favor of a greater good, such as the smooth functioning of the organization or maintaining good relations with our co-workers. Some people agree to give up these freedoms under special circumstances and for a limited time, such as a person joining the army and taking food and rest under a strict regime, and again the reason is for some greater good. Societies also place involuntary restrictions on these freedoms as a form of punishment, as anyone who has served time in prison can attest.

At the next level are freedoms associated with the details of daily living: freedom to decide where you will live and under what conditions; where you will travel and with whom; and how you will spend your time. For most of us, these freedoms are constrained only by our economic condition. I would like to spend my time reading or playing games, but in order to earn my daily bread and the mortgage money I must work at a job that is not always of my own choosing and not always easy and fun. I would like to commute to that job in a Ferrari, but that car is too expensive and the freeways are too crowded anyway; so I ride the bus or the subway with dozens or hundreds of strangers. I would like to live in a 5,000-square-foot house in a nice suburb, maybe with a pool and a patio, enjoying a ten-mile view to the mountains, but again that kind of living is beyond my means. Sometimes the state or local authority intrudes on these decisions, such as when downtown zoning doesn’t provide enough parking for even a small Fiat, let alone a Ferrari. Or that big house in the suburbs is precluded by limits on land use, lot size, or utility hookups, so that I am forced to live back in the city.

A special category of freedom is associated with decisions about lifestyle and a person’s level of health or dissipation: freedom to decide whether to eat wholesome foods or processed junk; how much exercise you will take versus how much time you spend in sedentary pursuits; which vices you will adopt and which you will engage your will power to renounce. Aside from people in prison or the military, we all think we are free to eat what we like and exercise as much or as little as we want. But employer-paid health insurance is beginning to provide monetary incentives—more likely disincentives—to promote healthy lifestyle choices. And certain vices such as liquor, cigarettes, and recreational drugs have been subject to heavy taxation if not outright prohibition for most of the twentieth century.

And finally, the ultimate level of freedom involves decisions and opportunities that affect a person’s lifelong contribution to society, the search for meaning in life, or the fulfillment of some personal destiny: freedom to acquire education, skills, and training; freedom to think for yourself and make decisions about your career and the ultimate reach of your ambitions; freedom to guide your children in paths you believe will give them a good life. More than access to money and avoidance of public censure and state controls, the limits on these freedoms are often set by your own imagination. If you don’t know what the choices are and can’t think up satisfactory goals for yourself, you are as bound as if you wore handcuffs. Yes, in totalitarian societies, the freedom to think and become what you want is often proscribed—ask someone trying to publish the truth as he sees it in the old Soviet Union or in the People’s Republic of China. And yes, being denied access to education and the broadening effects of wide reading and personal inquiry can limit the imagination. But in most cases, the lack of goals and motivation comes from a failure of the home environment and lack of access to good teachers, mentors, and wise relatives like a favorite aunt, uncle, or grandparent.

Since all of these levels of freedom—from bodily function to personal destiny—are subject to external limitations, the real question is how we want that limit decided. Do we take it upon ourselves to seek out and do what we want, live where we want, think what we want subject only to the natural limits of time, money, and our own skills, ambition, and energy level? Or are we willing to relinquish these choices to some other person or human agency, such as a prison guard, a platoon sergeant, a factory supervisor, the local zoning and school boards, or the representatives of one or another of the alphabet-soup agencies of the federal government?

Persons with an “institutional mentality,” like a life prisoner or a career soldier—or many of the common citizens of more regulated societies in the European Union and the Middle East—will opt for a guard, officer, or commissar to do their thinking and deciding for them. Most Americans, however—at least those of the older generation—tend to guard their freedoms jealously and would by choice live in a cold-water cabin on the edge of the woods rather than in a marble mansion under the supervision of a nursemaid, prefect, or magistrate.

1. The cursus honorum was the “course of offices”—political, military, and religious—that an ancient Roman of senatorial rank was expected to fulfill on the way to political prominence and power in the Republic and the early stages of the Empire. The American equivalent would be something like getting a law degree and becoming district attorney, or serving in locally elected positions like being on the school board or town council, then running for state assembly or senate, then for Congress or a governorship. Other paths may be possible, and we certainly saw them represented in the Republican presidential candidates for 2016. But for someone who is not already independently wealthy, this course is the only way to attract the attention of publicists, campaign managers, fund raisers, and the funding sources necessary to attain high elected office.

Sunday, July 16, 2017

Could DNA Evolve?

I recently posted about the nature of DNA,1 how it is found in every living thing on Earth, and how every living thing—no matter how far back you go—uses the same DNA-RNA-protein coding system. It’s not just similar in every microbe, plant, and animal. It’s the same system, down to the smallest details of chemistry, arrangement, and function.

To me, this is like discovering that every car on the road has the same motive power: a four-stroke, four-cylinder, inline, fuel-injected, internal-combustion engine, all with the same valve timing and compression ratio, and all burning the same grade of gasoline. With a little imagination, you might be able to conceive of an internal-combustion engine that burned kerosene or diesel fuel. You could invent a block with two, six, eight, or ten cylinders. You could design in your head a configuration with the cylinders arranged in either flat opposed pairs or a V shape. With more imagination, you could imagine the power cycle simplified to two strokes, so that exhaust and intake occurred on the same stroke, and every combustion stroke was followed by a compression stroke. You could think of ways to introduce the fuel into the cylinder other than by injecting it with a nozzle—say, by spritzing it into the air flow through the throttle body and calling it “carburetion.” You could even think of external combustion processes, like a steam engine. Or engines that had no cylinders and pistons at all, like a turbine.

All of these variations are possible to think about. But in the world I’m describing, they don’t exist. Every car on the road is a fuel-injected inline four. More than that, every pickup truck, semitrailer tractor, farm tractor, and motorcycle has this type of engine. So, too, does every weed whacker, lawn mower, water pump, and air compressor. Also every airplane, helicopter, and railroad locomotive. If it moves in this world, it is powered by an inline four-cylinder engine of the same exact specifications. Some engines would have larger or smaller cylinder volumes than others, but all have the same arrangement, operating principle, and fuel needs.

After thinking about this for a bit more, you might reach one of two conclusions. The first is that the fuel-injected inline four is just so perfect an engine that the designers, manufacturers, and users of cars, trucks, airplanes, and farm equipment simply had no reason to try anything different. The second thought is that maybe the engine wasn’t invented around here but brought into this world in its fully developed state from someplace else.

This is where I end up thinking about DNA. Either the DNA-RNA-protein coding system was just so robust and efficient that it outperformed and overcame all other possible chemical coding systems during Earth’s earliest history—so early that no trace of these competing systems remains on the planet. Not as some feeble microbe hiding in a deep cave somewhere. Not as a tiny mite making an inconspicuous living on DNA’s droppings in the sandy desert soil or the ooze at the bottom of the deep ocean. Either that, or the coding system for all the planet’s known life forms went through its development and evolutionary stages somewhere else in the universe and blew into Earth’s early atmosphere as a microbial spore, or arrived as skin cells shed inside a visiting astronaut’s lost glove, or was seeded here with a package launched by galactic gardeners from another star system.2

The obvious answer—once you accept either premise, ultimate efficiency or astronaut’s gift—is that the DNA system itself simply can’t evolve. Once the fragile molecular chain floating in the salt brine of a tide pool stops trying to arrange itself and starts calling for the protein and lipid sequences to build a membrane around the first single-celled, prokaryotic organism, the system is locked in place. That first cell, whether it leaned toward the plant-way or the animal-way, used the DNA coding system to build its internal structures and external membrane, to regulate its operations by a cascade of enzymes, to feed itself through the breakdown of carbon compounds and the buildup of the energy molecule adenosine triphosphate (ATP)—at its cell membrane, since a prokaryote has no mitochondria—and to conduct all the other processes to which the cell had become accustomed. Once the living organism was dependent on using this coding system to process the amino acids it needed to build proteins, and then to build those same proteins over and over again as the cell grew and expanded, its fate was sealed.

The DNA code—the sequence of its base pairs—might be changed, or mutated, either by chemical challenges or by radiation effects from the external environment. Change the letters of the code, and it will—sometimes, but not always, depending on the letter’s position in the three-base codon—call for a different amino acid and so create a different protein. The new protein might be slightly different in structure and function from what the code called for before, or it might be very different. That is how evolution works: accidents to the DNA sequence create changes in proteins that either hurt the cell inheriting the new code, or that have no present effect but allow this cell to prosper amongst its sisters when the environment changes—as the environment continually does—or, occasionally, that improve the cell’s functioning right away in the present environment.

The code itself is resilient, because many of the sixty-four possible combinations of four bases in a three-base reading frame call for the same amino acid, and the third base in the codon can usually be changed without effect—which is why it’s called the “wobble.” But also, most proteins are big enough and complex enough—with enough amino acids chained together—that changing out one or two amino acids in their makeup has little effect on structure or function. And then, most protein changes are neither beneficial nor lethal to the organism right away; instead, they hang around and make themselves felt when the environment changes, at which point they either benefit or kill off one set of genetic inheritances over a competing sister line with a different inheritance.
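The wobble effect described above can be sketched with a toy lookup. This is only a hand-picked subset of the standard genetic code—three amino acids out of twenty, twelve codons out of sixty-four—but for each of these families, any change to the third base of the codon leaves the amino acid, and so the protein recipe, unchanged.

```python
# A hand-picked slice of the standard genetic code (RNA bases U, C, A, G),
# chosen to show third-base "wobble": each amino acid below is specified
# entirely by the first two bases of its codon.
CODON_TABLE = {
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # glycine
    "CCU": "Pro", "CCC": "Pro", "CCA": "Pro", "CCG": "Pro",  # proline
    "GUU": "Val", "GUC": "Val", "GUA": "Val", "GUG": "Val",  # valine
}

def translate(codon):
    """Look up the amino acid a codon calls for."""
    return CODON_TABLE[codon]

def third_base_mutations(codon):
    """All codons differing from `codon` only in the third (wobble) position."""
    return [codon[:2] + base for base in "UCAG"]

# Mutating the wobble base of a glycine codon still yields glycine.
for mutant in third_base_mutations("GGU"):
    assert translate(mutant) == "Gly"
```

In the full code the degeneracy is less tidy—some amino acids have six codons, others only one or two—but the pattern is the same: many point mutations in the third position are "silent" and change nothing in the resulting protein.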

The whole system is slippery and wobbly in its effects, in the exact sequence of DNA and RNA bases, the choices among amino acids, and the production of proteins. But this is like saying that a flatbed printing press can produce many different documents, based on how the lines of monotype letters are arranged in its iron frame. To create all those different documents, however, the press always uses a predetermined alphabet of type blocks, sets them up in the same framework, inks them the same way every time, lays the paper on them in the same place, and applies the same amount of pressure with the platen. The coding changes all over the place, but the coding system remains the same.

If the DNA-RNA-protein system could evolve and change, that would create chaos within the cell—wouldn’t it? If a new fifth purine or pyrimidine base were added to the existing four, it would scramble the DNA sequence. First, because it would have no complementary base to pair with, as A always pairs with T, and C pairs with G. A fifth base—say, the purine xanthine (X)—would just sit there filling a hole, like the empty socket in a jaw that’s missing a tooth. Having nothing to pair with, the new base would scramble the code, much as the upper tooth over an empty socket has no way to provide bite pressure. Second, if somehow two bases could be added and paired up at the same time—matching that X with, say, the pyrimidine orotic acid (O)—their popping up together in the sequence would still scramble the code. Even if the new bases could be recognized and transcribed into messenger RNA, the existing ribosome in the cell body would have no way to translate either of them into one of the possible amino-acid choices for the next position in the developing protein strand. And if somehow the new X and O bases were added to the existing code and intended to call for some new amino acid—beyond the twenty that now make up all microbial, animal, plant, and human proteins—that would simply create another toothless gap, because the cell’s internal processes are not yet geared to manufacture, collect, or supply this new amino acid in any quantity.
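The "empty socket" problem can be illustrated with another toy sketch. Real base-pairing is chemistry, not a lookup table, but the logic is the same: A pairs with T, C pairs with G, and a hypothetical fifth base (here called "X") has no assigned partner, so building the complementary strand simply breaks at that position.

```python
# Watson-Crick pairing as a mapping: A with T, C with G.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(strand):
    """Return the complementary strand, base by base."""
    return "".join(COMPLEMENT[base] for base in strand)

print(complement_strand("ATCG"))   # the usual case: every base has a partner

try:
    complement_strand("ATXG")      # a hypothetical unpaired fifth base
except KeyError as err:
    print("no partner for base:", err)   # the "empty socket"
```

Adding "X" to the mapping without also adding its partner—and without new machinery downstream to transcribe and translate it—leaves exactly the toothless gap the paragraph above describes.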

And all this is just to consider the evolution of the DNA-RNA-protein system inside a single prokaryotic cell. Such cells reproduce by continually growing all their contents and expanding to the point of rupture, at which time they replicate their DNA strands, divide and haul off the resulting new chromosomes to opposite ends of the cell body, pinch off the cell membrane in the middle, split into two new cells, and trot on. If the existing parent cell had somehow survived the chaos of introducing at least two new bases, transcribing them successfully into messenger RNA, happening to have the right kinds of new amino acids on hand, and then using the new protein in a constructive manner … then no problem. The two daughter cells produced by the split would inherit this newly evolved DNA-RNA-protein coding system and continue to function with it.

But in the eukaryotic domain, whose cells contain their DNA in a separate nucleus, most reproduction is by sexual joining.3 Two organisms come together, usually by one contributing an egg and the other fertilizing it with sperm, in order to create a new and unique individual. That individual differs genetically from either parent, and so sexual reproduction increases the amount of genetic variation—and thus the possibility for new combinations of mutations, more changes, more adaptations—in the species. But sexual reproduction puts a powerful limit on the evolution of the DNA coding system. An individual who might somehow evolve in his or her germline a new set of X-O base pairs, a new corresponding messenger RNA sequence, a modified ribosome to translate the new code, and a new and unusual set of amino acids to be used by it … would then be a genetic freak. To reproduce and pass all this newness along to the next generation, she or he would have to meet up with a breeding partner who had similar equipment. Chromosomes in sexually reproducing species come in pairs, one from the mother aligning with one from the father. Unless the individual with the newly evolved DNA could meet someone with the same evolved system, the breeding line would die out. The evolved system would disappear in the first, nonexistent generation.

Or would it? If the evolved DNA was in a male, it would probably disappear, because the sperm provides nothing but raw coding to the next generation. But if the altered individual was female, and her egg contained the mechanisms for the novel transcription and translation—appropriate RNA, ribosome, and amino-acid processes—then the offspring might survive. It would make the usual proteins from the traditional DNA chromosome pairs supplied by both the mother and the father, and it would make new proteins with the X-O-contaminated chromosomes and adapted cellular machinery supplied by the mother. Over time, and with enough generations—probably passing down the female side at first, like the mitochondria in the mother’s egg, because of all that cellular machinery—the new DNA system might spread through the population of both females and males. In fact, it probably would spread if it conferred advantages of more flexibility, more adaptability, more robustness. Eventually, certain species that had an improved six-base DNA, perhaps in a larger, four-base reading frame, and calling on more than twenty amino acids to create novel proteins, would appear in generations that could be traced back to the evolutionary split. Eventually, the older style of DNA with just four bases in a three-base reading frame might disappear in all the different animals or plants that evolved from that revolutionary ancestor. As a result, we might see two separate populations differing in their fundamental DNA system.

Such a systemic evolution would not be easy. It might first appear as a byproduct: one gene on a fragmentary chromosome, off to one side in the cell body or in the nucleus, making its own special proteins, and not interfering with the regular business of the cell. It would have its own RNA. And the ribosome out in the cell body, being a highly adaptable structure, might quickly evolve to make use of these new messenger RNAs with their strange coding. The new system might start out sex-linked to the female line, as certain genes are now linked to the male line’s Y chromosome. If a six-base DNA—or any other systemic variant—had any greater adaptive power or offered more evolutionary advantage to a cell line, it might certainly develop out of the existing four-base system. And some of its daughter cells might not be so chaotically disrupted that the old system would out-compete them in every environment. A hybridized cell, using both DNA systems at first, but perhaps eventually singling up on the newer model, could survive somewhere, in some environment, someplace on Earth.

With a little imagination, it could happen. But it didn’t. We live in a world without two-stroke engines, without two- or six- or eight-cylinder engines, and with no trace of a steam engine or a carburetor in our developmental history. Everywhere we look it’s just fuel-injected, four-cylinder, inline engines and always has been. And I still wonder why.

1. See The God Molecule from May 28, 2017.

2. The third alternative is that the DNA-RNA-protein coding system was thought up and then cooked up by a genius god with a PhD in molecular biology. But as soon as you start allowing for the supernatural, then all sorts of “just-so” stories become possible and the whole world is simply a giant miracle.

3. Once you get beyond the single-celled eukaryote variants such as the algae, yeasts, and protozoa.

Sunday, July 9, 2017

Causes of Civil War

These days, I keep a clock inside my head, like one of those countdown-to-midnight clocks that once got published about certain predictable catastrophes, like the next nuclear war. Mine is weighing the chances of a second civil war in America. I wrote about this in a recent novel, Coming of Age, where—among many other story lines—the national debt makes this country vulnerable to foreign manipulation and initiates a split between the largely urbanized coastal states and the more rural inland states.

Most people consider even thinking about another civil war to be the sign of an unbalanced mental or emotional condition. For me, such a war is just another future hazard. Many countries have had civil wars when their political differences reached the irreconcilable stage. Most recently, these have been countries under attack by Marxist revolutionaries and leftist rebels: Korea, Vietnam, Cuba, Guatemala, Nicaragua, Cambodia, Venezuela, Bolivia, Colombia, and myriad African hotspots like Nigeria, the Congo, and South Sudan. In the Middle East, the wars have more recently been between the secular governments installed after the two world wars and the religious fundamentalists, but the contention is still between those who want an open society based on personal freedom and those who want it closed and based on rigid codes of moral or political conduct.

And even long-established countries that today we think of as enlightened and stable had their periods of civil war. England had its own war against the monarchy in the 17th century. France had its revolution against the aristocracy in the 18th century. And America—not counting the colonial revolt against English rule—had her crisis and convulsion in the middle of the 19th century. Russia fell apart under pressure from leftist revolutionaries and monarchical incompetence in the middle of World War I, went through a period of civil war, and emerged as a Communist regime. Germany fell apart in the 1920s as the result of losing that world war, went through a period of hyperinflation and street thuggery, and emerged as a National-Socialist dictatorship. China—with help from a Japanese invasion before and during World War II—fell apart into feuding, warlord-dominated enclaves and emerged as the People’s Republic in 1949.

You might think armed conflict over political or religious issues can’t happen in this country again, because we have a … a what? A document called the Constitution that has endured for 227 years now and is the model for good government around the world? A huge military armed with nuclear weapons that is, by design and by decree, politically neutral and subservient to civil authority? A built-in mechanism for regime change enshrined in popular elections held every two and four years? All of this makes us special and in some cases unique in the world. It does not, however, render us invulnerable to irreconcilable differences that cannot be healed by the ballot box and will not submit to long-standing social and military traditions.

Documents, traditions, and laws are effective only so long as the majority of people hold them to be inviolable and put them above personal advantage and political opinion. History is full of carved idols, tablets of stone and bronze, and inherited traditions that became honored only by rote and with the lips but were ignored in everyday practice and with the heart. Ancient Rome went from being a democratic republic to an imperial dictatorship in the span of two generations by just such a hollowing out of her traditions. Rome’s period of civil war was a contest between powerful politicians who fielded essentially their own private armies. All through it and the dictatorship that followed, the country still maintained the form of electing its politicians and military leaders, but the process was controlled and the outcome inevitable. Even the Soviet Union had its popular elections, but with the sole candidate nominated by the local soviets with guidance from the Communist Party. Even the Islamic Republic of Iran votes—but only for candidates already approved by the theocracy.

In the vast majority of the more recent civil wars, the dispute was not about some single social or economic issue—like slavery in the American Civil War, or economic collapse in my Coming of Age books—but about the ongoing nature of society itself, the principles under which people should be governed, and—in the case of the revolutionary insurgencies—who should exercise those principles.1 Even the religiously tinged uprisings in the Middle East—and now in parts of Europe and Asia—are not about doctrinal issues and matters of faith so much as about imposing Sharia law and Islamic culture on countries that have recently adopted—or, in Europe, have long practiced—Western-style, secular democratic government, free market economics, and liberal social policies.

In some cases, the war—that is, actual military hostilities—comes only after some defining action and not as a lead-up to it. In the American Civil War and in the wars between North and South Korea or North and South Vietnam, the separation of one part of the country had already occurred, whether by secession or through international agreement. In the Russian Revolution, the Bolsheviks had already taken power in the capital during the October Revolution and forced the royalists and the remaining moderates to retreat into the countryside or to emigrate. Sometimes, however, the war is the deciding factor in regime change, as in the case of the civil wars in Spain in the 1930s and Cambodia in the 1970s.

Which way will the United States go in the early 21st century—if we must go to war at all?

Although my novel Coming of Age portrayed a split between largely contiguous sections of the country—the urban, progressive coasts versus the rural, traditionalist interior—I don’t think that model holds in today’s political situation. We saw from the breakdown of voting patterns in the 2016 national election, by county rather than by state, that the sentiments between left and right are far more distributed. Most of the dense urban counties went Democratic, while the less populated rural counties—but holding an impressive amount of geographic territory—went Republican. California, for example, is staunchly progressive in the urban centers of San Francisco, Los Angeles, and San Diego, where most of the population lives, but also strongly traditionalist in its rural counties, which encompass most of the land area. If California ever decided to secede from the Union—as some are seriously promoting—either as part of a new federation with other progressive-dominated states like Oregon and Washington, or as its own country, it would quickly lose the Central Valley and the Foothills through their own act of secession. Indeed, the far northern counties of California and the southeastern counties in Oregon are already agitating—and have been doing so since 1941—to form a new state called “Jefferson.”

In the hardening controversy between progressives and conservatives—where reasonable discussion and polite disagreement have already given way to marches, occasional riots, and now to political shootings—the solution won’t be anything as simple as a resolution to take one part of the country out of the Union and form a new country with either free-market capitalism or bureaucratic socialism as its economic model. But in any new secessionist country, under either model, the government and its politicians would probably still consider themselves to be a democracy, and they might adopt some form of the U.S. Constitution as their founding document. However, the rules and practices of that democracy would likely change from what we have now. A progressive state would probably adopt a larger, more intrusive federal bureaucracy, give less authority to a smaller popular assembly, and seek more open and contextual adherence to that new constitution—i.e., treating it as a “living document.” A more conservative state would intentionally create a smaller standing government, give more rulemaking power to its congress, and adopt a more strictly “originalist” interpretation of its constitution.

But the geographic lines and the regional sentiment to support such a nicely defined state-by-state or regional split simply don’t exist. No, I believe we have progressives and conservatives living too close together, as in California. Or in Upstate New York versus New York City. Or in any other urban-rural split you could name. We are more like the intermixing of Hindu and Muslim in the British Raj before its partition into the states of India and Pakistan. And that means the next American civil war—if it ever comes, if some reconciliation doesn’t take place soon—will be more like Spain’s or Cambodia’s. More neighbor against neighbor, cities versus the suburbs and rural counties, more like guerrilla and urban warfare.

Whether the U.S. military could keep out of such a conflict is an open question. All of our officers have taken oaths to “support and defend the Constitution of the United States against all enemies, foreign and domestic.” Most soldiers and serving professionals—and since the end of the Selective Service draft, we have built a professional military based on self-selected, volunteer service—are traditionalists who see themselves as upholding the values of the country as a whole rather than the privileges of a politicized bureaucracy in any current government. If the party in power is openly contemptuous of America, its history, its traditions, and Western civilization in general, that is going to be a hard oath to keep.

Of course, a country engaged in urban warfare could not survive long. In short order—no more than a couple of years, if we go by history elsewhere—one side would dominate and the other give up. Otherwise, we would eventually see a flow of forces that moves people of similar loyalties and opinions into geographical refuges and strongholds. Such regions might eventually become the basis for new countries that coexist side by side, like North and South Korea. But the bet is still that one side will quickly dominate, as in Franco’s Spain and Mao’s China. And the risk in today’s world is that, while civil chaos exists, foreign intervention and opportunism might take the country down. With intercontinental ballistic missiles and other weapons of force projection, the two oceans guarding our borders, and our friendly neighbors to the north and south, will no longer protect us.

I hope we can avoid this. Such a war would mean large numbers of military and civilian dead, ten times as many injured, years of civil disruption, billions of dollars in destroyed infrastructure and property, trillions in lost personal and public wealth and lost productivity. War is the ultimate leveler. But it seems to be the only way two groups of human beings can settle their long-held, irreconcilable differences without possibility of deception. Oaths can be renounced. Treaties can be broken. Laws can be ignored or reinterpreted. Extralegal actors—rioters and assassins, brigands and pirates—can be encouraged. But once you have beaten an enemy to the point at which he cannot lift his arms to hold a weapon, once you have decimated his population, razed his cities, and salted his lands—or once you are put into this form of submission yourself—then you can pretty much call the issue settled and start working on the peace terms.

I don’t know what the future will bring—and I say that as a science-fiction writer whose business is to foresee and interpret the future. But I know that somewhere a clock is ticking.

1. When Lenin came back to Russia, via a sealed train through Germany, he was aghast to find his old revolutionary cadres shouting, “All power to the soviets!” These were the workers’ and soldiers’ councils—the meaning of the word “soviet”—that had sprung up in Moscow and Saint Petersburg during the revolution. “Do not cry ‘all power to the soviets,’ ” he chided them, “until you have control of the soviets.”

Sunday, July 2, 2017

The Science-Fiction Mindset

One of the beta readers for my latest novel, The House at the Crossroads, commented that none of the characters ever seems to get hurt or angry. I thought about this during the editing phase but could see no reason to change anything. The characters’ responses to their life situations, to their frustrations, and even to outright enemy action all seemed appropriate to me. Then I realized that this commenter was not a regular reader of science fiction. And that, for all its historical trappings, is the essence of this novel.

Maybe it’s just me and the way I react to things. When someone challenges me, says something hurtful, or tricks or betrays me—doesn’t happen often, but sometimes—I probably do feel hurt and anger. But that’s a reaction occurring as a residual effect, usually in thinking about the situation after the act. In the moment, my conscious mind is busy trying to figure out the basis of the challenge, the reason for the other person’s scorn or belittlement, or a tactical response to the trick or betrayal. In other words, my response is to act first and moan about it later.

Maybe it’s just a lack of personal introspection. The way I was brought up, personal feelings were not all that important. My parents were practical, technically minded people.1 Like most of their generation, they had gone through the Depression and World War II, when making do with what you had and then putting aside your personal preferences in order to do your duty and get the job done was a national characteristic. Like most of my own generation, I regularly heard warnings that began with “If you think you’re hurting now …” and “If that’s the worst thing you ever have to do …”

These are also characteristics I admire and think should be emulated in fiction and in real life: emotional resilience, mental resourcefulness, physical bravery, dependability, and responsibility. I admire people who can face up to their situation, however painful, and work to rectify it—rather than brooding on their hurts and the wrongs done to them. And I believe this is a common characteristic of the fictional people portrayed in most science fiction—at least in the books produced in the decades immediately following the last world war. There the characters don’t waste time feeling hurt, and for them anger is a spur to action. Don’t get mad, get moving—and then get even.

In a crisis, everyone confronts a choice. You can collapse inward or focus outward. You can curl up inside your shell, examine your feelings, and wait for someone else—or perhaps time itself—to make things better. Or you can take a stand, strike out, hit back, and keep fighting, dodging, and weaving until either the situation changes or you are dead. Perhaps, in the bigger picture, your stand and your moving fist will change nothing. Perhaps the initial blow was too great, the fire too hot, the sea too cold, and your hope of survival or the probability of your receiving reinforcement or rescue too small. But the choice is still there. You can die, face God, or enter Valhalla either curled up in a ball and whimpering or standing on your feet and spitting challenges.

This is not to say you—and the characters in science fiction from the 1950s through about the ’80s—don’t have feelings. You do register hurt and anger. But they are secondary and after the fact. The first order of business is to address the problem, get moving, fight back.

Maybe this observed tendency for the characters in my stories not to react with hurt and anger is also an artifact of the way I write. My style, developed over a number of years—and now seventeen completed novels—is what some have called “free indirect discourse.” It’s nothing that I was ever taught, except by observation and emulation of the books I have loved. In this style, the text is in the third person but the point of view is always first person. So I may be writing “He said …” “She thought …” “He observed …” but, change the grammar around, and the story would flow equally well with “I said … thought … observed …” While the grammar may be pretending to observe the character from the outside, as with the traditional “omniscient narrator” of earlier fiction, the sense of the language is observing the world through the character’s eyes.

As a writer, this is a strange mask to wear. It puts me—and, vicariously, the reader—inside the character’s head at all times. It forces certain limitations, and so a structure, on the narrative. Unlike the omniscient narrator, a passage told in this style can’t sample one character’s thoughts and feelings, perceptions and observations in one sentence or paragraph, then turn around and delve into another character’s head in the next paragraph. If I place a character on one side of a closed door, I can only speculate from the knowledge available to him what might be happening on the other side. If the character is engaged in conversation, she can only speculate about the other person’s motives, intentions, or exact feelings. The world is one-sided for the duration of the scene or chapter in which the character is engaged. This means that, if I want to show what’s on the other side of the door or sample the thoughts on the other side of the conversation, I must start a new scene, enter the head of a new character who has access to these events and thoughts, and recreate the story from that second point of view.

Why adopt this technical, clunky style? First, it puts the reader into the center of the action. Rather than observing the story as a theater audience might, watching the characters on a stage, the reader joins me in putting on the mask and seeing the world through the eyes—and the history, perceptions, prejudices, and desires—of the focus character. This is like observing the action through the camera lens in modern cinema technique. And it’s a way to color the world of the story with the character’s sense of self and particular knowledge. That can be very powerful in storytelling.

Second, indirect discourse lets the writer set up situations where one character may be lying, misunderstanding or presuming certain facts, or acting from what seem to him like perfectly reasonable motives—all of which may differ from the perceived reality of the other viewpoint characters in the story. This establishes the possibility for the plot to go in two directions, to cycle back on itself, to force the characters into sudden and perhaps unpleasant realizations—and only the writer and the reader are party to all points of view and so to a greater understanding than any one character. Shakespeare sometimes does this with whispered asides from his characters, or with dialogue conducted in secret, in a scene set apart from the other characters. Indirect discourse is a story told in personal asides. And the possibilities for revelation and resolution are even more powerful.2

In this style of writing, it is entirely possible—one might think it is almost required—to show a character’s reactions of hurt and anger to a distressing situation. Simply write “He felt angry …” or “She was hurt …” And where the direct cause of the feeling might not be obvious, the writer in indirect discourse can use those lines and explain the feeling. But I believe a higher level of storytelling involves trusting that the reader is wearing the mask fully and completely. Then the reader will understand and feel the shock, the anger, the betrayal, and the pain without the writer having to belabor the point with internal stage directions. The unspoken feelings hang in the air, like a sudden realization, revealed only by the character’s subsequent actions: getting moving, solving the problem, taking revenge.

This is a subtle way of writing, to live the story through the eyes and perceptions of one character at a time. But after a while—and with some practice at it—writing in character becomes second nature. And then the old style of the omniscient narrator, pointing out this and explaining that, dancing indiscriminately through everyone’s head at third hand, and making everyone’s feelings visibly manifest, as if they were painted on the top of their skulls and the surface of their skins … that’s what feels clunky, inept, and foolish.

1. See Son of a Mechanical Engineer from March 31, 2013, and Son of a Landscape Architect from April 7, 2013.

2. Of course, it is still possible to surprise the reader along with the other characters: the author simply does not show action through anyone who has a full perception of what is about to happen.

Sunday, June 25, 2017

About Nothing

They say it’s impossible for the human mind to think about nothing at all, but apparently we think about it a lot.1 For example, the Zen kōan, with its impossible question or illogical juxtaposition, is designed to disrupt the continuous buzzing of the active mind and send the practitioner into a relaxed, passive, receptive state. This is why meditation is so refreshing: it is like the darkness of deep sleep before the nightly pageantry of dreamtime begins.

But you don’t have to be a Zen master to contemplate emptiness. Quantum physicists attempt to understand the void of creation all the time. After all, empty space makes up the largest fraction of the universe. For example, it’s a common metaphor that, if the nucleus of an atom—any atom from hydrogen to plutonium—were blown up to the size of a baseball, then the electrons in their various energy shells surrounding it would be like flies buzzing around inside the space of a cathedral. If you could stop their motion, then you could sweep the dead electrons and the nucleus itself up with a brush and dustpan, leaving a cathedral-sized nothing behind. And if a molecule is a group of atoms linked by sharing their electrons, then molecules are simply a concatenation of cathedral-sized empty spaces. And even in the most densely packed material, like that brick of plutonium, the space between the molecules would be even emptier.
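The emptiness in that metaphor is easy to check with a back-of-the-envelope calculation. Here is a minimal sketch using textbook values for hydrogen; the radii are my own assumptions for illustration, not figures from the text:

```python
# Rough scale of atomic emptiness: compare the hydrogen electron shell
# (Bohr radius) with the proton's charge radius, then blow the nucleus
# up to baseball size and see how far out the electrons would sit.
bohr_radius_m = 5.29e-11      # radius of hydrogen's electron shell
proton_radius_m = 0.84e-15    # radius of the hydrogen nucleus
baseball_radius_m = 0.037     # a regulation baseball, roughly

ratio = bohr_radius_m / proton_radius_m    # atom ~60,000x wider than nucleus
scaled_atom_m = baseball_radius_m * ratio  # electron shell at baseball scale

print(f"atom/nucleus ratio: {ratio:,.0f}")
print(f"scaled shell radius: {scaled_atom_m:,.0f} m")
```

At that scale the electron shell sits a couple of kilometers from the baseball: cathedral-sized and then some, which only strengthens the point about how empty matter really is.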

Outside the densely packed substance of the Earth and its atmosphere, in interplanetary space, the most abundant form of matter is the particles of the solar wind. Depending on the state of the Sun and its recurring coronal mass ejections, these particles occur at a density of between four and ten per cubic centimeter.2 And most of them are not intact atoms from the Sun’s store of hydrogen and helium but instead their ions—that is, uncoupled atomic fragments like protons and electrons. Thin soup indeed! Interstellar space, beyond the boundary of the Sun’s heliosphere, is even emptier.3

And yet, in the mind of the physicist, the empty space between atoms and particles, even the space between the planets and between the stars, is laced with the fields that are associated with dynamic particles. These fields include the electromagnetic field accompanying the photons4 flying outward from the sun and from any other release of energy, or the Higgs field accompanying the long-sought Higgs boson5 that enables all the other particles in the grand vision of quantum mechanics to have mass. So “empty” space is full of—well, let’s call a field the “potential” for things to happen if the right amounts of matter and energy are present. So empty space has structure—or at least the possibility of structure—based on the presence and number of those nano-sized baseballs, dead flies, and other bits of matter or energy, on how much mass each one contains, and on how fast it’s moving.

Science fiction writers have taken this idea of the structure of empty space to absurd but imaginatively useful limits. For example, the empty space of the physical universe is envisioned as folded and crumpled in dimensions more numerous than the three—x, y, and z—coordinates we use for defining the space in which we normally move around. The idea goes that, if you could focus enough energy at a particular point in normal space, you could break through that folded structure and instantaneously arrive at another place that might be light-years away in your frame of reference but just around the corner in that multidimensional crumple.

Another useful fiction is that, with the application of enough energy, the structure of space itself can be pulled and pushed around like a lump of taffy. This gives rise to the Star Trek warp drive. Using this hypothetical propulsion system, a starship can move faster than light while not exceeding the speed of light, c, the universal speed limit, because its “warp field” collapses the space in front of the ship and expands the space behind it. This is rather like being able to walk along at a hundred miles an hour, rather than the usual human pace of four miles per hour, because the sidewalk bunches up—in this example, so that each stride covers twenty-five times its normal length—before your front foot hits the ground, and then it smooths out as you lift your back foot for the next step. You walk in a bubble of collapsing and expanding space and never exceed your normal walking pace. What the warp field does to the ship itself, the passengers, and the empty spaces inside their molecules and atoms is another question.
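The arithmetic behind the walking analogy is just a distance multiplier. A toy sketch, using the essay's own numbers (a 4 mph pace and a 25-fold contraction):

```python
# If space contracts so that each stride covers k times its normal length,
# apparent speed scales by k while the walker's own pace never changes.
def apparent_speed(walking_mph: float, contraction_factor: float) -> float:
    return walking_mph * contraction_factor

# A 4 mph walker inside a 25x contraction appears to move at 100 mph.
print(apparent_speed(4.0, 25.0))  # 100.0
```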

Some theoretical physicists, taking their ideas from the pixelation of a digital image or an LED television screen, propose that empty space is actually just a field of unfilled holes waiting to be occupied by matter and energy. In this view, space is like a giant honeycomb and, rather than moving through it haphazardly, particles and objects simply transition from one invisible cell to the next, blinking into and out of existence in an orderly fashion. For me, that’s a great mind game, but it doesn’t tell you any more about the rules behind matter and energy than simply imagining particles and their associated waves flying through empty space.

Finally, because the movements of stars in the spiral galaxies that we can observe do not seem to match the masses and corresponding gravitational fields of those galaxies,6 physicists believe the universe has an unseen component called “dark matter.” This is not only matter we cannot see, but also matter we cannot detect with any of our instruments because it doesn’t interact with the atoms, energies, and fields—except for gravity—that compose the universe we live in. Based on the stellar movements we can observe,7 physicists think that “normal” or “baryonic” matter—that is, particles with known masses like protons and neutrons, the stuff we’re made of—composes only about five percent of the universe, while this dark matter makes up approximately twenty-seven percent.

It gets worse. The galaxies we can see are moving away from each other—and not just moving but accelerating, moving faster and faster—rather than collapsing inward under the gravity of all the matter we can see and detect, plus any contribution from the mass of all that dark matter. Since the outward fling imparted by the universe’s supposed origin in the Big Bang would be at a steady velocity—or even gradually decelerating, as gravity began to take over—something else must be pushing the galaxies apart. Again, whatever this “something” might be is invisible to our senses and undetectable by our instruments, and so it is called “dark energy.” Based on the observed acceleration of the galaxies, this energy is thought to constitute approximately sixty-eight percent of the matter and energy in the visible universe.
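As a sanity check, the three fractions quoted in these paragraphs do account for the whole budget; the rounded percentages below are the commonly cited survey values:

```python
# The cosmic energy budget as rounded percentages: ordinary (baryonic)
# matter, dark matter, and dark energy together should cover everything.
budget = {"baryonic matter": 5, "dark matter": 27, "dark energy": 68}
print(sum(budget.values()))  # 100
```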

And we haven’t a clue about the nature of either dark matter or dark energy. Physicists attribute the former to objects called WIMPs—weakly interacting massive particles—and MACHOs—massive astrophysical compact halo objects. These are clever names that cloak a bit of an idea but essentially translate as “I don’t know.” And dark energy is sometimes attributed to “vacuum energy,” which would give some structure or property to the empty space between those atomic baseballs and dead flies. Some theories propose that this energy comes from virtual pairs of particles—one of matter, the other antimatter—that randomly pop into existence in empty space and immediately annihilate each other without leaving behind any visible or audible “pop.” So the whole action is invisible to us. The amount of vacuum energy or the number of virtual-pair annihilations can be adjusted to account for the universe’s dark energy requirement. But hey, when you’re summoning pixies or counting angels dancing on pinheads, any number will suffice.8

So, while we can debate whether a glass is half-full or half-empty, we can also fill up that empty place with all sorts of imaginative particles, fields, and structures. For some of us, all this “nothing” seems to be our favorite subject.

1. You knew this one was going to be weird, right?

2. When I write “cubic centimeter,” think of a sugar cube—back in the days when sugar came in little cubes in a box that you poured into a bowl, instead of measured packets of white powder that is usually not real sugar.

3. What a concept is “emptier”! More empty than empty. Perhaps the construction should be “less filled up”—until we get to the something that is really, totally nothing.

4. I don’t count photons among the particles in the solar wind because the photon only has apparent mass—and so physical existence—because it’s traveling at the speed of light. If you stop it in its tracks, it transfers that energy into something else and simply disappears. Physics is complicated stuff.

5. See “What exactly is the Higgs boson? Have physicists proved that it really exists?” from Scientific American.

6. From the vantage point of Earth, all we can see are the stars in other galaxies. We know that they must also contain an amount of nonluminous matter like planets, asteroids, comets, and loose dust and gases. But since those quantities in our own local neighborhood are such a tiny fraction of the mass of the Sun itself, we discount them in computing the mass of any galaxy.

7. Based on the masses we can see, we would expect the stars closer to the center of the galaxy to move faster than those out on the rim, like wood chips circling inside a tornado or whirlpool. Instead, the stars appear to move in a relatively fixed pattern, as if they were painted on a spinning disk. To achieve this effect, you would need more mass in the system than you can account for by the stars we can see.

8. See also Three Things We Don’t Know About Physics (I) from December 30, 2012, and (II) from January 6, 2013.

Sunday, June 18, 2017

Iambic Life and Trochaic Life

Poetry in the English language seems to settle—when it settles down at all, given the modern distaste for rhyme and meter—into a series of mostly two-beat measures, like a continuous handclap: dee-DAH, dee-DAH. Or sometimes DAH-dee, DAH-dee. Kind of like a heartbeat: lub-DUB, lub-DUB.1

Compare the stressed and unstressed syllables in two pieces of poetry. The first is familiar from William Shakespeare’s Hamlet, which, like all his plays, is written in iambic pentameter:

To be, or not to be? That is the question—
Whether ’tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles,
And, by opposing, end them? To die, to sleep—
No more—and by a sleep to say we end
The heartache and the thousand natural shocks
That flesh is heir to—’tis a consummation
Devoutly to be wished! To die, to sleep.
To sleep, perchance to dream—ay, there’s the rub,
For in that sleep of death what dreams may come
When we have shuffled off this mortal coil,
Must give us pause. There’s the respect
That makes calamity of so long life.

Five measures to the line, and the second syllable in each measure stressed.2

Now read a piece from Rudyard Kipling’s The Explorer, written in trochaic octameter:

“There’s no sense in going further—it’s the edge of cultivation,”
So they said, and I believed it—broke my land and sowed my crop—
Built my barns and strung my fences in the little border station
Tucked away below the foothills where the trails run out and stop:
Till a voice, as bad as Conscience, rang interminable changes
On one everlasting Whisper day and night repeated—so:
“Something hidden. Go and find it. Go and look behind the Ranges—
“Something lost behind the Ranges. Lost and waiting for you. Go!”

Eight measures to the line, and the first syllable stressed in each measure.

The words are so chosen and placed, as if naturally occurring, that the lines can only be read in one way. Try reading them with the stresses reversed, and your tongue gets tangled up.

In the Shakespeare, you have to place the stress and the importance on the second syllable:

To BE or NOT to BE that IS the QUES-tion

For IN that SLEEP of DEATH what DREAMS may COME

In fact, you could drop out the unstressed words and you would still have the sense of the verse surviving in telegraphic form, almost like a text message.3

In the Kipling, the words and structure force you to pay attention to the first syllable:

BUILT my BARNS and STRUNG my FENC-es IN the LIT-tle BORD-er STA-tion

SOME-thing HID-den. GO and FIND it. GO and LOOK be-HIND the RANG-es—

Here again, the stressed words and syllables carry the sense of the poem. And the stress itself conveys the urgency of the whisper: “Go and find. Go and look.”
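The alternation marked with capitals above is mechanical enough to sketch in a few lines of code. A toy scansion, where the syllable splits are my own for illustration:

```python
# Apply an alternating stress pattern to a list of syllables, uppercasing
# the stressed ones. An iamb stresses the second syllable of each foot;
# a trochee stresses the first.
def scan(syllables, foot="iamb"):
    stressed = 1 if foot == "iamb" else 0
    return " ".join(
        s.upper() if i % 2 == stressed else s.lower()
        for i, s in enumerate(syllables)
    )

hamlet = ["to", "be", "or", "not", "to", "be", "that", "is", "the", "ques", "tion"]
print(scan(hamlet, "iamb"))
# to BE or NOT to BE that IS the QUES tion

kipling = ["some", "thing", "hid", "den", "go", "and", "find", "it"]
print(scan(kipling, "trochee"))
# SOME thing HID den GO and FIND it
```

Swap the foot and the tongue-tangling described above shows up immediately: the stresses land on all the wrong words.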

It makes me think that these two opposite forms of reading—stress first versus stress second—almost define two separate approaches to life.

In the Kipling style, life is full of trochees, with that impetuous initial stress that leaves the second almost unvoiced. The tone is imperative, commanding, insistent, thrusting, and sure of itself. It is the voice of a British serving officer. It is the voice that drives men into battle or sends them overseas to seek their fortunes.

In the Shakespeare style, life is made up of iambs, with that hesitant first syllable and the stress on the second firming up the sense of the matter. The tone is reflective, contemplative, associative, conjoined with lots of “ands,” “fors,” and “ifs,” and yet ultimately resolute. It is the voice of a mature person weighing consequences—and not just in young Prince Hamlet considering suicide but in all of Shakespeare’s plays. It is the voice that invites us inside the character’s thinking.

When I think back on various people I have known, both in life and in literature—for yes, we readers have invisible friends—I believe many would line up under one banner or the other, the iambic types and the trochaic types.

The trochees are direct and obvious in their life and attitudes: slam-dunk, there-you-are, and sometimes in-your-face characters. For them, life is simple and unquestioned. Hit the ground running. Take the shot. Make your move. Accept the facts as they are presented. This might mean they sometimes jump to conclusions and precipitate hostilities that might better be avoided. But so be it. They also tend to win gun battles and, through their decisiveness and audacity, get the biggest piece of cake.

The iambs are more subtle and reasonable in their approaches: on-second-thought, but-what-about?, and sometimes oh-let’s-not! characters. For them, life is complex and full of questions. Pick and choose. Consider all the angles. Try to understand. Examine the facts before accepting them. This might mean they sometimes miss out on the best items in a holiday sale and fail to stand up to bullies. But they also win chess games by seeing three or four moves ahead and, through their thoughtful and sensitive natures, savor the piece of cake they do finally get.

Which personality is better? That depends on the circumstances. A trochee makes a good soldier and a competent administrator of complex systems that resolve into obvious patterns, like running a railroad or an electric-power grid. These are activities where the hesitations and second thoughts of an iamb can cause no end of trouble. But you don’t want a trochee for a military strategist or judge in a court of law. Those are activities where critical examination, questions, and playing three or four moves out are more reliable. Which makes the better and more lasting friend? That depends on whether your taste runs to playing football, with its rough-and-tumble blocking and tackling, or fencing, with its subtle weave of parries and ripostes while respecting an opponent’s personal space. One kind is good at playing poker; the other tends to play bridge.

Do these opposites attract? In this case, I think not. The tendency for trochees to pounce and for iambs to react would lead the pair to get on each other’s nerves. The iamb would end up nursing hurts that the trochee might never perceive. Or the iamb would get back at the trochee in ways the latter would never see coming.

Are men trochaic and women iambic? Only in your dreams. I know women who are deadly quick and not at all subtle—and men who need to walk three times around the house before opening a drawer. These are not masculine and feminine characteristics played against type. They are basic approaches to life belonging to the species H. sapiens without gender distinction.

In The Iliad, Achilles and Agamemnon are blunt trochees, while Hector and Odysseus are subtle iambs. Anna Karenina and her impetuous cavalry officer, Count Vronsky, are a pair of trochees, while Stepan Oblonsky and his wife Dolly are, for all their frivolousness, more iambic. Ellen Ripley, in the Alien series, is an iamb despite her tough-gal heroism, because her basic attitude is stop-wait-and-look, and she sees right through Lieutenant Gorman or the Company’s devious Carter Burke. In the Dune series, the Fremen, despite their reputation as fierce fighters, “were supreme in that quality the ancients called ‘spannungsbogen’—which is the self-imposed delay between desire for a thing and the act of reaching out to grasp that thing.” That is an iambic trait: wait and see. Americans are generally considered to be trochaic, while Europeans, the Chinese, and Japanese are thought to be more iambic.

Of course, human beings in specific cases, taken one by one, are far too complex to exist under such a crude dichotomy of characteristics. That is why most of my examples above come from literature, where the author emphasizes one approach, one mindset or trait, to prove a point. And yet, in real life, some people still consistently hit that first syllable hard, while others pause and reflect on that second syllable. Dah-dee, or dee-dah, the beat of life goes on.

1. This may have something to do with the fact that English, as an amalgam language, drew on Celtic, Norse, and Germanic roots that were formalized and spread by bards and poets reciting their verses in the lord’s banquet hall, rather than by written records.

2. There are already exceptions to this formula, of course. For example, the first four lines demand that the final words—“question,” “suffer,” “fortune,” “troubles”—be partially swallowed on the second syllable in order to maintain the beat. Well, nobody’s perfect—and a perfectly restrictive meter would eventually become boring, like riding a rocking horse.

3. And now that I think of it, the actors in a noisy Elizabethan theater—where the patrons and groundlings are calling to one another and chatting among themselves—might have to shout their lines. Only the stressed words would cut through the noise, and they would have to carry the sense of the play.