Sunday, June 25, 2017

About Nothing

They say it’s impossible for the human mind to think about nothing at all, but apparently we think about it a lot.1 For example, the Zen kōan, with its impossible question or illogical juxtaposition, is designed to disrupt the continuous buzzing of the active mind and send the practitioner into a relaxed, passive, receptive state. This is why meditation is so refreshing: it is like the darkness of deep sleep before the nightly pageantry of dreamtime begins.

But you don’t have to be a Zen master to contemplate emptiness. Quantum physicists attempt to understand the void of creation all the time. After all, empty space makes up the largest fraction of the universe. For example, it’s a common metaphor that, if the nucleus of an atom—any atom from hydrogen to plutonium—were blown up to the size of a baseball, then the electrons in their various energy shells surrounding it would be like flies buzzing around inside the space of a cathedral. If you could stop their motion, then you could sweep the dead electrons and the nucleus itself up with a brush and dustpan, leaving a cathedral-sized nothing behind. And if a molecule is a group of atoms linked by sharing their electrons, then molecules are simply a concatenation of cathedral-sized empty spaces. And even in the most densely packed material, like that brick of plutonium, the space between the molecules would be even emptier.

Outside the densely packed substance of the Earth and its atmosphere, in interplanetary space, the most prolific form of matter is particles of the solar wind. Depending on the state of the Sun and its recurring coronal mass ejections, these particles occur at a density of between four and ten per cubic centimeter.2 And most of them are not intact atoms from the Sun’s store of hydrogen and helium but instead their ions—that is, uncoupled atomic fragments like protons and electrons. Thin soup indeed! Interstellar space, beyond the boundary of the Sun’s heliosphere, is even emptier.3
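For scale, that quoted density can be set against ordinary sea-level air. A quick back-of-the-envelope sketch; the 4–10 particles per cubic centimeter comes from the paragraph above, while the air figure is an approximate outside value I am supplying, not something from this essay:

```python
# Rough comparison of solar-wind density to sea-level air.
solar_wind_per_cm3 = 7                        # midpoint of the quoted 4-10 range
solar_wind_per_m3 = solar_wind_per_cm3 * 1e6  # 1 cubic meter = 1e6 cubic centimeters
air_molecules_per_m3 = 2.5e25                 # sea-level air, approximate outside figure

ratio = air_molecules_per_m3 / solar_wind_per_m3
print(f"Air is roughly {ratio:.1e} times denser than the solar wind")
```

By this rough count, a cubic meter of air holds a few billion billion times more particles than the same volume of interplanetary space. Thin soup indeed.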

And yet, in the mind of the physicist, the empty space between atoms and particles, even the space between the planets and between the stars, is laced with the fields that are associated with dynamic particles. These fields include the electromagnetic field accompanying the photons4 flying outward from the sun and from any other release of energy, or the Higgs field accompanying the long-sought Higgs boson5 that enables all the other particles in the grand vision of quantum mechanics to have mass. So “empty” space is full of—well, let’s call a field the “potential” for things to happen if the right amounts of matter and energy are present. So empty space has structure—or at least the possibility of structure—based on the presence and number of those nano-sized baseballs, dead flies, and other bits of matter or energy, on how much mass each one contains, and on how fast it’s moving.

Science fiction writers have taken this idea of the structure of empty space to absurd but imaginatively useful limits. For example, the empty space of the physical universe is envisioned as folded and crumpled in dimensions more numerous than the three—x, y, and z—coordinates we use for defining the space in which we normally move around. The idea goes that, if you could focus enough energy at a particular point in normal space, you could break through that folded structure and instantaneously arrive at another place that might be light-years away in your frame of reference but just around the corner in that multidimensional crumple.

Another useful fiction is that, with the application of enough energy, the structure of space itself can be pulled and pushed around like a lump of taffy. This gives rise to the Star Trek warp drive. Using this hypothetical propulsion system, a starship can move faster than light while not exceeding the speed of light, c, the universal speed limit, because its “warp field” collapses the space in front of the ship and expands the space behind it. This is rather like being able to walk along at a hundred miles an hour, rather than the usual human pace of four miles per hour, because the sidewalk bunches up—in the example here, at the rate of twenty-five feet for every step—before your front foot hits the ground, and then it smooths out as you lift your back foot for the next step. You walk in a bubble of collapsing and expanding space and never exceed your normal walking pace. What the warp field does to the ship itself, the passengers, and the empty spaces inside their molecules and atoms is another question.
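The walking analogy is simple ratio arithmetic, and the numbers check out. A toy sketch, assuming (as the example seems to imply) a one-foot stride for round numbers:

```python
# Toy arithmetic behind the "warp walk" analogy; speeds from the essay.
normal_pace_mph = 4.0    # ordinary human walking speed
warp_pace_mph = 100.0    # the target effective speed

# How much sidewalk must bunch up under each step to hit the target:
contraction_factor = warp_pace_mph / normal_pace_mph
print(contraction_factor)  # 25.0

# With an assumed one-foot stride, each step carries you
# twenty-five feet of "real" ground -- the essay's figure.
stride_feet = 1.0
ground_per_step = stride_feet * contraction_factor
print(ground_per_step)  # 25.0 feet
```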

Some theoretical physicists, taking their ideas from the pixelation of a digital image or an LED television screen, propose that empty space is actually just a field of unfilled holes waiting to be occupied by matter and energy. In this view, space is like a giant honeycomb and, rather than moving through it haphazardly, particles and objects simply transition from one invisible cell to the next, blinking into and out of existence in an orderly fashion. For me, that’s a great mind game, but it doesn’t tell you more about the rules behind matter and energy than simply imagining particles and their associated waves flying through empty space.

Finally, because the movements of stars in the spiral galaxies that we can observe do not seem to match the masses and corresponding gravitational fields of those galaxies,6 physicists believe the universe has an unseen component called “dark matter.” This is not only matter we cannot see, but also matter we cannot detect with any of our instruments because it doesn’t interact with the atoms, energies, and fields—except for gravity—that compose the universe we live in. Based on the stellar movements we can observe,7 physicists think that “normal” or “baryonic” matter—that is, particles with known masses like protons and neutrons, the stuff we’re made of—composes only about five percent of the universe, while this dark matter makes up approximately twenty-seven percent.

It gets worse. The galaxies we can see are moving away from each other—and not just moving but accelerating, moving faster and faster—rather than collapsing inward under the gravity of all the matter we can see and detect, plus any contribution from the mass of all that dark matter. Since the outward fling imparted by the universe’s supposed origin in the Big Bang would be at a steady velocity—or even gradually decelerating, as gravity began to take over—something else must be pushing the galaxies apart. Again, whatever this “something” might be is invisible to our senses and undetectable by our instruments, and so it is called “dark energy.” Based on the observed acceleration of the galaxies, this energy is thought to constitute approximately sixty-eight percent of the matter and energy in the visible universe.
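The cosmic bookkeeping of the last two paragraphs adds up neatly, and it implies a striking corollary: of the matter alone, dark matter is the overwhelming majority. A quick tally, using only the percentages quoted above:

```python
# Cosmic inventory, in percent of the total matter-energy budget
# (figures as quoted in the essay).
baryonic = 5.0       # "normal" matter: protons, neutrons, us
dark_matter = 27.0
dark_energy = 68.0

assert baryonic + dark_matter + dark_energy == 100.0

# Of the matter alone, how much is dark?
dark_share_of_matter = dark_matter / (baryonic + dark_matter)
print(f"{dark_share_of_matter:.0%} of all matter is dark")  # 84%
```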

And we haven’t a clue about the nature of either dark matter or dark energy. Physicists attribute the former to objects called WIMPs—weakly interacting massive particles—and MACHOs—massive astrophysical compact halo objects. These are clever names that cloak a bit of an idea but essentially translate as “I don’t know.” And dark energy is sometimes attributed to “vacuum energy,” which gives some structure or property to the empty space between those atomic baseballs and dead flies. Some theories propose that this energy comes from virtual pairs of particles—one of matter, the other antimatter—that randomly pop into existence in empty space and immediately annihilate each other without leaving behind any visible or audible “pop.” So the whole action is invisible to us. The amount of vacuum energy or the number of virtual-pair annihilations can be adjusted to account for the universe’s dark energy requirement. But hey, when you’re summoning pixies or counting angels dancing on pinheads, any number will suffice.8

So, while we can debate whether a glass is half-full or half-empty, we can also fill up that empty place with all sorts of imaginative particles, fields, and structures. For some of us, all this “nothing” seems to be our favorite subject.

1. You knew this one was going to be weird, right?

2. When I write “cubic centimeter,” think of a sugar cube—back in the days when sugar came in little cubes in a box that you poured into a bowl, instead of measured packets of white powder that is usually not real sugar.

3. What a concept is “emptier”! More empty than empty. Perhaps the construction should be “less filled up”—until we get to the something that is really, totally nothing.

4. I don’t count photons among the particles in the solar wind because the photon only has apparent mass—and so physical existence—because it’s traveling at the speed of light. If you stop it in its tracks, it transfers that energy into something else and simply disappears. Physics is complicated stuff.

5. See “What exactly is the Higgs boson? Have physicists proved that it really exists?” from Scientific American.

6. From the vantage point of Earth, all we can see are the stars in other galaxies. We know that they must also contain an amount of nonluminous matter like planets, asteroids, comets, and loose dust and gases. But since those quantities in our own local neighborhood are such a tiny fraction of the mass of the Sun itself, we discount them in computing the mass of any galaxy.

7. Based on the masses we can see, we would expect the stars closer to the center of the galaxy to move faster than those out on the rim, like wood chips circling inside a tornado or whirlpool. Instead, the stars appear to move in a relatively fixed pattern, as if they were painted on a spinning disk. To achieve this effect, you would need more mass in the system than you can account for by the stars we can see.

8. See also Three Things We Don’t Know About Physics (I) from December 30, 2012, and (II) from January 6, 2013.

Sunday, June 18, 2017

Iambic Life and Trochaic Life

Poetry in the English language seems to settle—when it settles down at all, given the modern distaste for rhyme and meter—into a series of mostly two-beat measures, like a continuous handclap: dee-DAH, dee-DAH. Or sometimes DAH-dee, DAH-dee. Kind of like a heartbeat: lub-DUB, lub-DUB.1

Compare the stressed and unstressed syllables in two pieces of poetry. The first is familiar from William Shakespeare’s Hamlet, which, like most of his plays, is written in iambic pentameter:

To be, or not to be? That is the question—
Whether ’tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles,
And, by opposing, end them? To die, to sleep—
No more—and by a sleep to say we end
The heartache and the thousand natural shocks
That flesh is heir to—’tis a consummation
Devoutly to be wished! To die, to sleep.
To sleep, perchance to dream—ay, there’s the rub,
For in that sleep of death what dreams may come
When we have shuffled off this mortal coil,
Must give us pause. There’s the respect
That makes calamity of so long life.

Five measures to the line, and the second syllable in each measure stressed.2

Now read a piece from Rudyard Kipling’s The Explorer, written in trochaic octameter:

“There’s no sense in going further—it’s the edge of cultivation,”
So they said, and I believed it—broke my land and sowed my crop—
Built my barns and strung my fences in the little border station
Tucked away below the foothills where the trails run out and stop:
Till a voice, as bad as Conscience, rang interminable changes
On one everlasting Whisper day and night repeated—so:
“Something hidden. Go and find it. Go and look behind the Ranges—
“Something lost behind the Ranges. Lost and waiting for you. Go!”

Eight measures to the line, and the first syllable stressed in each measure.

The words are so chosen and placed, as if naturally occurring, that the lines can only be read in one way. Try reading them with the stresses reversed, and your tongue gets tangled up.

In the Shakespeare, you have to place the stress and the importance on the second syllable:

To BE or NOT to BE that IS the QUES-tion

For IN that SLEEP of DEATH what DREAMS may COME

In fact, you could drop out the unstressed words and you would still have the sense of the verse surviving in telegraphic form, almost like a text message.3
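That “telegraphic” claim is easy to demonstrate mechanically. A toy sketch, where the stress-marked input line and the capital-letters heuristic are mine, for illustration only:

```python
# Keep only the stressed words from a stress-marked line, where
# stress is shown in capitals (as in the scansion examples above).
line = "To BE or NOT to BE that IS the QUES-tion"

# Heuristic: a word counts as "stressed" if its first two letters are
# capitals; this skips sentence-capitalized words like "To".
stressed = [w for w in line.split() if w[:2].isupper()]
print(" ".join(stressed))  # BE NOT BE IS QUES-tion
```

Read aloud, “be not be is question” still carries the gist, like a text message, which is exactly the point.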

In the Kipling, the words and structure force you to pay attention to the first syllable:

BUILT my BARNS and STRUNG my FENC-es IN the LIT-tle BORD-er STA-tion

SOME-thing HID-den. GO and FIND it. GO and LOOK be-HIND the RANG-es—

Here again, the stressed words and syllables carry the sense of the poem. And the stress itself conveys the urgency of the whisper: “Go and find.” “Go and look.”

It makes me think that these two opposite forms of reading—stress first versus stress second—almost define two separate approaches to life.

In the Kipling style, life is full of trochees, with that impetuous initial stress that leaves the second almost unvoiced. The tone is imperative, commanding, insistent, thrusting, and sure of itself. It is the voice of a British serving officer. It is the voice that drives men into battle or sends them overseas to seek their fortunes.

In the Shakespeare style, life is made up of iambs, with that hesitant, unstressed first syllable and the second firming up the sense of the matter. The tone is reflective, contemplative, associative, conjoined with lots of “ands,” “fors,” and “ifs,” and yet ultimately resolute. It is the voice of a mature person weighing consequences—and not just in young Prince Hamlet considering suicide but in all of Shakespeare’s plays. It is the voice that invites us inside the character’s thinking.

When I think back on various people I have known, both in life and in literature—for yes, we readers have invisible friends—I believe many would line up under one banner or the other, the iambic types and the trochaic types.

The trochees are direct and obvious in their life and attitudes: slam-dunk, there-you-are, and sometimes in-your-face characters. For them, life is simple and unquestioned. Hit the ground running. Take the shot. Make your move. Accept the facts as they are presented. This might mean they sometimes jump to conclusions and precipitate hostilities that might better be avoided. But so be it. They also tend to win gun battles and, through their decisiveness and audacity, get the biggest piece of cake.

The iambs are more subtle and reasonable in their approaches: on-second-thought, but-what-about?, and sometimes oh-let’s-not! characters. For them, life is complex and full of questions. Pick and choose. Consider all the angles. Try to understand. Examine the facts before accepting them. This might mean they sometimes miss out on the best items in a holiday sale and fail to stand up to bullies. But they also win chess games by seeing three or four moves ahead and, through their thoughtful and sensitive natures, savor the piece of cake they do finally get.

Which personality is better? That depends on the circumstances. A trochee makes a good soldier and a competent administrator of complex systems that resolve into obvious patterns, like running a railroad or an electric-power grid. These are activities where the hesitations and second thoughts of an iamb can cause no end of trouble. But you don’t want a trochee for a military strategist or judge in a court of law. Those are activities where critical examination, questions, and playing three or four moves out are more reliable. Which makes the better and more lasting friend? That depends on whether your taste runs to playing football, with its rough-and-tumble blocking and tackling, or fencing, with its subtle weave of parries and ripostes while respecting an opponent’s personal space. One kind is good at playing poker; the other tends to play bridge.

Do these opposites attract? In this case, I think not. The tendency for trochees to pounce and for iambs to react would lead the pair to get on each other’s nerves. The iamb would end up nursing hurts that the trochee might never perceive. Or the iamb would get back at the trochee in ways the latter would never see coming.

Are men trochaic and women iambic? Only in your dreams. I know women who are deadly quick and not at all subtle—and men who need to walk three times around the house before opening a drawer. These are not masculine and feminine characteristics played against type. They are basic approaches to life belonging to the species H. sapiens without gender distinction.

In The Iliad, Achilles and Agamemnon are blunt trochees, while Hector and Odysseus are subtle iambs. Anna Karenina and her impetuous cavalry officer, Count Vronsky, are a pair of trochees, while Stepan Oblonsky and his wife Dolly are, for all their frivolousness, more iambic. Ellen Ripley, in the Alien series, is an iamb despite her tough-gal heroism, because her basic attitude is stop-wait-and-look, and she sees right through Lieutenant Gorman or the Company’s devious Carter Burke. In the Dune series, the Fremen, despite their reputation as fierce fighters, “were supreme in that quality the ancients called ‘spannungsbogen’—which is the self-imposed delay between desire for a thing and the act of reaching out to grasp that thing.” That is an iambic trait: wait and see. Americans are generally considered to be trochaic, while Europeans, the Chinese, and Japanese are thought to be more iambic.

Of course, human beings in specific cases, taken one by one, are far too complex to exist under such a crude dichotomy of characteristics. That is why most of my examples above come from literature, where the author emphasizes one approach, one mindset or trait, to prove a point. And yet, in real life, some people still consistently hit that first syllable hard, while others pause and reflect on that second syllable. Dah-dee, or dee-dah, the beat of life goes on.

1. This may have something to do with the fact that English, as an amalgam language, drew on Celtic, Norse, and Germanic roots that were formalized and spread by bards and poets reciting their verses in the lord’s banquet hall, rather than by written records.

2. There are already exceptions to this formula, of course. For example, the first four lines demand that the final words—“question,” “suffer,” “fortune,” “troubles”—be partially swallowed on the second syllable in order to maintain the beat. Well, nobody’s perfect—and a perfectly restrictive meter would eventually become boring, like riding a rocking horse.

3. And now that I think of it, the actors in a noisy Elizabethan theater—where the patrons and groundlings are calling to one another and chatting among themselves—might have to shout their lines. Only the stressed words would cut through the noise, and they would have to carry the sense of the play.

Sunday, June 11, 2017

Sense and Imagination

All art forms bear a certain similarity to each other. For example, they invite creativity: they allow for the expression of individual and personal tastes and interests; they celebrate the introduction of new constructions or combinations of existing ideas and forms; and they expect the artist to explore new methods, stretch current standards, and try novel perspectives and viewpoints. An artist working in any format is presumed to differ in substance and style from every other artist and to explore new ways of interpreting his or her art.

Almost all art forms appeal directly to the senses. For painters and photographers, it’s the visual sense associated with color, proportion, and perspective. For musicians, it’s the auditory sense associated with timbre, harmony, and tempo. For perfumers, it’s smell and the associated scents of flowers, organic pheromones, and other chemical-based memories. For chefs, it’s taste and texture, associated with flavors, scents, and the visuals of presentation.

Writing is different, however. In reading a written piece, the image of the type on the page or the feel of the book’s binding is a minor sensory note that is not particularly related to the story. Writing appeals not to the senses but directly to the intellect and the imagination. That’s one reason why books as bound paper, electrons on a screen, or a voice reciting from a loudspeaker can equally carry the content of the work.

Other arts might also tell a story. Tchaikovsky’s ballet Sleeping Beauty presents the recreated visuals, with associated melodies and harmonies, of the classic fairytale. But one can watch the dance for just those graceful movements, or listen to the music for just those blended tones and tempos, and enjoy the ballet without knowing the story. Similarly, one doesn’t have to know the story of Peter and the Wolf or Lieutenant Kije to savor Prokofiev’s works. Indeed, a Tchaikovsky or Prokofiev symphony has no story thread at all, and it’s still comprehensible and enjoyable.

Similarly, you can look at a painting by Monet or Bierstadt and learn something about the environs of Paris or the grandeur of the American West. But you can also enjoy these works just for their color and their use of light and shadow. Indeed, you can also look at any abstract painting for its blend of shapes and colors, because it has no recognizable object and may not even have a unifying idea, and it’s still enjoyable.

When a writer tries to emulate an impressionist painter’s approach in telling a story, the reader is often left unsatisfied. That’s because most readers treat what they are encountering in the words on the page as a form of concrete reality that only differs from real life in that it is simply occurring inside their heads.1 Even a work of fiction draws on images, ideas, emotions, and dialogue that the reader can treat as if they were a form of reality.2 Vague imagery and surreal dialogue—meant to convey foggy or drug-induced impressions and half-remembered experiences, without that hard-edged sense of concrete reality—usually create only uncertainty and confusion in the reader’s mind. And when a writer tries to emulate an abstract painter’s disconnected shapes and colors, abandoning story and sense for the sake of pretty words, like a Dadaist poet, the work becomes virtually unreadable. Either that, or it can only be appreciated by readers who care more for innovative and daring stylistics than they do for immersing themselves in the story.

And there, I believe, arises the power of writing over other art forms. More than painting or music, the written word requires the active participation of the reader. A gallery patron can wander from room to room, appreciating this painting, ignoring that one. A concert goer can listen intently to the music or ignore it, letting the blend of sounds wash past his or her ears while thinking of something else. A diner can wolf down an exquisite meal without savoring its flavors or appreciating its presentation. But a reader cannot follow the thread of an article, argument, or story without focusing on the words, absorbing them, interpreting them in terms of his or her own vocabulary, knowledge, and experience, and helping the author create the logical or imaginative structure—the relationship of ideas, or the embodiment of character and plot line—inside his or her own mind.

Unlike the sensual arts, which can stay outside at the limits of our ears and eyeballs, or pass quickly over our tongues, the rhetorical and literary arts must pass through to the brain and work their magic directly on the reader’s insight and imagination. This is where the conscious mind builds its perceptions of the world. Unless this active collaboration proceeds, the words remain inert marks upon the page or sounds spoken into empty air. This need for reader collaboration creates a particular challenge for the writer.

Any artist faces a certain amount of audience resistance. Gallery patrons tend to focus on and gather around paintings that have some familiarity for them, something they can approach as they have approached it before. This is why artist retrospectives and museum exhibits of famous paintings from another era are so successful: the public already knows that it will like and understand what it sees. But the new painter, striving to present some of that individual taste or explore those stretched standards, presents even the most active and receptive viewer with a question mark. “Do I like this?” “Do I understand what the artist is doing?” And ultimately, “Do I care about this?”

Similarly, a musician trying out new rhythms and new blends of harmonics risks having the audience react at first as if they were hearing mere noise. Two hundred years ago, the public and the music critics both reacted to Beethoven’s now-beloved symphonies as discordant and a caricature of other, more familiar composers.3 This may be one reason why later composers like Dvořák and Holst took their themes from folk songs and country dances. In many ways, because a piece of music flows across time and at first hearing cannot be stopped, studied, and analyzed the way a painting can, the audience for a new musical work has less chance of asking those probing questions about liking and understanding.

The writer’s challenge is that readers are even more selective. While a person in a museum might glance at a Dali painting, even though he or she cares nothing for whimsically impressionist art, or a radio listener might catch part of a song from a heavy-metal rock band, even though his or her tastes run to country music, a reader is much less likely to pick up a book or a magazine full of stories devoted to an unfamiliar or disliked genre. A person who avidly reads science fiction might never encounter a romance story, and vice versa. And unless the reader opens the book, focuses on the words, starts giving them attention, and follows the thread … the magic does not happen.

Even when the tastes and taints of genre fiction are not involved, as with a straightforward think piece on some popular scientific, political, or economic question, the reader’s mind may have already erected barriers based on his or her previous thinking about the subject. So, to be read at all, to even start the reader’s mind along the thread of the article’s logic or the story’s plot, the writer must create a breakthrough moment. The article must start with a claim or a question that the reader has not thought about before or that ignites new impressions jarring his or her ordered sense of the world. The story must begin with a piece of action or a mystery that draws the reader deeper into the plot and characters. And even before that, the book or magazine seeks a dynamic piece of cover art or a gripping blurb to draw the reader inside to the words on the page.

Writing, in its appeal to the imagination and understanding rather than the senses, differs from the other art forms in another way as well. It’s the only form that has no raw materials and uses no instrument in its expression. The painter buys canvas by the yard and pigments by the tube. He or she prepares one canvas at a time and sells it to one buyer only. The photographer and the digital artist might do a little better, in that a pixelated image can be copied, reproduced, and sold many times to many different buyers. The musician plays an instrument or sings inside a venue once for a paying audience whose size is limited by the capacity of the club or concert hall. He or she may have the performance captured as sound waves on tape or in digital format and sold again and again. The chef creates a meal out of selected raw ingredients, working in a single kitchen space, and then sells the product at the rate of one plate to a customer.

The writer, in contrast, has no physical raw materials. Well, in the most basic form, a pen spreads ink lines across a piece of paper, and for a novel that’s a lot of ink and paper. Most writers these days use a computer, where the ink lines become typed characters that flash briefly on the screen, become stored as ASCII codes in dynamic memory or on a hard disk, and get translated into electrons traveling through wires and across the air to the reader’s screen, or become imposed in patterns of ink or toner on a roller and spewed out in multiple copies of printed pages. The physical form is irrelevant. Some writers even compose most of the story and dialogue in their heads before ever setting pen to paper or fingers to keyboard.4 The writer’s stock in trade is invisible, not even as tangible as the sound waves the musician or the singer produces. The “stuff” of an article or story is built wholly out of the writer’s vocabulary, his or her sense of grammar, syntax, and structure, and an act of pure imagination.

As an idea, the writer’s art form is conceived and produced, and as an idea it is received in the reader’s head. All the rest is energy and electrons. And that is the mystery of being a writer.

1. Actually, all reality occurs solely inside our heads. Our brains make up what we think of as objective reality from visual, auditory, tactile, and other cues brought in through nerves connected with our various sense organs. Yes, the “real world” does exist outside of us, but our perception and understanding of it are a construct as ephemeral—existing only in our short- and long-term memories—as any fairytale.

2. And when that seeming reality tells a story with fantastic, imaginative, or magical imagery, elements, and insights—as if the story constituted a part of the reader’s everyday world—then the pleasurable effect is heightened. At least, it is for some readers.

3. A view that I personally maintain—minus the aspects of caricature—for most of the works of Dmitri Shostakovich. But, ah! I do love his Symphony No. 10 in E minor.

4. I can’t do that, of course, but I still must have some pieces of the story, fragments of sentences and paragraphs, and the voices and partial exchanges of my characters swirling around in my head before I can sit down to write my fiction.

Sunday, June 4, 2017

A Game for Gentlemen

Perhaps I appreciate baseball only because I did not start to watch the sport and follow my home team—the San Francisco Giants—until after I arrived at middle age. My dad followed baseball and his hometown team—the Brooklyn Dodgers—when I was growing up. But, back then, I only watched intermittently when he had a game on television. It’s not that I wanted a faster-moving or more violent sport; my preference was for an old movie on the back channels. He tried to teach me about the game and get me to share his interest, but I only half-listened. Now, years later, I have come to see what he liked about baseball: it’s all about personal behavior, and this is the behavior of gentlemen.

To start with, baseball is not a game played by the clock, unlike football, basketball, or soccer. There is no time pressure, although the umpire will urge the game along if a pitcher or batter takes too long. But the play is not framed in periods of minutes, and there is no end point until one team has beaten the other after nine—or more—innings of play. So a baseball game can take all afternoon, or go on into the night—as one recent Giants night game did, for seventeen innings, long after my bedtime. Gentlemen are not pressured by the clock; they take their time and do things right.

And then, baseball is not a contact sport. Other than a baseman1 laying the gloved ball against, or touching the ball itself to, some part of a runner’s anatomy, players do not intentionally touch each other or interfere with their play. The opponents do not tackle one another as in football, or guard and block one another as in basketball. Sometimes a pitcher will hit the batter with a pitch, but it’s not intentional—usually, unless it’s payback for an earlier incident—and the penalty is that the batter immediately goes to first base. Yes, players do get injured. Outfielders and infielders both dive for catches or collide with each other going for the same fly ball. Runners jam fingers and joints sliding into base, and they can collide with basemen. Catchers and umpires get hit with pitches. But these injuries are never intentional punishment, and there are no bad feelings. Or not usually.

Although the sport is played with great emotion and intense team rivalry, the players clearly do not hate or despise each other. You can see a runner standing next to a baseman and exchanging a friendly comment or sharing a joke. You listen to the announcers, who are generally assigned to one team and are as partisan as any fan, and they will praise the skills of an opposing player. Baseball is a game of personal skills: Can you pitch? Can you hit? Can you catch? Can you run? The announcers and the players in interviews never talk about how badly the opposition might be playing—or perhaps they will say, with some regret, that the other team or player is in a slump—but instead how hard they have to work to beat them. The losing team never talks about how the winners might have used some trick or cheat to beat them, only that they themselves could have done better. When a batter strikes out, he is not angry at the pitcher’s clever use of fastballs and curves or sliders, but angry at himself for missing them. When a runner gets thrown out at a base, he is not angry at the skill or speed displayed by the baseman, but angry that he himself didn’t run harder or slide more purposefully. Of course, everyone gets mad at the umpire sometimes over what he thinks is a bad call. But the anger doesn’t last long.

This is a game that rewards sportsmanship. And the great players are respected for their kindness and good spirits. The fan favorites are the players with the best and most cheerful attitudes.

In the same way, this is a game that recognizes and rewards personal effort and excellence. Unlike football or basketball, where a player’s individual actions can become lost in the flurry of activity that follows the ball across the field or court, in baseball the motion of the ball highlights the efforts of only one or two men at a time. The pitcher is under scrutiny for either a balk or a throw to base with a runner trying to steal until the ball leaves his hand and approaches home plate. The batter is under scrutiny as the ball comes at him and he either watches it go by or swings at it—and then either connects or misses. The catcher is watched to see if he makes a clean catch or fumbles a scud or a wild pitch into a loose ball that lets any runners advance. And when the ball is hit, it falls into the sphere of one or two infielders or outfielders who have the responsibility for catching and returning it. Although everyone plays a part on the team, some more important—say, the starting pitcher—than others—such as any one of the basemen or outfielders—for a few seconds of play the entire stadium is focused on the ball’s flight and the man who is throwing, hitting, or catching it.

Unlike other games, where much talk—and many a bet—turns on the “point spread,” or by how much one team outscores the other, baseball is a game of win or lose. Yes, winning a game by a crushing ten-to-one will send the fans on one side home happy for the night, while fans of the losing side will commiserate and cry woe for a day. But the game goes down in the record books as simply a win, whether it’s ten-to-one or two-to-one. In a game that’s played almost every day in a long season, rather than just once a week in the fall, the figures to watch are total wins and losses, not margins of victory.

While the team is not judged in the long term by whether its wins and losses were crushing or achieved with a single run, each player bears a huge catalogue of statistics, headed by his career average, seasonal average, and record against the opposing team—and sometimes against another opposition player, such as a batter against a pitcher. Starting pitchers are judged by how many innings they stay in the game, how many opposing runs count against them, and the number of walks and strikeouts they throw. Hitters are judged by their average number of hits per at-bat, how often they get on base, how many total bases they average per at-bat—the “slugging percentage”—and how many home runs and runs batted in they score. Runners are judged on how often they can steal a base. It’s through these individual totals and percentages that the team’s lopsided wins and losses become visible. So, while the game is a team effort, it’s the individual records that tell the story.
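For readers who want the arithmetic behind those statistics, here is a minimal sketch of the standard formulas. The player numbers are invented for illustration:

```python
# Standard hitting statistics, sketched with made-up season numbers.
def batting_average(hits: int, at_bats: int) -> float:
    """Hits per at-bat."""
    return hits / at_bats

def slugging_percentage(singles: int, doubles: int, triples: int,
                        home_runs: int, at_bats: int) -> float:
    """Total bases per at-bat: each hit is weighted by the bases it earns."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

# A hypothetical season: 150 hits (100 singles, 30 doubles, 5 triples,
# 15 home runs) in 500 at-bats.
print(f"{batting_average(150, 500):.3f}")                    # 0.300
print(f"{slugging_percentage(100, 30, 5, 15, 500):.3f}")     # 0.470
```

Batting average counts every hit the same; slugging percentage weights each hit by the bases it earns, which is why power hitters show a large gap between the two numbers.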

When I was in high school, I had a teacher who was also a coach in one of the other popular sports—football or basketball, I forget which—who said baseball players weren’t real athletes. He pointed out that they could still promote cigarette brands—back when anyone advertised those foul things—because baseball players only had to throw or hit a ball every once in a while, and then they ran for just ninety feet. Basketball players, he said, were in continuous movement for ten or twenty minutes at a time, and that required real stamina. But now I see that baseball players show a different kind of athleticism, one that’s both mental and physical.

An outfielder has to stand a hundred or two hundred feet from the action at the plate and has nothing to do with his feet or hands for whole minutes at a time while the pitcher throws and the batter and catcher contend with a series of balls and strikes. An infielder may stand just ninety feet or so from this action in the same apparent idleness. But all those people playing behind the pitcher must follow the action intently, because a hitter who connects with the ball can send it on a line drive or pop fly to any part of the field, and then the fielder in line with the ball’s flight has just two or three seconds to observe and react. An outfielder may have to run fifty feet to the right or left to catch the ball. An infielder has less time and often has to dive right or left and land with his glove outstretched to catch the ball. A starting pitcher, on the other hand, is in near-continuous movement during the inning and has to throw the ball as many as a hundred times with perfect concentration and control over a game that might last three hours. Baseball is a game of intense mental focus and taut-nerved preparedness in apparent idleness, during an inning that can sometimes last as long as or longer than a basketball period or a football quarter.

Baseball is a simple game that children can learn and play with enjoyment, or that people at a picnic or barbecue without much experience can pick up and play barehanded. It’s also a game of subtle skills and strategies. A pitcher who can shave a fraction of a second off his delivery time, gain a few miles per hour on his fastball, or master a complicated throw like the slider or changeup can increase his standing in the record books. A batter with the good sense to lay off a pitch that’s headed away from the strike zone can increase his on-base percentage. And knowing when to rein in an eager player who swings at everything, or intentionally walk a skilled batter likely to make a double or triple, can increase a manager’s win-loss record.

Baseball is not for everyone. Certainly, football stadiums hold more people and fill more seats on a Sunday. Basketball tournaments enjoy a more intense following, especially during March. But I am proud that baseball, which plays nearly every day from April through October, is still considered America’s national pastime. A culture that values this game which celebrates patience, concentration, personal excellence, and sportsmanship is still strong at its roots.

1. Throughout this article, I use the term “baseman” and the masculine pronouns “he,” “his,” and “him” intentionally, because that is the composition of players in the major leagues today. Of course, women can and do play baseball and its close cousin, softball. And when women are admitted to the major leagues—finally! again!—it will be interesting to see if the game changes much at all.

Sunday, May 28, 2017

The God Molecule

I am a convinced evolutionist. Having worked in a biotech firm that made genetic analysis equipment and reagents, and so having learned a bit about biology, I can see the relationships among all life on this planet through our shared inheritance of the DNA-RNA-protein coding system. Everything on this planet that we think of as being alive—from bacteria to bats to banyan trees—uses this system. And every species, genus, family, order, class, and on upward through the traditional biological classifications attributed to Carl Linnaeus in the 18th century can be measured and compared by the number of shared genes and genetic mutations the representative organisms possess within this coding system.1

It’s not that every organism has a “DNA-like” system, allowing for some mechanical or chemical variations. No, they all—from slime molds to sea urchins to sparrows—use the same DNA system, intact and whole. That system has four bases—adenine (A), cytosine (C), guanine (G), and thymine (T)—which are always paired A to T and C to G. It arranges these four bases, which are variously purines or pyrimidines, in a three-base “reading frame,” yielding sixty-four (4 × 4 × 4) possible code combinations, called “codons.” It uses these combinations to call up just twenty amino acids from among the 200 or more amino acids that exist in nature.2

There are minor variations within the system itself. For example, DNA differs from RNA in that the second carbon atom in the sugar ring of each nucleotide has an attached hydrogen atom (H) rather than the hydroxyl group (OH) found in RNA—and thus DNA is the “deoxy” ribose nucleic acid. And the RNA strand itself substitutes the base uracil (U) for thymine (T) in transcribing the coding sequence. But those are about the only differences—and they are used in all branches of life.

All organisms use the same DNA start codon ATG—which also codes for the amino acid methionine—as the beginning of any protein-coding gene and as the start of the messenger RNA strand (where it’s written AUG) that will translate the gene into a protein. So all protein strings start with methionine. And all organisms use one of the three DNA stop codons TAG, TAA, or TGA (in RNA as UAG, UAA, or UGA)—none of which codes for any amino acid—to end the gene and its corresponding messenger RNA strand.
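The start/stop convention is mechanical enough to sketch in a few lines of code. This toy translator uses only a handful of the sixty-four codons (a real table has an entry for each), but the logic—scan for ATG, then read three bases at a time until a stop codon—is the same:

```python
# A toy subset of the standard genetic code. The full table has
# sixty-four entries; stop codons map to None (no amino acid).
CODON_TABLE = {
    "ATG": "Met",  # start codon; also codes for methionine
    "GGA": "Gly", "GGC": "Gly", "GGG": "Gly", "GGT": "Gly",
    "TTT": "Phe", "AAA": "Lys",
    "TAA": None, "TAG": None, "TGA": None,  # stop codons
}

def translate(dna: str) -> list[str]:
    """Find the ATG start codon, then read three bases at a time
    until a stop codon (or the end of the sequence)."""
    start = dna.find("ATG")
    if start == -1:
        return []  # no start codon, no protein
    protein = []
    for i in range(start, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3])
        if amino is None:  # stop codon (or a codon missing from this toy table)
            break
        protein.append(amino)
    return protein

print(translate("CCATGGGATTTAAATAGGG"))  # ['Met', 'Gly', 'Phe', 'Lys']
```

Note how the reading frame is set entirely by where ATG falls: the two leading C bases are simply skipped, and translation stops cleanly at TAG.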

Nowhere on Earth do we find organisms that use a different coding system; or use different bases from among the eight possible purines and pyrimidines found in nature; or employ a four- or five-base reading frame for more possible combinations, or a two-base frame for more compact and efficient coding; or call on a different set of amino acids to create new and exotic proteins. Some of the mechanisms that support the system are different. For example, single-celled prokaryotes, which have their DNA scattered throughout the cell body (rather than contained within a nucleus, like the multi-celled eukaryotes), use a different ribosome to translate the messenger RNA strand into a protein string. This is one of the reasons you can take an antibiotic to kill the bacteria in your body without harming your own cells: the medicine attacks the bacteria’s protein coding mechanism, and yours is different from that of the bug that’s infecting you.

As a mechanism for evolution, the DNA-RNA-protein coding system is superb. The DNA molecule is fragile, susceptible to environmental abuse through radiation and chemical assaults. But the system itself, with sixty-four codons calling twenty amino acids, has a lot of redundancy. You can change or knock out the third base in the reading frame’s codon—sometimes even the second base—and you still have a good chance of calling the appropriate amino acid.3 And large complex proteins often have parts of their molecular structure that can be modified or removed without changing their essential function. If some smart person were trying to design a chemical system that allowed for environmental change, but not too much and not too fast, the DNA molecule would be it.

And one of the great biological and geological understandings of the past hundred years or so is that the Earth’s environment is constantly changing, and that species must continuously change with it or else they would die out quickly. Our Sun goes through millennial and epochal cycles of warming and cooling, the continents drift over the ages on tectonic plates, mountains rise and erode away, streams meander and shift, lakes form and evaporate. An animal or plant species that could not change its form or function in some degree, large or small, over the generations would not survive in this changing landscape.

So the old Platonic idea, preserved in Genesis, is pure intellectual fantasy. Plato held that, while horses in the field may come in many varieties, different sizes and colors, shapes and strengths, there exists somewhere an ideal form—the Horse, perfected in the mind of God. This gives rise to the notion that a species, like Equus ferus caballus or Homo sapiens, has some mystical, “pure” essence. But each variety of horse, as well as those of zebras and donkeys, was adapted through a haphazard blending of mutational changes to fit its particular niche in an environment that was basically solid, level ground suitable for running over and covered with grass that was good to eat. If it had benefited the horse family to have toes like wolves or lions—or like their ancestor, the multi-toed Eohippus—instead of hooves, then we would now have horses with toes. If an early horse family and its line of ancestors going back millions of generations to some shrew-sized early mammal had evolved the metabolism to fill a niche that profited from eating grubs and worms, then we might now have horses with beaks.

But there’s paradox in all this. While the DNA-RNA-protein coding system is admirably suited to evolve the organism it creates and fit that species into its current environment, the system itself shows no hint of its own evolution.4 Again, the supporting mechanisms may have evolved, like the various forms of ribosomes. For example, the single-cell prokaryotes have a chromosome that is a closed, double-stranded loop floating inside the cell body—often accompanied by smaller accessory loops called “plasmids”—while the multi-cell eukaryotes have chromosomes that are tightly coiled, wrapped around knobs of protein called histones, and segregated inside the cell’s nucleus. In either case—from Salmonella typhi to salmon—it’s the same DNA, transcribing its code onto messenger RNA, which some form of ribosome then translates into a protein.

It is possible that the Earth originally spawned multiple coding systems. DNA-RNA-protein might have been in competition with other molecular forms and chemistries.5 For example, within the domain of our current carbon-based organic chemistry, early life forms might have employed more bases, larger reading frames, and more amino acids from which to choose. The evolutionary development of something as fundamental as a molecular coding system would have taken place so early in the start of life on this planet that other competing forms might have died off before the crust quite cooled. And these complex chemistries, without the skins, shells, and scales of organic bodies to protect them, would have vanished without a trace in the turbulent environment of early Earth. But surely somewhere, hidden among the lichens and fungi, the bacteria and the molds, shouldn’t we have found an example of some poor, under-developed organism that preserved at least one of these alternate coding systems? No, nowhere. Not in the strange tube worms and sea spiders clinging to a volcanic vent in the deep ocean a hundred miles from their nearest organic neighbors. Not in Antarctic lakes buried under miles of ice for millions of years. The entire living world is created from the same DNA-RNA-protein coding system that gives us mangos, manatees, and human beings.

Which raises the tantalizing prospect that this coding system did not evolve here in the first place. Maybe it was seeded when a carbon-based astronaut from a distant star, who was visiting Earth soon after the crust hardened and the oceans formed, dropped a glove with a bit of his/her/its cellular chemistry attached. Maybe a spore of some alien bacteria blew into the solar system on a grain of dust from some other star system.

And there are people who can accept that organisms may evolve through changes in their DNA but who doubt that the basic chemistry—those ribose rings and phosphate bonds, the structures of purines and pyrimidines, and all the rest—could have come together in the first place to record and transmit hereditary information. Not here, and maybe not anywhere in the universe. And even if those first long-chain molecules could arise here or elsewhere, why would they? What would they be preserving for posterity, other than their own coding? And then, why would they put together A and T, C and G, in preference to any other combination? For, after all, molecules at that level might form spontaneously, but they wouldn’t create anything. Eventually, they would break apart under the stress of some other chemical reaction. Bare molecules have no reason to preserve one sequence as more successful in their environment than any other.

Although I am not a deist and have no belief in a supreme being, it is possible that the DNA-RNA-protein coding system was designed for the purpose that it so admirably fulfills. DNA and RNA are not even that hard to make. At the biotech company, we had a factory that manufactured the ribose rings with their attached bases and their phosphate-bonding tails. We synthesized long chains of single-stranded DNA as primers designed to anneal to matching sequences in wild DNA, so that a polymerase could copy and then exponentially amplify them, in the process known as “polymerase chain reaction.” This is the basis of most techniques of genetic analysis and sequencing. We now know how to knit those synthetic strands of DNA into artificial chromosomes, wrap them in a coat of proteins and lipids similar to a cell membrane, give them a basic metabolism, and nurture them in the laboratory as a primitive cell.6

If human beings can master—or at least start using—this technology a mere sixty years after we first defined the DNA molecule, then a more advanced civilization on a planet around one of the hundreds of billions of stars in the Milky Way, or in one of the trillion other galaxies in our universe, might have become very good at this kind of synthetic creation. Perhaps this forefather civilization was carbon- and DNA-based itself, and they metaphorically plucked a rib from their side and turned it into the seeds that became all life on Earth and perhaps on other planets, here and around other stars. In that case, they might know the secret of how the first DNA coding system itself evolved. Or perhaps these scientists or missionaries—or gods—evolved in some other form and merely thought up the DNA-RNA-protein coding system as a good way for creating that temporary reversal of entropy we call life.

And maybe—I’m just freewheeling here—there really is a Supreme Being, an All Soul, a Divine Spark, Which/Who touched the Earth’s chemistry with its all-seeing inspiration and foreknowledge, and thereby designed a molecular system that would transform a barren planet into one teeming with chemical energy, adaptability, and eventually with thought itself.

1. See also DNA is Everywhere from September 5, 2010.

2. Two additional amino acids in common use are added to some proteins by enzymatic action after the messenger RNA sequence has been translated into the polypeptide chain.

3. See, for example, this chart of the genetic code from a biology course at Kenyon College.

4. Well, one hint. The RNA molecule is always single stranded, and that OH group on the second carbon of each nucleotide keeps the long-chain molecule from coiling around itself. DNA is always double stranded, and that missing oxygen lets it form the iconic helix shape. So the RNA form is simpler, straighter, more exposed to chemical damage, and perhaps more primitive. This suggests to some biologists that, as a record-keeping molecular system, RNA may have come first and then DNA, with its tighter, more complex, more robust structure, may have evolved from RNA. But we find no living organisms that use RNA alone, without DNA. However, some viruses, called “retroviruses,” have RNA-only coding and use a reverse transcriptase to make DNA copies of their code once they invade a host cell.

5. For example, it’s possible to imagine a silicon-based life form, with silicon taking the place of carbon in its DNA-surrogate’s ribose rings, purines, and pyrimidines, and with arsenic taking the place of phosphorus in the phosphate bonds that provide cellular energy and knit together those ribose rings into the DNA structure. Such a system would have heavier molecules, because the component atoms have a higher atomic weight, and the molecular bonds would be weaker, because the electrons holding the atoms together would be traded among shells farther out from their atomic nuclei. However, such heavy, fragile molecules would be at a disadvantage in competition with a lighter, stronger DNA molecule.

6. See, for example, “ ‘Minimal’ cell raises stakes in race to harness synthetic life” in Nature News, March 16, 2014.

Sunday, May 21, 2017

History a Thousand Years from Now

If historians from the 30th century, the far future, look back on our time, what will they see? What will they know and understand about our period?

First, let’s assume that they have full access to our writings and recordings, but they are not personally subject to our current states of thinking and feeling. That is, future historians might be like a modern audience seeing Romeo and Juliet with the original Shakespearean dialogue. They might hear the words as actors of the late 16th century spoke them, and they might have some sense of their meaning. But a modern audience might not fully understand what social forces drive two powerful families like the Montagues and Capulets to gather their extended kin into rival armies and fight in the streets of medieval Verona. And if they did understand those forces, they still might not feel the hatred in their own breasts. For us moderns, the rage of Tybalt and the wit of Mercutio remain distant curiosities.1

However, even an Elizabethan audience might not have been able to say what exact qualities of mind or habit made the Montagues so loathsome to the Capulets and vice versa. The evidence supporting the feud between these “two households, both alike in dignity” does not exist in the play. Instead, it’s an “ancient grudge”—which is Shakespeare’s version of the MacGuffin that sets his plot in motion.

In similar fashion, what would far-future historians know about the passions that seem to be tearing the United States apart and ravaging many of the other Western democracies as well?

On one side, they might see a radical strain of progressivism or passionate futurism. This is expressed in a number of different popular movements—inspired by the writings of various political philosophers like Marx, Lenin, and Mao—but with the same aim of overturning the adherents’ existing society and its political, economic, and social structures. The radical’s goal is to create a new state, a new basis for economic transactions, a new morality, and a new set of relationships between men and women, between parents and children, and between citizens and the state. This new order is always projected from rational, egalitarian, humanitarian, compassionate, and collectivist principles. It is based on theories about human nature and visions of a future that has never before been experienced on a countrywide scale.2 And the fact that every country which has attempted to enact these theories and visions thereby sank itself into chaos, political repression, and self-inflicted poverty is dismissed as a failure, not of the theories and visions themselves, but of the imperfect people making the attempt.

On the other side, the future historians would see a passive strain of conservatism or traditionalism.3 This side of the argument has no popular movements and only weak political associations. This side has no coherent philosophy based on novel thinking but is reflected in writers like Edmund Burke, David Hume, and Adam Smith, who—rather than creating a vision of a new social and economic order—were trying to understand and express how people actually go about their political and economic business in developed societies. The adherents’ goal is not so much to build anything new as to preserve and defend those structures and relationships that have grown up in their own society over the decades and centuries. They don’t mind certain people obtaining and wielding political power and economic advantage, but they haven’t subscribed to the new theories and visions. They don’t mind their society evolving, moving slowly toward different values and accepting different relationships through a kind of unspoken plebiscite. But they resist the notion that a cohesive and doctrinaire group, fired with strong ideas and emotions, should push their society into configurations that are either untested or have been found in the past to be disastrous.

The historians might trace out the sequential steps—the published positions, the party platforms, and the pivotal elections—in this growing social disruption. They might assign causes to the society’s failure to adhere to its former constitutional structures and its market principles, as well as its failure to provide adequate rewards for effort and risk taking. They might read about demands for personal compassion and complete equality among all members of society. They might hear the siren call of an enlightened utopia … but they will not be able to feel its pull. They will remain deaf to the dimension of imagination that drives the collectivist movement. And without an exercise of imagination and empathy, they will not know the depth of revulsion among those others who can accept social evolution but not political revolution.

What the historians will see in our times is a social madness based on masochism and fear. They will see people in the most advanced societies the world has ever known struggling with a loss of existential faith. People on one side insist their lives are impoverished amid an outpouring of goods and services. People on the other insist their lives are threatened simply by words and ideas. People on both sides are convinced the other has no understanding of—or respect for—them and their cause. Modern Montagues and Capulets bite their thumbs and spit on each other’s shadows without a clear remembrance of the ancient grudge that separates them.

Modern Americans look back on the political and social tensions of the 19th century that led to the first Civil War, and they grope for an explanation. “It’s all about slavery.” “It’s all about our way of life.” “It’s all about human rights.” “It’s all about my rights.” We look for simple, easy explanations of a complicated past that ended in four years of bitter war and the loss of more than 600,000 American lives.

If the divisions tearing at our society right now result in a second Civil War, what will future historians say? Granted, that first conflict was regional, based on two societies, North and South, which had grown apart—or had never actually been much alike—with their differing social values and economic systems, although with a single constitutional basis.4 The next conflict will be intellectual and visceral, with enclaves of sentiment and purpose concentrated along the country’s two coasts and among its urban elites, but otherwise with neighbor opposing neighbor across the width of a backyard fence. The next war is going to look more like a street fight or a riot—Montague and Capulet style—than any conflict between settled countries.

Will anyone in the far future understand it better than anyone alive today?

1. For a different sense of this family antagonism, here is Prokofiev’s “Montagues and Capulets” from the Romeo and Juliet ballet. It’s one of my favorite pieces.

2. But see When Socialism Works from October 10, 2010.

3. The word of the moment is “populism.” Supposedly, this reflects an aversion of the average person and the populace as a whole to the theories and visions espoused by a radical elite of political, academic, and cultural thinkers and leaders. A decade and more ago, it was the “people,” the populace itself, who were supposed to align with these theories and visions against the repressive forces found in traditional society. See how the language changes?

4. In fact, the Constitution of the Confederate States of America was practically a word-for-word duplicate of the U.S. Constitution, with some significant differences particular to the Confederate cause. Clearly, the foundation and structure of the government were not in serious contention.

Sunday, May 14, 2017

Wolves and Dogs

By now it is generally accepted, although not entirely proven, that dogs evolved from wolves.

The best current theory is that, rather than humans stealing wolf pups and feeding and raising them at their campfires, some subset of wolves domesticated themselves. In this theory, the hunting pack was supposedly attracted to the edible scraps found in the humans’ kitchen middens—waste piles where hunter-gatherer groups tossed their old bones, discarded skins, and other refuse. Because the humans came randomly and often to dispose of these wastes, the wolves could not avoid contact with this large, strange, and unpredictable species. Over time, the wolves which demonstrated the most tolerance of human presence got the best and the freshest scraps. Fearful or hostile wolves kept their distance and got less of the good stuff.

This theory dovetails nicely with the work of Russian biologist Dmitri Belyaev, who bred foxes for tameness. Working sixty years ago, this dissident from Soviet biology began studying foxes in order to disprove Lysenkoism—the Lamarckian theories of Trofim Lysenko, who said traits acquired in life could be passed along to later generations. Stalin loved Lysenko’s ideas, because they proved that the Soviet state could, with sufficient force and enough reeducation camps, create a new “Soviet man,” whose selfless passivity and obedience to the Party would breed true and ensure Communist dominance into the future. Belyaev’s foxes—animals of the same family, Canidae, as dogs but not the same genus—gradually changed their physical appearance as well as their behavior. Through hormonal mutations associated with their tolerance of humans, the foxes over generations developed shorter snouts, rounder heads, and changes in coloring—among the same set of features that differentiates dogs from wolves.

How is this not evidence of Lysenkoism? Because in both cases—the wolves at the kitchen middens, Belyaev with his caged foxes—the changes depended on selective breeding for certain qualities. Both cases depended on various traits—fearfulness and hostility, or their lack, along with the associated neurochemical and hormonal differences—existing in the animal at birth. In the beginning of each transformation, these traits existed as random genetic mutations; in later generations, they were selected and reinforced through breeding. Wolves that could tolerate the human presence ate better and were more successful in mating; wolves that feared or avoided human contact either died out or returned to the forest. Belyaev’s fox pups that could tolerate being handled and liked being played with were allowed to breed at maturity; pups that snarled and snapped like wild animals were discarded from the experiment. Whether the selection is a natural circumstance of the environment around the midden or an intentional choice by a human breeder, the result was the same. A gradient of selection—a test for survival traits—was imposed on the breeding group, and the preferred traits were passed along to succeeding generations.

Every farmer does this, and it’s been going on since human beings first stopped roaming after the wild herds and settled on the land. We find a type of berry we like, plant it separately, control its pollination, and turn it into a brilliant red tomato—or a coffee bean with a particular flavor, or a luscious strawberry, or a conscious hybrid like the loganberry. We find a type of grass whose seeds are palatable and turn it into wheat—or corn, or any other type of grain. We find boars with the tenderest meats or wild horses with the strongest backs and turn them into farm animals. And dog breeders, like my aunt, find poodles with the best combination of form, disposition, and coloring and breed them to create a line of miniature and toy dogs that are exact replicas of their larger cousins. Other breeders find dogs that are attentive to human desires as well as quick and clever with sheep and turn them into herders and heelers. We’ve been doing this for ten thousand years.

In most cases the original specimen remains, for the rest of us, obscure. The original and unattended tomato plant has either died out for lack of habitat or hides in a forest glade somewhere, unrecognizable to passing hikers. The original boar might lurk in the forest and become the target of occasional hunting parties, but for the most part the production of pork for barbecue ribs and savory sausages remains hidden from the average customer’s attention.1 Wild horses still exist in the American southwest, but they are only the feral descendants of domestic horses brought to this continent by Spanish explorers. The original, prototypical horse—Przewalski’s wild horse of the Central Asian steppes—had once almost gone extinct and has since been preserved only as a curiosity.

In these cases, the average person has no emotional attachments, either to the farmed pig or the feral boar. But in the case of wolves and dogs we have both attachments and opinions.

Wolves exist in the public imagination as noble creatures. They are bound to the pack, loyal to their mates, fierce in their hunting, sleek in appearance, and bold in their status as predators. Although wolves might be the subject of childish fears born out of fairytales and horror novels, for most people they are the emblem of everything that is implacably wild and free—and true to itself. The wolf has its own nature.2

Dogs exist in our homes as loving companions. They are biddable, fawning, loyal to our family, suspicious of strangers, and gentle with our children. Many people sleep in the same bed with their dogs. The average dog, with its rounded head, floppy ears, and wagging tail, is now more our court jester and emotional pillow than our guardian and defender. Yes, a large dog can be trained to become fierce and unfriendly, but they do so only in response to human bidding. Their nature is to trust and depend. The dog has the character we give it.

For many people, the transformation from wolf to dog is a travesty, if not a tragedy. We—or our table scraps—have created something unnatural, in defiance of nature. We have taken an animal that was once self-sufficient and uncompromising and turned it into a beggar and a clown. But the wolf of our imagination would make a poor playmate for our children, have no interest in defending our homes, and would not sleep in our beds or even doze in our strange and dangerous presence.

For others, the wolf in the wild is a menace to livestock, a danger to house pets and babies, and at the very least an unpredictable presence around ranches and farms. There are still people who will shoot a wolf on sight, even while environmentalists are trying to restock and encourage them in habitats where they once roamed. The wolf is an apex predator in an environment that offers ever fewer prey animals and so has become a nuisance.

None of these considerations, of course, is of any concern to either the wolf or the dog. Each animal performs in its environment and reacts to stimuli exactly as its genes were selected to do. Each is fitted for survival under the circumstances in which it finds itself. And this is perfectly natural.

Wolves and dogs are both still fresh in the human consciousness and imagination. They are a reminder that our species has changed the natural world in ways that we believe are both good and bad. We bend species to our will. We change forest and field into plantation and farmland. We occupy so much of the land and use so much of the rivers flowing across it that, in many areas, “nature” is a thing that must be preserved behind a fence.3

Which is preferable—wolf or dog? That depends, like so much else, on your viewpoint and your purpose. I for one am glad that the distinction exists. I am proud that we have had a hand in engineering a companion who can remind us to be kind to creatures that are different from and less capable than ourselves. And I am pleased that we can still value the wolves of our imagination while petting the dog that stands at our side.

1. The visceral distinction between husbanded animals in production on the farm and their prepared flesh on the customer’s fork is linguistically preserved in English as a relic of the Norman Conquest. The words “pig” and “cow,” from Old English, are retained for livestock, while the words “pork” and “beef,” from Old French, are kept for meats in the kitchen and on the table. This verbal distinction came about when the Anglo-Saxon field hand still worked to raise the food for his Norman overlord.

2. We have friends who once had a pet dog that had been bred from a line intended for pulling sleds in races like the Iditarod. Its genetic mix was part Siberian husky and part wolf—the latter added for endurance in long races. This particular animal was devoted and loyal to its humans, but it did maintain a certain aloofness and dignity which the owners attributed to its wolf nature. For example, the dog would enjoy cuddling and being petted but would not allow you to touch it with your feet. The animal seemed to sense that feet were different from hands and represented an indignity.

3. However, the people who think we’re “destroying the planet” need to get out of their apartments in Berkeley. Great tracts on this continent—and on most of the others—still manage themselves pretty well under natural conditions. Humans, for the most part, live clustered along the coasts and in the river valleys. We are still thin on the ground over much of the Earth.

Sunday, May 7, 2017

On Safety Measures

Human nature seems to have a built-in equilibrium system. We are very good at evaluating situations and systems and then invoking a sort of psychological compensator.

Take, for example, the latest models of full-featured cars. They all come with advanced safety options like lane-departure warnings, backup cameras, adaptive cruise control—which matches the car’s speed to that of the car ahead and can even bring it to a full stop—and other technology that helps the human driver guide and control the car. These vehicles are halfway between the old-fashioned automobile with just human hand-and-foot controls and a clear glass windshield, and the prototype self-driving cars that can follow the road and make complex decisions without human assistance.1

I predict that, rather than make us all safer, these devices—especially lane departure and collision avoidance—will make the person behind the steering wheel less attentive to driving. Many drivers will now think that it’s okay to text or chat on the phone, even with strict laws against these practices, because the car will warn them if anything really important is happening. That’s the psychological compensator in action.

Of course, people still text and chat without any such assistance. When I was doing the long commute down to the biotech company each day, for a while I rode in the company-sponsored vanpool. Sitting high above traffic, we could look down into the cockpit of the cars moving alongside the van. I would routinely see a woman applying eye makeup using the rearview mirror, a man shaving, or someone reading a book propped against the steering wheel. The trouble was, they were all doing sixty miles an hour in dense traffic. But because the road was fairly straight and all the cars around them seemed to be holding their position, these drivers thought it was safe to keep just part of their attention—the eye not getting the makeup, or an occasional glance at the road every other sentence—on the business of driving. With automatic helpers like side cameras and front-mounted radar, these drivers will spend even more of their time on personal business.

In the same way, I’m pretty sure that people who have robotic vacuum cleaners—those flat disk-things that prowl around the room sweeping up crumbs and pet hair—are less conscious about picking up after themselves. And now that every processed food is designed for the microwave, and people are already less conscientious about preparing their own meals, many are careless about reading and following the instructions on the box or can. Just put it in and “nuke it” for a minute and thirty seconds on full power. See how that turns out and repeat as necessary.

People are not necessarily scofflaws or careless. They just have an evolved sense of how to read a situation and judge their own safety and efficiency. Sometimes this sense is faulty—as we can see from those humorous “fail” video clips on Facebook, showing young men attempting parkour jumps from a second-story roof onto a dumpster, or riding a motorcycle at speed up a plank into the bed of a pickup truck. But for the most part people look at their life situation with some precision, and they weigh their expenditures of attention, energy, and time against what they see.

For example, we live in a high-rise condominium with a three-level garage that has one entry point at the top of the structure—the garage is built into a hillside—and an exit on each of the lower floors. Everyone parking there must go some distance around the garage to get to their stall, and many people with stalls on the bottom floor must make two full circuits. Crosswalks are well marked but sometimes blind, and each walkway is protected with a stop sign for oncoming vehicles. Still, most people travel through the garage at about fifteen to twenty miles an hour. Given the available sight lines, that feels like a safe speed. Some people travel thirty miles an hour or more, and that’s just too fast if others are backing out of or walking away from their stalls. And yet the condo association has rules limiting the garage speed to five miles per hour and has posted this limit on every other pillar in the structure.

Five miles per hour is just a fraction above human walking speed. Given the distances involved, that means most people will take ten to twenty minutes to drive from the entrance to their stall, or from their stall to an exit.2 Most people don’t factor that kind of delay into any of the trips for which they want to use their car: starting on the morning commute, getting back in the evening, or just dashing over to the shopping center. Nobody in our garage drives five miles an hour. They don’t have to, because their internal sensor says that three to four times that speed is still safe. There’s no cop around to ticket them—just occasional newsletter blasts from the homeowners association reminding us of the speed limit. And yet, at those higher speeds, massive carnage does not take place.

In the broader society, we have town councils, state lawmakers,3 and federal legislators passing all kinds of rules and regulations designed to make people safer. For example, the State of California posts sixty-five miles per hour as the freeway speed in most areas, only going up to seventy on certain long-distance freeways out in the countryside, like Interstate 5, where the sight lines stretch for miles. I can tell you from experience that if you’re not doing seventy-five or eighty on the road, you’ll get run over. And I have had Highway Patrol cruisers careen past me—sans lights, sirens, or any other sign of authority in a hurry—as if I was blocking their lane. When all the fish in the stream are breaking the speed limit by ten to fifteen miles an hour, I guess you save your tickets for the ones doing ninety or a hundred and weaving in and out of lanes.

The city might put a stop sign at every corner and a traffic light every other block, and people would still roll through if they could see that nothing’s coming at them for a quarter-mile. More safety measures, especially those that fly in the face of a human being’s internal evaluation of the situation, don’t make us safer. They just make us feel guilty—well, mildly—as we go about our business.

When cars are truly self-driving, like little personal buses, then we won’t even bother to look out the window or close the door when we get in or out.

1. See The Future of Self-Driving Cars from March 12, 2017.

2. I’ve walked through the garage—and I’m a fast walker. It takes time to make a full circuit.

3. Don’t get me started on the Proposition 65 warnings about “chemicals known to the State of California to cause cancer and reproductive harm.” They are posted on virtually every building and enclosed structure because, hey, modern life is full of chemicals. Who among us pauses in thought and then decides to stay outside?

Sunday, April 30, 2017

On Sincerity

When I was at the university in the late 1960s, the campus revolution was just getting under way. Although it was mainly fueled by anti–Vietnam War protests, the student demands spread in all directions, calling for a virtual redefinition of the university structure and of society itself. In coursework, the new demand was for “relevance”—meaning politicized teaching of the correct sort—and started the movement to dump the collected works of William Shakespeare for the collected works of Eldridge Cleaver. In addition to “relevance,” the other spiritual demand of the time was for “sincerity.” This would be in preference, I suppose, to blatant hypocrisy.

One of my former philosophy professors, when questioned in a student-led colloquium, stated: “Sincerity is a trivial virtue.” I knew immediately what he meant: many other human virtues are far more important. I would prefer a person who keeps promises, pays debts, abides by contracts, performs acts of kindness and public service, takes care of family members and friends, treats other people with respect, smiles politely, and otherwise behaves in concrete ways. Whether the person “really means it” or is “faking it” is far less important to me.

This preference is, of course, from my own point of view. If a person says “Thank you” when I give them something or perform for them some small service, it makes me feel good. Whether or not the person actually means it, or feels truly grateful—that is, is sincere about this minor politeness—does not matter to me. Surely, if the person is grimacing, making faces, rolling their eyes, or using a sarcastic tone, to imply that no thanks are actually involved, then I know that the words are not meant sincerely. But my hurt comes not because of their lack of sincerity, but because of the implied mockery, as if my small action was really beneath their notice, or not kind and helpful at all, and thus deserving of their scorn. Otherwise, if a person says “Thank you,” even if it’s a murmur and there is no eye contact or other sign of heartfelt emotion, I can accept this as an empty politeness from an obviously well-trained, civilized individual.

Politeness is the verbal grease that keeps us descendants of howler monkeys from screaming in rage and trying to kill each other.

From the speaker’s point of view, the sincerity—or lack of it—in the exchange is a small measure of the state of that person’s soul. The saint or the deeply feeling person who says “Thank you” with sincere gratitude, virtually blessing my small gift or act of service with reciprocated good wishes, is expressing their own feeling of being at peace with the universe and gladness upon recognizing and being recognized by a fellow human. The hurried person who murmurs “Thanks” out of pure reflex, the ingrained habit of good breeding, and is unconscious of any felt gratitude, is at least practicing that verbal grease which keeps us all functioning. And the snarky person who sneers and rolls their eyes, to load that “Thanks” with double meaning, is spreading their own bile and cynicism, fouling the gears of civil discourse. That little bit of intentional meanness is hurting them, corroding their soul, much more than the momentary confusion and pain it might cause me.

In human interactions, the measure of sincerity is much like the Turing test for artificial intelligence.1 If you cannot tell whether the person is being sincere or not, it doesn’t matter who’s typing on the other side of the wall. You accept the person’s statements or intentions at face value and move on.

A society that valued sincerity as a primary virtue would be far different from our own.

Yes, I know the intent of those early student demands. By rooting out hypocrisy—the evil of paying lip service to popular principles but then regarding oneself as free to act in accordance with private intentions—the promoter of sincerity hopes to bring those hidden intentions to the surface. When you demand that people act sincerely, you expose falsehood and can then hope to enforce proper action. People who can say one thing and do another would be revealed as perpetrating a hoax on the society around them.

But the purpose of the exercise will backfire. A society of people who are forced by cardinal values to always say and do what they mean and what they are feeling at the moment will be a harsh and abrasive society. “Gee, Grandpa, that’s a measly five dollars you put in my birthday card.” “No, lady, you don’t get any ‘thank you,’ because it’s your job to pour my coffee.” “You men all think I can’t open a door for myself, you bastards!”

And we have before us the example of the most strongly politicized societies—which are usually the goal of those who would most earnestly promote the virtue of sincerity—as hotbeds of rampant insincerity. There, people will loudly proclaim the party line, sing the party songs, and march in lockstep with the party cadence, while secretly loathing the party and all its purposes. And the more the party demands of them proper feelings of allegiance and respect, the greater becomes their popular hypocrisy—but always well hidden, driven underground. People are just ornery that way.

No, people still own the real estate inside their heads. They need the space, the personal freedom, of being able to think one way and act another. They need to smile when they are tearful, to force a polite response when they want to scream at you, to turn away with a murmured courtesy rather than engage and share their deepest thoughts. Humans have always been a bi-level species. We have always used meaningless courtesies to smooth over differences between individuals that would otherwise have us howling all the time. Similarly, we use formalized, ritual diplomacy to moderate relations between nations that would otherwise have us always on the brink of war. Hypocrisy and insincerity let us pick and choose our battles. They allow us to live.

So yes, while we would like to think that our friends and family are always sincere in their expressions of love, gratitude, and contentment, that the barista at Starbucks is doing us a favor that deserves our thanks, and that corporate executives mean in their hearts every word they put in a press release—it is not always so. And our indulging the small hypocrisy of not really noticing—that, too, is part of the social grease that makes life tolerable.

1. Computer pioneer Alan Turing in 1950 proposed that, to test a computer for artificial intelligence, we station a person on one side of a wall and have them communicate with a respondent on the other side through typewritten messages. The first person does not know if the other is a real human being or a very fast and well-programmed computer. If the person on this side cannot tell after about five minutes whether the respondent is human or not, then the responding machine is artificially intelligent. This test has since been superseded by others that measure specific outputs and performance in more dimensions than simple chat, because mindless, confabulating language processors, like ELIZA in the mid-1960s, easily passed the Turing test but were hardly intelligent.

Sunday, April 23, 2017

On Teaching Writing

I can’t teach another person how to write. I don’t think anyone can. This is not to disparage those who teach writing courses and run workshops and retreats for beginning writers. Some of the basic skills are necessary and teachable. New writers also believe that the art includes structures they need to learn and professional secrets that can be taught. And I am not ungrateful to the excellent English composition teachers I had in high school and college who did teach me a thing or three. Finally, every new writer needs someone in a position to know something about writing who will read their work and give them both positive and negative feedback, because that builds confidence. But the art itself, the essence of writing—that can’t be taught, because it grows from inside a person.

Every new writer needs to know the basics. Becoming a writer is impossible without knowing the language in which you will write. For English, and for most other Indo-European languages, that means understanding its grammar, the parts of speech, verb tenses, slippery concepts like mood and, in English, the subjunctive, as well as sentence structure and diagramming, the rules and the malleability of syntax, a focus on words with vocabulary and spelling drills, the difference between a word’s denotations and its connotations, and on and on. It also helps immensely to study other languages that have contributed to your native tongue—as I did with French, Latin, and Greek—as well as one or more parallel languages—as I did with Russian and Portuguese—in order to recognize cognates and word borrowings, and to puzzle out the meaning of new words and gain a taste for their flavor.

The art of writing also has some broader structures that the novice can learn by rote. In the nonfiction world, the writer can study the organization of an argument: going from specific to general, or general to specific, and the logical fallacies that invalidate an argument in the eyes of any educated reader. A journalist learns the inverted pyramid structure, where the most important facts of the news story—the Five W’s of who, what, where, when, and why or how—occupy the lead, while other details and analysis necessarily follow. An essayist learns to find a hook in everyday experience—such as a common question or problem—that will draw the reader into the thread of the argument. A technical writer learns to break down processes into discrete steps and to address each one, with all its variables, separately and usually in chronological order.

In the fiction world, there are fewer formal structures to observe. Short stories are usually more compressed in time and space, and involve fewer characters, than novels.1 Most stories of any length are structured around some kind of loss, struggle, journey, or other contrivance that serves to keep the action moving forward. Most of them arrive at some kind of climax, where all the characters, plot lines, and problems come together and are addressed or resolved. And most also have a denouement, which tidies up any last unresolved plots and suggests how the characters will spend the rest of their lives. A playwright learns how to frame a story into separate acts, and how to move the action toward and break it—or appear to resolve at least some of it—at the end of each one. A playwright also learns to convey character and action through dialogue, where excursive histories and graphic action are not possible on a closed stage. A screenwriter must also learn a more formal writing structure, which sets different margins for action and dialogue, so that one page of script roughly equals one minute of screen time, and which puts certain words associated with sound or visual cues in all caps, so that they will be noticed and given proper treatment in production. To be successful, the screenwriter also must obey current conventions about act structure and timing.

But aside from these generalities, the art of writing is something that either takes you into its confidence—or it doesn’t. Your mindset, daylight dreams, life experiences, and past reading either prepare you to tell stories, paint with words, and sing cantos to your readers—or they don’t.

Two years ago, I decided to make a formal study of music because, while I have always loved listening to music both popular and classical, my knowledge of it remained rudimentary. I wanted to be able to make music as well.2 I also wanted to keep my brain active and alive as I enter my later years, and learning new skills and tackling new challenges seemed like a good idea. I began taking lessons on the keyboard—generic for pianos, organs, and synthesizers—and bought myself a Hammond drawbar organ with two keyboards, presets, vibrato and chorus, the Leslie function—don’t ask—and a set of pedals.3 I chose to learn the keyboard because it would teach me about chords and voicings in the way a single-note instrument like the trombone could not, and it had a fixed sound the way a stringed instrument—which needs to be constantly tuned—does not.

My teacher, who is a lifelong musician himself, had me learn scales and taught me the Circle of Fifths, which unlocked for me the structure of music. I already had some sense of how notes are represented on the staff, what the timing intervals are, and other parts of musical notation. I now learned how chords are structured, with all their variations, as well as chord progressions, which appeal to the ear and are the basis for most popular songs. This was like learning grammar and sentence structure as a prelude to writing.

My teacher has also taught me to differentiate harmony from melody, how to break down a new piece of music into its chords and their roots, to play them solo at first, and only then to work on the melody. This prepares me both to play the song and to accompany a singer or other band members. I also learned to blend the two functions of harmony and melody through voice leading. I learned to keep time in the bass and to do bass walks—although my timing is still faulty. I am now learning blues scales and their progressions. This is all like learning the various structures of nonfiction writing formats, or the differences between a short story and a play.

But … after two years, I am still at the stage of analyzing and interpreting, of working the thing out intellectually rather than emotionally. I am working my left hand to form chords or walk the bass, my right hand to play melody or voice lead, but the two are not yet coming together. I approach each new song as an exercise in deconstruction. A song is an intellectual challenge, not an act of personal expression. I can make music, but I don’t yet make it sing.

This is the essence of art. In music, you can learn notes, scales, chords, and progressions—but something inside you must open up and sing. In writing, you can learn grammar, vocabulary, and rudimentary structures—but something inside you must catch fire with story. A teacher cannot tell you how to light that fire. Oh, he or she can light matches and flip them at your head. A teacher can spot the small sparks that sometimes appear in your writing, point them out to you, praise them, and try to nurture them. But if the student’s mind is—figuratively—damp wood, then nothing will catch.

Armed with the basics of language and structure, any writer still must eventually teach him- or herself how to make a story come alive. In part, we do this by reading widely and observing what other writers have done and how they do it. For this, I love to study how a tale is formed in a short story or novel and then remade into a movie. Between one form and the other, the essence of storytelling emerges, stripped away and rebuilt, like a butterfly from a caterpillar’s chrysalis. As writers, we can also ask questions of the stories and novels we read: How did the author do that? What was he or she trying to achieve? Why does this feel right (or wrong)? We also absorb, as if by osmosis, what constitutes good taste in storytelling and what leaves a dull thud. And, of course, we learn by doing, by trying out new approaches, by seeing what works for us in our own personal style, how to create and move characters, by alighting on the forms and structures that work, by discarding the techniques and tropes that seem awkward and excessive.4

Writers learn by writing and adapting stories to their own personal taste and style, just as musicians learn by playing and adapting songs to the rhythms and harmonies that personally move them.

Ultimately, anyone with a sense for language and logic can learn to write an adequate newspaper article or a technical manual. These are about facts and only need an awareness of the reader and what he or she wants—or needs—to know in order for the writing to work. But stories involve something more, the dimension of emotions, of aspirations, desires, fears, and disgusts. Storytelling must look inside the writer’s own self and his or her own experiences to find the expression of ideas and emotions that will touch a similar chord in another human mind. Stories are as much about being human as they are a collection of words, imagined conversations, described actions, and resolved plot lines.

This is why I believe machine intelligences may one day write adequate newspaper articles and technical manuals, but they will never excel at writing fiction. Not, that is, until machines become so complex and involuted themselves that their programs resemble the human mind. Humans live in a shadow world: part of our daily life revolves around the facts and consequences we glean from the external world, and part lies in the interpretations we place upon them. And these interpretations, our attractions, aversions, and distractions—the push and pull of hot and cold feelings, toward and away from various thoughts and objects—are shaped by all of our mind’s unexpressed desires, hidden agendas, disguised hatreds, and other emotional influences that lie buried in the subconscious and come out only in random thoughts and in our dreams.

If the writer of fiction does not touch this subconscious level,5 then the story will remain a mechanical exercise, a study of forms. It may involve extremely well-crafted characters carved from clues written on bits of paper drawn out of a hat. It may involve an intricate plot filled with actions traced on an Etch-a-Sketch. But it won’t make the reader glow with recognition, or identify with the situation, or even much care. That kind of story comes from within, and it’s nothing that one human being can teach another how to create.

1. Some fiction classes might also go into technical details that—for me—are esoteric to the point of disappearing into the aether, such as the differences between the novelette and novella and the novel. Aside from placing limits on length, I don’t know how these forms differ, other than being longer and more complex than a short story. How long should any piece of fiction be? Long enough to tell the story and satisfy the reader. Any other consideration is the business of publishers, such as how much paper and ink they will need to buy.

2. When I was in fourth grade, I started playing the trombone, along with dozens of other students who were being introduced to the school band with their own instruments. I could read music to the extent of linking notes on the staff with positions on the slide. I understood the timing of notes and tempo. I grasped that flats were a half step down, while sharps were a half step up. But I never understood the structure of our Western, twelve-tone music which makes the white and black keys of a piano a physical necessity. I never associated all those sharps and flats written on the staff at the start of a piece of music with anything other than the composer’s prefacing remarks; so I did not understand how they changed the playing of notes that appeared halfway down the page. The idea that music had different scales and keys, and that they were all part of a greater structure, was never fully explained to me. Naturally, I was terrible at playing the trombone.
       Nevertheless, my hunger to make music persisted through the years. I tried at various times to teach myself the violin, the guitar, the Chapman Stick®, and the French horn—all without success. Then, two years ago, I finally got serious and began studying music as a thing in itself, and I started taking lessons on the keyboard.

2. When I was in fourth grade, I started playing the trombone, along with dozens of other students who were being introduced to the school band with their own instruments. I could read music to the extent of linking notes on the staff with positions on the slide. I understood the timing of notes and tempo. I grasped that flats were a half step down, while sharps were a half step up. But I never understood the structure of our Western, twelve-tone music which makes the white and black keys of a piano a physical necessity. I never associated all those sharps and flats written on the staff at the start of a piece of music with anything other than the composer’s prefacing remarks; so I did not understand how they changed the playing of notes that appeared halfway down the page. The idea that music had different scales and keys, and that they were all part of a greater structure, was never fully explained to me. Naturally, I was terrible at playing the trombone.
3. When I was growing up, my Dad bought a Hammond organ with rotary tone wheels and learned to play it. I never actually played the thing, but I did goof around on it, fiddled with the drawbars, and listened as the various pipe lengths and their voices came together to make a sound. So this instrument was more familiar to me than a piano and less bizarre than a synthesizer.

4. Early in my writing career, I heard an interview with Marilyn Durham about her novel The Man Who Loved Cat Dancing, where she mentioned the problem of getting a character to walk through a door. This is harder than it sounds, because any writer grapples—as I was grappling at the time—with how much to show and tell in every action. Do you describe the door? Do you show the character twisting the doorknob? Do you use the sweep of the opening door to describe the room inside? My epiphany then and my practice ever since is that, unless the door is important and hides something tantalizing, ignore it. Start the action inside the room. Doors are for playwrights who have to get their characters onto a stage.

5. See Working With the Subconscious from September 30, 2012.