Sunday, April 23, 2017

On Teaching Writing

I can’t teach another person how to write. I don’t think anyone can. This is not to disparage those who teach writing courses and run workshops and retreats for beginning writers. Some of the basic skills are necessary and teachable. New writers also believe that the art includes structures they need to learn and professional secrets that can be taught. And I am not ungrateful to the excellent English composition teachers I had in high school and college who did teach me a thing or three. Finally, every new writer needs someone in a position to know something about writing who will read their work and give them both positive and negative feedback, because that builds confidence. But the art itself, the essence of writing—that can’t be taught, because it grows from inside a person.

Every new writer needs to know the basics. Becoming a writer is impossible without knowing the language in which you will write. For English, and for most other Indo-European languages, that means understanding its grammar, the parts of speech, verb tenses, slippery concepts like mood and, in English, the subjunctive, as well as sentence structure and diagramming, the rules and the malleability of syntax, a focus on words with vocabulary and spelling drills, the difference between a word’s denotations and its connotations, and on and on. It also helps immensely to study other languages that have contributed to your native tongue—as I did with French, Latin, and Greek—as well as one or more parallel languages—as I did with Russian and Portuguese—in order to recognize cognates and word borrowings, and to puzzle out the meaning of new words and gain a taste for their flavor.

The art of writing also has some broader structures that the novice can learn by rote. In the nonfiction world, the writer can study the organization of an argument: going from specific to general, or general to specific, and the logical fallacies that invalidate an argument in the eyes of any educated reader. A journalist learns the inverted pyramid structure, where the most important facts of the news story—the Five W’s of who, what, where, when, and why or how—occupy the lead, while other details and analysis necessarily follow. An essayist learns to find a hook in everyday experience—such as a common question or problem—that will draw the reader into the thread of the argument. A technical writer learns to break down processes into discrete steps and to address each one, with all its variables, separately and usually in chronological order.

In the fiction world, there are fewer formal structures to observe. Short stories are usually more compressed in time and space, and involve fewer characters, than novels.1 Most stories of any length are structured around some kind of loss, struggle, journey, or other contrivance that serves to keep the action moving forward. Most of them arrive at some kind of climax, where all the characters, plot lines, and problems come together and are addressed or resolved. And most also have a denouement, which tidies up any last unresolved plots and suggests how the characters will spend the rest of their lives. A playwright learns how to frame a story into separate acts, and how to build the action toward a break—or an apparent resolution of at least part of it—at the end of each one. A playwright also learns to convey character and action through dialogue, since excursive histories and graphic action are not possible on a closed stage. A screenwriter must also learn a more formal writing structure, which involves setting separate margins for action and dialogue, so that one page of script roughly equals one minute of screen time, and putting certain words associated with sound or visual cues in all caps, so that they will be noticed and given proper treatment in production. To be successful, the screenwriter also must obey current conventions about act structure and timing.

But aside from these generalities, the art of writing is something that either takes you into its confidence—or it doesn’t. Your mindset, daylight dreams, life experiences, and past reading either prepare you to tell stories, paint with words, and sing cantos to your readers—or they don’t.

Two years ago, I decided to make a formal study of music because, while I have always loved listening to music both popular and classical, my knowledge of it remained rudimentary. I wanted to be able to make music as well.2 I also want to keep my brain active and alive as I enter my later years, and learning new skills and tackling new challenges seem like a good idea. I began taking lessons on the keyboard—generic for pianos, organs, and synthesizers—and bought myself a Hammond drawbar organ with two keyboards, presets, vibrato and chorus, the Leslie function—don’t ask—and a set of pedals.3 I chose to learn the keyboard because it would teach me about chords and voicings in the way a single-note instrument like the trombone could not, and it had a fixed sound the way a stringed instrument—which needs to be constantly tuned—does not.

My teacher, who is a lifelong musician himself, had me learn scales and taught me the Circle of Fifths, which unlocked for me the structure of music. I already had some sense of how notes are represented on the staff, what the timing intervals are, and other parts of musical notation. I now learned how chords are structured, with all their variations, as well as chord progressions, which appeal to the ear and are the basis for most popular songs. This was like learning grammar and sentence structure as a prelude to writing.
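
For anyone who wants to see the machinery, here is a minimal sketch of my own (not my teacher’s lesson plan, and assuming standard twelve-tone equal temperament): stepping up by a perfect fifth, seven semitones at a time, visits all twelve pitch classes exactly once before coming home, which is why the Circle of Fifths lays out every key in order.

```python
# Illustrative sketch only: generate the Circle of Fifths in
# twelve-tone equal temperament by stepping up 7 semitones at a time.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def circle_of_fifths(start="C"):
    idx = NOTES.index(start)
    circle = []
    for _ in range(12):            # 12 steps of a fifth cover all 12 pitch classes
        circle.append(NOTES[idx])
        idx = (idx + 7) % 12       # a perfect fifth spans 7 semitones
    return circle

print(" -> ".join(circle_of_fifths()))
# C -> G -> D -> A -> E -> B -> F# -> Db -> Ab -> Eb -> Bb -> F
```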

My teacher has also taught me to differentiate harmony from melody, how to break down a new piece of music into its chords and their roots, to play them solo at first, and only then to work on the melody. This prepares me both to play the song and to accompany a singer or other band members. I also learned to blend the two functions of harmony and melody through voice leading. I learned to keep time in the bass and to do bass walks—although my timing is still faulty. I am now learning blues scales and their progressions. This is all like learning the various structures of nonfiction writing formats, or the differences between a short story and a play.

But … after two years, I am still at the stage of analyzing and interpreting, of working the thing out intellectually rather than emotionally. I am working my left hand to form chords or walk the bass, my right hand to play melody or voice lead, but the two are not yet coming together. I approach each new song as an exercise in deconstruction. A song is an intellectual challenge, not an act of personal expression. I can make music, but I don’t yet make it sing.

This is the essence of art. In music, you can learn notes, scales, chords, and progressions—but something inside you must open up and sing. In writing, you can learn grammar, vocabulary, and rudimentary structures—but something inside you must catch fire with story. A teacher cannot tell you how to light that fire. Oh, he or she can light matches and flip them at your head. A teacher can spot the small sparks that sometimes appear in your writing, point them out to you, praise them, and try to nurture them. But if the student’s mind is—figuratively—damp wood, then nothing will catch.

Armed with the basics of language and structure, any writer still must eventually teach him- or herself how to make a story come alive. In part, we do this by reading widely and observing what other writers have done and how they do it. For this, I love to study how a tale is formed in a short story or novel and then remade into a movie. Between one form and the other, the essence of storytelling emerges, stripped down and rebuilt, like a butterfly from a caterpillar’s chrysalis. As writers, we can also ask questions of the stories and novels we read: How did the author do that? What was he or she trying to achieve? Why does this feel right (or wrong)? We also absorb, as if by osmosis, what constitutes good taste in storytelling and what lands with a dull thud. And, of course, we learn by doing, by trying out new approaches, by seeing what works for us in our own personal style and how to create and move characters, by alighting on the forms and structures that work, by discarding the techniques and tropes that seem awkward and excessive.4

Writers learn by writing and adapting stories to their own personal taste and style, just as musicians learn by playing and adapting songs to the rhythms and harmonies that personally move them.

Ultimately, anyone with a sense for language and logic can learn to write an adequate newspaper article or a technical manual. These are about facts and only need an awareness of the reader and what he or she wants—or needs—to know in order for the writing to work. But stories involve something more, the dimension of emotions, of aspirations, desires, fears, and disgusts. Storytelling must look inside the writer’s own self and his or her own experiences to find the expression of ideas and emotions that will touch a similar chord in another human mind. Stories are as much about being human as they are a collection of words, imagined conversations, described actions, and resolved plot lines.

This is why I believe machine intelligences may one day write adequate newspaper articles and technical manuals, but they will never excel at writing fiction. Not, that is, until machines become so complex and involuted themselves that their programs resemble the human mind. Humans live in a shadow world: part of our daily life revolves around the facts and consequences we glean from the external world, and part lies in the interpretations we place upon them. And these interpretations, our attractions, aversions, and distractions—the push and pull of hot and cold feelings, towards and away from various thoughts and objects—are shaped by all of our mind’s unexpressed desires, hidden agendas, disguised hatreds, and other emotional influences that lie buried in the subconscious and only come out in random thoughts and in our dreams.

If the writer of fiction does not touch this subconscious level,5 then the story will remain a mechanical exercise, a study of forms. It may involve extremely well-crafted characters carved from clues written on bits of paper drawn out of a hat. It may involve an intricate plot filled with actions traced on an Etch-a-Sketch. But it won’t make the reader glow with recognition, or identify with the situation, or even much care. That kind of story comes from within, and it’s nothing that one human being can teach another how to create.

1. Some fiction classes might also go into technical details that—for me—are esoteric to the point of disappearing into the aether, such as the differences between the novelette and novella and the novel. Aside from placing limits on length, I don’t know how these forms differ, other than being longer and more complex than a short story. How long should any piece of fiction be? Long enough to tell the story and satisfy the reader. Any other consideration is the business of publishers, such as how much paper and ink they will need to buy.

2. When I was in fourth grade, I started playing the trombone, along with dozens of other students who were being introduced to the school band with their own instruments. I could read music to the extent of linking notes on the staff with positions on the slide. I understood the timing of notes and tempo. I grasped that flats were a half-tone down, while sharps were a half-tone up. But I never understood the structure of our Western, twelve-tone music which makes the white and black keys of a piano a physical necessity. I never associated all those sharps and flats written on the staff at the start of a piece of music with anything other than the composer’s prefacing remarks; so I did not understand how they changed the playing of notes that appeared halfway down the page. The idea that music had different scales and keys, and that they were all part of a greater structure, was never fully explained to me. Naturally, I was terrible at playing the trombone.
       Nevertheless, my hunger to make music persisted through the years. I tried at various times to teach myself the violin, the guitar, the Chapman Stick®, and the French horn—all without success. Then, two years ago, I finally got serious and began studying music as a thing in itself, and I started taking lessons on the keyboard.

3. When I was growing up, my Dad bought a Hammond organ with rotary tone wheels and learned to play it. I never actually played the thing, but I did goof around on it, fiddled with the drawbars, and listened as the various pipe lengths and their voices came together to make a sound. So this instrument was more familiar to me than a piano and less bizarre than a synthesizer.

4. Early in my writing career, I heard an interview with Marilyn Durham about her novel The Man Who Loved Cat Dancing, where she mentioned the problem of getting a character to walk through a door. This is harder than it sounds, because any writer grapples—as I was grappling at the time—with how much to show and tell in every action. Do you describe the door? Do you show the character twisting the doorknob? Do you use the sweep of the opening door to describe the room inside? My epiphany then and my practice ever since is that, unless the door is important and hides something tantalizing, ignore it. Start the action inside the room. Doors are for playwrights who have to get their characters onto a stage.

5. See Working With the Subconscious from September 30, 2012.

Sunday, April 16, 2017

Objective, Subjective, and Narrative

Is the world external to ourselves, internal in our minds, or shared with others through the process of narrative? To answer that, we first need to decide what we mean by “the world” or “reality.”

Certainly the brain as an organ of thought can have no direct experience of the world outside the bony skull. It can have no actual sensation of any kind. Even “headaches” are a matter of blood pressure, muscle spasms, and pain in the nerves of the head and neck, not in the cerebral lobes. The only events that the brain can detect directly are cerebral accidents like strokes and concussions, and then most of those events are mediated by consciousness and its loss, or by changes in brain function and awareness, rather than by direct detection and assessment of the damage. Our minds are circuit ghosts that run through a network of neurons, rather than the neurons themselves.

Everything that we think of as reality and the world around us is conveyed to the brain by nerve stimulation. This sometimes occurs directly, as with nerve endings embedded in the skin and other tissues, yielding the senses of touch, taste, and smell. Mostly—and for the senses our minds are most aware of and involved with—the nerves connect with organs that receive signals from and interpret the external environment: the eyes to detect light, eardrums to detect pulses in the surrounding air, the inner ear to detect the pulls of gravity and acceleration, and so on. These organs, along with the sensors in the skin, nose, and tongue, report their findings to the brain all the time.

Here is where the subjective/objective problem starts. The brain can process all these inputs, but the mind—the active awareness, the collective effect of those “circuit ghosts”—cannot pay attention to all of this simultaneous and often conflicting information. The brain of a developing organism automatically sets up hierarchies of stimulation awareness that govern the mind’s picking and choosing among the continuous flow of signals. For example, we react to the sound of our own name even among a jumble of overlapping restaurant chatter or a stream of messages through a public-address system. The loudest noises and the brightest lights and colors get priority in this hierarchy and pass directly to our awareness, because they are usually associated with danger. The brain’s monitoring of the edges of the retina—our peripheral vision—gives priority to movement there over signals that indicate outline or color, because the movement of things around us is also usually associated with danger and more important than their shape or color.

So, even though our brains may receive accurate reporting of signals related to sight, sound, and touch, those signals don’t always reach our awareness for interpretation and action. In this way, the objective data become partially subjective. Of course, much of that external reality is not subject to either mediation or reinterpretation by the brain. Fall out of a window, and you will find that gravity takes over and you hit the pavement whether you think about it or not. In the same way, explosions, collisions, accidents, and intentional violence all have a reality that is not brain-mediated and subject to our awareness.

Human beings have also developed mechanical instruments designed to isolate and mediate signals from the physical world before they reach our senses for interpretation. For example, a spectrometer can pick out the actual wavelength of a beam of light. It can measure and tell a human observer that the photons in what appears to be blue light are oscillating with a wavelength of 440 nanometers, or green light at 530 nm, red light at 700 nm. Even if you have been raised by aliens or by color-blind humans, so that you traditionally call red light “green” and vice versa, the spectrometer can give you an accurate reading of the light as it is defined by the majority of other human beings. In the same way, an electronic tuner can identify the pitch of a note, say Concert A at 440 Hertz, even if you are personally tone-deaf.1
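
To make that objectivity concrete, here is a small arithmetic sketch of my own (not part of the original argument): the same measurement can be reported as a wavelength or, through the relation frequency equals the speed of light divided by wavelength, as a frequency, and either number stands apart from anyone’s perception of “blue” or “green.”

```python
# Convert the wavelengths mentioned above into frequencies.
C_LIGHT = 299_792_458  # speed of light in meters per second

for color, nm in [("blue", 440), ("green", 530), ("red", 700)]:
    freq_thz = C_LIGHT / (nm * 1e-9) / 1e12   # hertz converted to terahertz
    print(f"{color}: {nm} nm is about {freq_thz:.0f} THz")
# blue: 440 nm is about 681 THz
# green: 530 nm is about 566 THz
# red: 700 nm is about 428 THz
```

(The echo between 440-nanometer light and Concert A at 440 Hertz is only a coincidence of units; the light wave itself oscillates hundreds of trillions of times each second.)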

So, even though the brain mediates, selects, and interprets the reality coming in from the sense organs, a measurable and appreciable—that is, objective—reality does exist outside of human awareness. Still, awareness plays a big role in interpreting and providing context for that external reality. Is red light more pleasant to the brain and to the awareness “ghost” that rides inside it than green or blue light? That is a matter of subjective interpretation. Does a red light in a traffic signal mean “stop,” while green means “go,” and yellow means “do you feel lucky”? That, too, is a matter of interpretation based on learned—or, in the case of yellow, badly learned or misremembered—rules governing human activity.

The truth, as I see it, is that reality exists objectively but is interpreted subjectively. And when we go about putting reality into words—as I am doing right now, or when I try to relate an experience or tell a story in one of my novels—then the process becomes even more subjective. The idea, the story, the sense data that my brain recalls and tries to fix in words that other humans can then read, understand, and so vicariously experience: all of it becomes subject to the limits of my vocabulary, the meanings I have learned to attach to certain words, the connotations with which those words resonate in my own private thoughts, and the choices I make among all these variables, on the fly, and in the heat of creation. While a piece of writing—a novel, a poem, or a news report—may become a crystalline distillation of reality for an enthusiastic reader, it is at the same time more and less than an absolutely faithful recording of the originally perceived reality.

In the same way, when a producer, director, cast of actors, and crew of set dressers, camera operators, and computer-graphic artists set about to recreate a beloved bestselling novel or even an original screenplay, they will be creating a new interpretation based on the particular skills, vision, and choices of each member of the production company. It may be, in both the author’s and the audience’s view, a crystalline distillation of the story, but it will still be different from the act of reading the novel or screenplay, or living the reality as it might have occurred in real life. While a novel or screenplay may have had no existence “in real life,” the same acts of selection and interpretation—of words in a book, or of skills and vision in a movie production—apply to the creation of a biography or a biopic about a person who actually lived and had a historically measurable effect on reality. The result in the telling will be different from the reality of living.

So we, as aware human beings, dance back and forth across the line separating objective and subjective all the time. We live in our minds, but our minds also participate in a reality that affects other human beings.

And now a third consideration has come into our lives. Based on the theories of postmodern literature and our evolving social sciences, we must deal with “the narrative.” This is more than the plot and story line of a single novel or movie. This is a shared construct of political and social—if not indeed physical—reality, put together from the writings of influential theorists, reporters, academics, and others who get paid for expressing their opinions. This sociopolitical narrative becomes the basis for new novels and movies, for the selection and interpretation of news stories, for more editorials, blogs, and opinion pieces, and for the crop of memes—those bits of ideological weed that would seem to crystallize one aspect of the larger narrative in one instant of time—that all reverberate in the social, media, and social-media echo chamber of modern society.

For anyone who bathes in the daily waters of politics, economics, opinion, and written and enacted literature—that is, books and movies—the narrative becomes an intermediate reality. Outside the brainbox of the skull, there exist both the objective reality of blue light at 440 nm or Concert A at 440 Hz and the narrative reality in which certain people are automatically good and virtuous, while others are wicked and demonic; certain historical facts are remembered and cherished correctly, while others are dismissed as damned lies; and certain acts or intentions are considered rational and lifesaving, while others are known to be foolish and cruel. And when a human encounters conflict between his or her subjective experience or memory and an externally shared narrative which he or she has accepted, either cognitive dissonance or a personal crisis occurs. When a society encounters conflict between its public narrative and external reality, a political, social, or economic crisis occurs. And sometimes the narrative wins—to the detriment of personal or social existence.

This is not actually a new phenomenon, although we now celebrate this kind of consensus in earlier times as “the narrative” and consider it either the obvious end product of a cohesive society or a mindless form of manufactured groupthink. Every religion since nomadic herders began coming together in their summer pastures has spun its own narrative, its greater vision, in which the lonely circuit ghost inside each brainbox partakes. The Greeks had their narrative of the Trojan War, a fictitious relic of a forgotten age—not unlike the Arthurian narrative of chivalry for the English—which shaped their ideas about what men and women were supposed to feel and do, how the gods were to be honored or mocked, and how the best of human intentions can sometimes go awry. The Roman world was held together largely by its own narrative, played out in the minds of emperors, generals, proconsuls, and tax collectors.

Narrative is strong in some people’s minds. Control of the narrative is a kind of power. And narratives that have the charm of simplicity, the echo of truth, and the ability to enforce popular desires will eventually drive out any narrative that is too complex, difficult to verify, or particular in its voice and vision. Narrative persists in any social setting until a harsh reality intrudes—the political party collapses or becomes irrelevant; the society is invaded or encounters some physical catastrophe like an earthquake or meteor strike; the economy is disrupted by some new technology or goes broke through a lack of production or an excess of debt—and then individuals are left to grope among the pieces to assemble their own, subjective views of reality once more.

Which of these—objective, subjective, or narrative—is the true reality? None of them. All of them. They are the blend of physical, private, and public or social needs, drives, and obligations that guides and motivates every human being. To choose one among them and elevate it to supremacy is to miss the whole point of being human.

1. But some perceptions remain an artifact of the mind. When I was living back East, I would sometimes sense the sky on a summer afternoon under certain conditions of humidity as green rather than blue. This was not the bright green of a laser pointer or fresh grass, but the muted green of pea soup. It may have been an actual color, reflecting the green plants in the fields up into the sky, or from moisture high in the atmosphere scattering sunlight to disperse the 530 nm wavelength more dominantly than 440 nm. But it sure looked green to me.

Sunday, April 9, 2017

No Utopia

A germ has infected the minds of bright people, deep thinkers, and intellectuals in almost every Western society. Its inception dates back almost 2,500 years to Plato and his Republic. The infection resurfaced in the 16th century with Sir Thomas More and his Utopia, and once more, with even greater virulence, in the 19th century with Karl Marx and his writings in The Communist Manifesto and Das Kapital. The essence of these various outbreaks—the DNA of the germ, so to speak—is that humanity can reach a perfect state, a lasting condition of peace and plenty and universal brotherhood, if everyone would just let the brightest people among us order society according to rational, scientific, humanitarian principles.

This notion is in stark contrast with the other two principles of social organization. The first of those principles goes back to the monkey troop: let the strongest, smartest, and (hopefully) wisest people in the tribe, confederation, or nation fight among themselves until the (hopefully) best person wins; then we will all obey him (or rarely, her) as our alpha male (or female), chief, king, emperor, or führer. This principle saw its heyday in the English Wars of the Roses. The second principle is less ancient and was developed in Greece at about the same time that Plato was writing: let people come together, offer and consider arguments, and choose among themselves the best course of action, with the process overseen by a committee of citizen caretakers. With later refinements to address issues like dispersed populations, protection of the rights of the minority, and delegation of the popular will to elected representatives, this principle has come down to us in various democracies and in our own republic.

The main difference between the utopian ideals of Plato, More, and Marx and the other organizing principles of either kingship or democracy lies in the nature of time. A king or queen is supported, followed, and obeyed for only a brief moment in history—even if the king or queen and his or her followers don’t understand this and think accession to the crown is forever. The monarch’s decrees and orders are binding only for a limited period and subject to change with changing underlying conditions. Some decrees may become the basis of popular tradition and so live on for decades or centuries beyond their origin. But a new king with a different appraisal of the political, economic, defensive, or agrarian situation, or new experience of changed conditions, can issue new decrees that point social energies in a different direction. Similarly, the proposals and laws that democratic societies or their democratically elected representatives enact are always of a temporary nature. Some of the society’s core values and principles, such as those bound up in the U.S. Constitution, are intended to endure for the ages. But even these sacred documents are always subject to interpretation and amendment. The essence of either the kingship principle or the democratic principle is that law and the social organization which supports it are flexible, subject to the imagination and understanding of the people being governed, and able to respond to changing political, economic, and other conditions.

But the bright minds of the theorists—Plato, More, Marx, and all those who sail with and believe in them—look for a social order that transcends the changing nature of human understanding and imagination. They aim at the end of politics, economics, military aggression, agricultural variation, technological invention, and other underlying conditions. They expect to initiate the end of history itself. Their social and political views are flexible right up until the new and improved social order that they have devised takes over and operates so perfectly that nothing further can change. When you reach perfection, there’s nothing more to say or do.

Of course, human nature abhors perfection. We are restless creatures, always thinking, always plotting and planning, always sampling and judging, always looking for greener grass and trying to breed redder apples, always wanting more, always itching with a kernel of dissatisfaction. This itch is what motivates the bright minds looking for utopia in the first place. But those minds never stop to consider that restless humans, given an eternity of bliss, will soon want to reject their perfect state and move on to something even newer and better. That is the lesson of the Eden story.

The more hard-core, dirigiste thinkers among those bright minds have already concluded that human nature will ultimately need to be changed in order to fit into their perfect societies. People will have to become more self-sacrificing, more contented, less quarrelsome, more altruistic, less greedy, more … perfect, in order for their new and enduring social order to function. Stalin famously supported the researches of Trofim Lysenko, the agrobiologist who taught—counter to the principles of Mendelian genetics—that acquired traits, the products of nurture, could be inherited by later generations. Lysenko was working with plants and seeds, but Stalin and his acolytes believed that the same technique could be worked on human beings: get them to change their behavior, rethink their own natures, become the new Homo sovieticus—and human nature would be changed for all time to conform with Marxist-Leninist principles.

People do change, of course. While genetic mutations—especially among brain functions—tend to be slow and often self-canceling—especially when they work against social norms—people are always susceptible to new ideas. Religions with their coded values and ethical propositions can sweep across populations much faster than any physical mutation can sweep down the generations. Christianity raises the values of love, reciprocity, and cooperation. Islam raises the values of uniform belief and submission to religious authority. Buddhism raises the values of right thought and action in a hurtful world. And, yes, Marxism-Leninism raises the values of personal selflessness and obedience to political authority.

But these core value propositions are still subject to change. Inspiration and revelation harden into orthodoxy, become challenged by disputation and reformation, and succumb to new and different inspirations and revelations. The one exception to this change process would seem to be the scientific method, which is a form of either anti-revelation or continuing revelation. Arising in the work of 16th and 17th century thinkers like Galileo and Descartes, the method values observation and experimentation as the only support for—or disproof of—conjecture and theory. By its nature, the scientific method is flexible and perpetually adjusts its findings to changes in—or new discoveries about—those underlying conditions.1

We’ve been riding a wave of technological change derived from the scientific method for four centuries now. Observation, hypothesis, experimentation, and analysis have variously given us the steam engine, telephones, radio, television, computers, evolution, and genetics as well as far-reaching advances in our thinking about physics, chemistry, and biology. Human life is qualitatively and quantitatively different from what it was four hundred years ago in those societies that have embraced science as an organizing principle along with the Western tradition of personal liberty and free-market exchange of ideas, goods, and services.

And yes, along with our understanding of biology, chemistry, and physics, our understanding of human psychology and social structures has vastly expanded. We have become the animal that studies itself and thinks about its own future—not just on a personal level, but as a social organism. But we are no closer to finding a “perfect” social structure, because human beings are still descended from irritable, distrusting, independent-minded monkeys rather than docile, cooperative, obedient ants or honeybees. No amount of religious indoctrination, state orthodoxy, or applied lysenkoism will remake the mass of humanity into H. sovieticus.

Get over it, bright minds. In the next hundred or a thousand years, we may reach the stars, transmute the elements, and be served by mechanical intelligences the equal of our own. But we will still be irritable, distrusting, independent-minded creatures, half-angel, half-ape, and always looking for greener grass and redder fruits. And that flexibility of mind, combined with our stubbornness and independence, is what will keep human beings evolving and moving forward when more perfect creatures and more orderly societies have vanished from this Earth.

1. To quote physicist Richard Feynman: “It doesn’t matter how beautiful your theory is; it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” Unfortunately, these days some of our lesser scientists—and their lay followers—seem to think that scientific propositions can be “settled” for all time and somehow made immutable.

Sunday, April 2, 2017

Obvious and Mysterious

In life, out here in the real world, I like things to be obvious. Paths are clear, yes means yes, love is forever, a promise is always kept, and hatred is to the death. In that way, I guess, I am like a child and easily fooled. But the world is complicated enough, nature preserves her mysteries, opinions differ, scientific and social theories expand outward like ice crystals, and the truth is difficult to seek and harder to find. Some days, it’s a puzzle just to know what’s going on around you.

But in literature, in the stories we tell for fun, I abhor the obvious. Plain people with pure hearts, villains with no shred of decency, roads that lead straight from one boring place to another, stories where cause leads predictably to effect—these are for children, not for me. The world of literature should be as unlike reality as we can make it.

Most of us live in a world bounded by rules and promises. The basic rules are handed down by our parents, if we are lucky, and become embedded in our psyches like guard rails on a twisting mountain road.1 These ground rules involve the basics of behavior like internal honesty, ambition and effort, personal credulity and cynicism, and other character traits. Some rules are part of our learned social structure: how we greet guests of differing social stations, which fork to use and when it’s appropriate to use our fingers, how to write a thank-you note, and other things we do because to forget them would make us social pariahs. And still other rules are handed down by governments: how to drive and where to park, what we have to pay in taxes, and when and where we can cross the street.

Promises we make every day, to show up for work at a certain time, to be home for dinner, to support the children and provide for their future, to pay off the mortgage, to take a sick day only when we are really sick. This is what a human being does. This is responsibility. These are the things we owe to our families, friends, employers, and bankers. The walls made up of rules and promises are invisible but strong, like armored glass—at least if we are a person of good heart and conscience2—and these walls do not break easily, if at all.

In literature, we expect these rules and promises to be more flexible. Not absent, of course—otherwise the characters would have no conscience, no society, no state, and no civilization.3 But as readers we expect the walls to be less confining, more like rubber than glass. Of course, many stories are based on the main character suffering a great and immediate reversal or loss in his or her life—of a loved one, a job, a long-sought prize that is finally within reach, or some other disruption. The story is then about how the character reacts to the new situation, where the person’s ordinary state of being is stripped away. And then the old rules, the guides of conscience and social order, either don’t apply or have much reduced force in the character’s life and thinking.4

In my own writing, I strive to mix the two, the obvious and the mysterious. I am not a writer of horror or the supernatural per se; so I’m not talking about ghost stories or gods and demons—although stories about artificial intelligence and alien life forms would come close. But I believe that the world as we perceive it has hidden depths and dimensions that we do not know or even suspect. I like to present a character in a plain and predictable exterior world that—when he or she gets too close, feels enough frustration, thinks too deeply, or presses too hard on the glass—reveals a new perspective and a new set of opportunities.

A writer can add the element of mystery in one of two ways. One is to take the character and the story into a strange and mysterious—or simply different—setting. Most science fiction stories are like this. The character goes into space for the first time, or lands on a strange planet, or encounters a new culture with mysterious practices. In the two-volume novel Coming of Age, my strange planet and new culture are simply the future, which my two main characters visit by receiving life-extension therapies that take them far beyond the traditional lifespan of “three score and ten.”

In the novel I’m working on now, The House at the Crossroads, which is the prequel to my time-travel novel The Children of Possibility, the story develops a kind of double whammy. First, two of the main characters are from six thousand years in the future, introducing the reader to a domain and culture that are almost like the Europe of our near future, but with some significant differences because civilizations have fallen and risen since then but never strayed far from the European character.5 Then, these two characters sign up with a time-travel service and are trained to go back and maintain a time portal in a medieval Europe that is strange and different to them—and perhaps not so familiar to the modern reader, either. And in the original book, Children, the mystery was to see the reader’s present day in the early 21st century through the eyes of a time traveler from nine thousand years in the future.

A second and simpler way to create mystery for the reader is to leave something out of the story. This is the writer being not so much neglectful—“Oops, I just forgot to mention”—as intentionally deceitful. For example, the reader may not know, but will soon get enough clues to guess, that the main character is actually guilty of the crime he or she is denying. Or that the character is actually blind, or mentally disabled, or psychotic—or a machine. These are always fun—even when the reader knows up front that the character has these attributes, as in my novels about artificial intelligence ME: A Novel of Self-Discovery and ME, Too: Loose in the Network. And again, the reader sees the familiar world of the present through the eyes of a self-aware software program which must navigate both our digital infrastructure and our complex human relations.

That second method—by leaving something out—also works in a novel where multiple characters have their own viewpoints and the story, in the hands of the omniscient author, plays off one against the other. I do this even in my non-science-fiction novels, like The Judge’s Daughter and The Professor’s Mistress. The reader, like the author, knows the motivations and intentions of the characters “on both sides of the door.” The mystery enters when the reader has to figure out how one character will discover the other’s real intentions, or secret, or the trap that has been laid.

But that, too, is a source of mystery in any story. Unlike life, which is regular, endless, and virtually unplotted, we read stories because we sense there is an underlying structure, a plot, a coming climax, resolution, and denouement—but we can’t figure out what it will be. The writer, like the omnipotent god of a world two handbreadths wide, keeps the future a secret and delights in teasing and fooling the reader.

1. When I first came out to California from Pennsylvania in 1970, I was amazed at all the sharp curves with steep drop-offs that had no guard rails to keep drivers from plunging over the cliff. In Pennsylvania, even a modest embankment is buffered with steel. But driving along Route 1 in Marin County, with a vertical slope above you and the Pacific Ocean two hundred feet below, only a little berm of sand prevents an erring driver from slipping over the side. It took me a while to realize that, while Pennsylvania roads get snow-packed and icy, with lots of skids, particularly on curves, the roads in coastal California are free of ice all year round, not even wet for most of that time, and so as safe as a pair of railroad tracks—if the driver exerts a minimal amount of caution. Of course, in lashing rain or dense fog, it would be nice to have those guard rails, but Californians are a tough breed.

2. If you are not a person of good heart or conscience, then you live in a kind of fantasy world compared to those around you. You tend to think nothing touches you, no one will notice your actions, other people have no real feelings or intentions, and the law—loosely enforced and fallible as it is—will never catch up to you. That makes the rest of us fools, in the short term; and you the fool, in the long.

3. Which is the case in most zombie and post-apocalypse stories.

4. I once was told by a production executive in Hollywood that the formula for a modern movie script is: first act, the main character is living a normal life, until something changes; second act, the character tries to get back to equilibrium, but the whole world rises up and fights against him or her, preventing resolution; third act, the character finds a new strategy, a new weapon, or a new ally and wins the struggle; and, finally, the denouement, where the hero or heroine and the villain go mano a mano in a fight to the death. This structure makes a good 120-minute movie, but it’s a little simplistic for a full-length novel.

5. I believe that each geographic region and its embedded population have certain characteristics which change slowly, if at all, over time. This explains why International Marxism produced a mindset and approach to governing in Soviet Russia that was similar to Tsarist autocracy and markedly different from the loosely centralized Imperial bureaucracy of Communist China. And if the United States ever goes full Marxist, the result here will be similarly unrecognizable to either the Russians or the Chinese.

Sunday, March 26, 2017

Death by Assimilation

I get it. For many people, their true identity, their sense of self, and their greatest comfort are all found in their culture, their religion, their ethnic background, and their tribal affiliations. They exist because they are part of this something that is greater than themselves.

But, at the same time, that thing they belong to—their tribe, their gang, their political party—is smaller than the world around it. They are the self-nominated—if not actually inducted by hierarchical vetting, testing, and ceremony—members of an exclusive, select club. They are unique because they and their group are different from the whole, from the larger society, from the seething mass of people that surrounds them. Within the folds of their chosen group, they are safe. The people and things held close around them are familiar. The language, word choices, innuendoes, references, jokes, and antipathies of their group are all known and understood. Within those confines, a person cannot go wrong by accident, only by intention.

Yes, a member of a close-knit group—especially one that is known by its accents, its clothing choices, or its skin color or other phenotypic features—may feel strange, isolated, and defensive when venturing out into the wider world. But the person will also feel … special. The hairstyle, the tattoo, the distinctive dress will mark him or her as a member of the elect, someone set above the hoi polloi, partaking of a secret—or not so secret—identity which has depths of meaning unknown to the other people on the bus or in line for the next bank teller. And if the person is part of a group that actually does have secret knowledge—such as who is really going to heaven when they die, or up against the wall in the next revolution—so much the better.

For such a person, assimilating into the surrounding culture is not just an act of surrendering to a superior force. It can be a loss of that comfort, that familiarity, and that secret power. In the first stages of assimilation, in the first generation that is trying to become part of the greater society, to engage with the larger world is to risk becoming painfully visible. The assimilatee’s tongue is not yet familiar with all the new words and concepts. He or she does not know all the norms, the jokes, the transactional processes. Her dress and demeanor may not be quite right in all social situations. He expects others will laugh at him for trying to put on a show of assimilation, just as he quietly laughed at others who were not members of his in-group.

Assimilation is painful not only in the transition but also in its long-term effects. To become a member of the greater whole is to become virtually invisible, indistinguishable from the people on all sides. That is, the assimilatee no longer stands out for his or her obvious differences of dress, language, skin color or other physical features, religious preference, and so on. If a person bases his or her sense of self on these differences, then to assimilate, to become just like everyone else, is a kind of personal or cultural death.

This is not a problem in most of the world’s countries and societies. In Japan, a gaijin, a foreigner, will be forever foreign, different, an outsider. Learn Japanese, adopt the culture in every detail, become an honored master in one of the country’s traditional arts like swordmaking or ceramics, even change your name to a Japanese form—and you will still be a foreigner. Even if you are of Asian descent, so that your face and body look the part, you will still not be Japanese. Ask the Koreans who have lived there for generations.

The same is true of China and most of Asia. Most of Europe, too. Even if your ancestors originally came from France or Germany, Sweden or Poland, to live for a few generations in North America, you cannot go back again and be accepted. Maybe, after a generation or two of reverse-assimilation, the locals will forget that your heritage is really American and that you are trying to live in disguise. Maybe, in the big cities, you can blend in through the anonymity of urban, cosmopolitan life. But, at the village and town level, memories—and tongues—are long and sometimes spiteful. The locals will know who you are and from whence you came.

In the Middle East, it is not impossible to assimilate if you are a Muslim or wholeheartedly willing to convert to Islam. The religious-political system of Islam makes generous allowances for adherents of all national and ethnic types. But the power structure of Saudi Arabia will still question your Arabness, even if you have the right look and adopt the keffiyeh and agal cords. The people of Iran will know in a few minutes that you are not really Persian. They might even think you are funny—or despise you—for trying to be what you are not.

This kind of cultural and ethnic disdain has never really been a problem in the United States. Or rather, it has been a problem only for the first generation of new-wave assimilatees, but not in succeeding generations so long as they can “walk the walk and talk the talk.” This country was put together from thirteen separate colonies that by the middle of the 18th century had all come under the control of the British crown. But they started as enclaves of religious and ethnic refugees conscious of their differences: New England Puritans, New York Dutch Protestants, Pennsylvania English Quakers, Maryland English Catholics, Virginia, Carolina, and Georgia planters and slaveholders. They had their differences, but the Revolutionary War and the hardships of fighting and being fought over drew them together.

In the century that followed, the country absorbed the British and Hessian soldiers who came to fight and decided to stay on after independence,1 as well as people willing to emigrate in search of a better life: Irish and Scottish peasants, German and Swedish peasants, Italian and Eastern European peasants, mostly solid Catholics, and then Russian peasants, mostly Ashkenazi Jews. They landed on the East Coast looking for those streets of gold and, after a generation crowded into their Germantowns and Little Italies, enduring some measure of pain and strife, they moved outside those protective regions. In a generation or two, they became indistinguishable from the rest of the Americans, except for some ethnic foods and traditional dress which they trotted out and celebrated on holidays, rather more as an art form than an identity.

On the West Coast the situation was both easier and harder. Easier because, during the California Gold Rush, all those European refugees came overland or by boat into San Francisco and melted together quickly while working the placer deposits and mines of the gold fields. Harder, too, because Chinese refugees came to trade and work on the railroad, and they were more difficult to assimilate, both because their culture was harder to forget and because their faces and bodies were harder to ignore.

That has been the trouble with the most recent wave of assimilation in this country: the Japanese, Chinese, Vietnamese, Cambodians, the Middle Easterners, and of course, anyone from Africa—even if his or her ancestors have lived here and been citizens for half a dozen generations. They wear on their faces the marks of difference in the shape of their eyes, noses, lips, and the color of their skins. They have been a more difficult set of groups to assimilate. But I have seen in academia, in the biotech world where I worked, and in other places of business—and not always with government prodding—that people can be accepted, valued, and sought out for the quality of their skills, knowledge, and character. And then it is easy to dismiss superficial differences like skin color. They walk the walk, talk the talk, and everyone has come to trust their opinions and actions.2

Of all countries, though, America has made it the easiest for people of other places and other cultures to assimilate. Learn a skill, find a way to add value to the economy or to society, and you will find a place in this country.

But still, assimilation will be psychologically hard for you. You will no longer be distinguished by the easy marks of difference—your faith, your accent, or your skin color—and now you will have to distinguish yourself by the harder marks of thought, effort, and achievement—“the content of your character.” For many people, this is not only harder but simply impossible. If you are not particularly talented, have no particular interests, have no desire to learn particular skills, then you will become Joe Average, the Invisible Man or Woman, and fade into the background of an assimilated society. For many people, that can be a kind of personal death.

1. My own great-great-who’s-counting?-great-grandfather on the Thomas side was a soldier in the British Army and carried a Brown Bess on the streets of Buffalo in the War of 1812. After the war he was demobilized, or simply deserted, and lit out for Michigan, at the other end of Lake Erie. My brother still has, as a treasured relic, the musket he carried.

2. I learned an important lesson working at two different biotech companies. We had many people of Chinese, Indian, Pakistani, and other heritages working at either site, and one plant claimed thirty-seven different ethnicities among its employees. Some were full citizens, spoke fluent English, and had become fully assimilated into American culture. Some were recent arrivals, holders of H-1B visas and green cards, as well as newly sworn citizens. Sometimes, though, the person with the thickest accent and least talent for telling a joke was the most serious and dedicated technician—and you could take his or her judgment to the bank. And sometimes the person with the most fluent English and the best jokes, even native-born persons of European stock, had terrible judgment and lousy lab skills—and you always had to double-check their work. In such an environment, you quickly learn to look beyond surface differences when saleable product hangs in the balance.

Sunday, March 19, 2017

Rules for Alien Life

You know we’re going to find life elsewhere in the universe one day. It’s only a matter of time—ever-shortening time, too, as we send out more probes and, eventually, begin launching expeditions with human colonists. Either here on the planets and their moons revolving around our own Sol or on other bodies orbiting other stars, the chances are good we will find that “temporary reversal of entropy” we call life.

That is, the chances are excellent that life exists elsewhere. To approach the question in any other way would be to assume that our star, our planet, and our own situation are all somehow unique in the galaxy, if not in the entire universe. But, from everything we’ve been able to see with our light- and radio-based telescopes, our spectrometers, and other more recent detection systems over the past hundred years, our Sol is about as ordinary as a star can get. And from the hints we’ve been able to detect over the past decade or so, based on minute stellar wobbles and variations in their light output, planets seem to abound around our type of star—although most of the bodies detected so far have been massive gas giants like Jupiter and Saturn, and we’ve only recently been able to detect the smaller, more rocky bodies comparable to Earth. Still, whether we’ll be able to find life on any of Sol’s nine planets1 with their dozens of moons, or around a nearby star with a planet in the habitable zone that has been able to maintain stable conditions long enough for life to develop chemically, consolidate and maintain its gains, and then evolve into something interesting … that’s a crapshoot.

Unless, of course, such life reaches out and discovers us first …

But until we do eventually find life, what can we say about it? What can we project? Quite a lot, I believe, based on principles we can observe in the life on Earth.

First, anything we would recognize as alive is probably going to be made up of many repeating units working together under some kind of organization. Single, solitary things—other than stars and planets—don’t seem to last long in our universe. Nor are they able to reproduce, except by growing larger and eventually splitting apart into two new-but-identical subunits, like a bacterium. By “repeating units,” I’m not talking about bees in a hive or soldiers in an army, although that seems to be a pattern, too. I mean cells in an organic body or transistor gates in a complex circuit—out of the many, one.

As we’ve found on Earth, single-celled creatures have a mechanically built-in size limit. The membrane enclosing their interior fluid has limited capabilities for stretch and breaking strength. Sooner or later—really soon, in the case of Earthly microbes—the pressure of water volume inside the membrane overcomes both the cell’s ability to add to the membrane’s surface area and the strength of the bonds holding the lipid molecules of the membrane together, and everything just spills out. I suppose you might build a tank—or happen upon, say, a volcanic bubble along the shoreline—that could contain the organism’s undifferentiated cytoplasm in large volumes, with all of the cellular organelles floating free and absorbing food particles, transferring energy molecules, continuously replicating DNA-analogs, transcribing them into RNA-analogs, and translating those into protein-analogs. What might be the volume of such a bubble? A gallon? Five gallons? A whole swimming pool’s worth of cytoplasm? But aside from the fact that such a bubble would occur only by chance and its acquisition remain outside the organism’s control, eventually a feeding, replicating, expanding organism would outgrow even that huge volume.2
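
To put a rough number on that limit (this is my own gloss, not the essay’s), the physics of a thin spherical membrane says that the tension its wall must bear, at a given internal pressure, grows in direct proportion to its radius: a cell twice as wide needs a membrane twice as strong just to hold in the same pressure.

```latex
% Young-Laplace relation for a thin spherical membrane:
% at a fixed excess internal pressure \Delta p, the required
% wall tension T grows linearly with the radius r.
\[
  \Delta p = \frac{2T}{r}
  \qquad\Longrightarrow\qquad
  T = \frac{\Delta p \, r}{2}
\]
```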

No, any organic life capable of maintaining its stability and organizing its functions on its own would need to be made up of small units, cells or their alien analogs, which would work together while acquiring different traits and capabilities. In the pattern of life as we know it, cells in any advanced life form would develop into dedicated types. They would specialize to become mechanical structures like bones, skin, exoskeletons, teeth, and muscles. Or they might develop into the life-supporting processes of digestion, circulation, and bodily regulation through the manufacture and secretion of enzyme-analogs. Or they would form a complex information-transmission and -storage system for discerning the organism’s environment, directing the action of its various parts, and capturing and ingesting food … or looking up at the stars and wondering about them.

All these different functions are too complicated and require too much chemical and mechanical specialization for a single-celled creature of whatever size to manage them. Small, self-replicating, autonomous-functioning units, working together as a whole, seem to be necessary for advanced life forms. And after you acquire those differentiated units, what then?

Second, most life is going to be three-dimensional. Human beings have made interesting conjectures about two-dimensional life,3 but unless the creature glides across the core of a heavy planet like Jupiter or the surface of a neutron star, never rising against gravity, two-dimensional life is just too hard to maintain. Such an organism would have to develop and articulate complicated linkages and latches just to open and close an interior cavity for holding consumed food or waste products, because a simple tube running through a two-dimensional body would cut the creature in two. And without an interior, an exterior would hardly be meaningful. No, two-dimensional life is an interesting intellectual challenge, but not a realistic expectation.

On the other hand, three-dimensional beings capable of ingestion and digestion, orientation and locomotion, and other spatial functions must contend with having a front and a back, a left and a right side, a top and a bottom—all at the same time. And any creature in a competitive environment—one where you are surrounded by similar beings who want to hunt and eat the same food, as well as predators who want to hunt and eat you—will be forced to protect itself and project itself in all directions. If the creature is a filter-feeding bottom dweller like an oyster or a clam, covered by a hard shell, or a soft-bodied worm buried up to its mouth and/or anus in deep sand, issues of protection and projection are not so important. But a mobile organism fighting for its life out in the open needs to worry about seeing, hearing, or otherwise discovering opportunities and detecting threats that may lie in any direction, as well as having a choice of directions in which to chase its prey or flee from its predators.

Such conditions forced the explosive evolution of hard body parts, protective strategies, and sensing apparatus starting in the Cambrian period, half a billion years ago on Earth.

These conditions suggest that the organism will have more than one organ of detection. The creature may be a grazer, like a horse or cow, that needs to watch the surrounding environment for predators and so possess one or more eyes on each side of its head, each eye having a wide-angle lens. Then the eyes working together can take in a nearly complete 360-degree view. Or the creature might be a hunter, like a lion or a hawk, with an agile neck that can rapidly swivel the head to scan the environment, and two or more eyes that are set close together to present slightly divergent views, creating a perception of depth and enabling the organism to determine the range to its target.4
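To put a rough number on how those slightly divergent views become a range estimate, the underlying geometry is simple triangulation: two detectors separated by a baseline see a distant target along directions that differ by a small parallax angle, and the distance falls out of that angle. A minimal sketch, with the baseline and angular figures chosen purely for illustration:

```python
import math

def range_from_parallax(baseline_m: float, parallax_rad: float) -> float:
    """Distance to a target, given the angular difference between two viewpoints.

    Exact triangulation for a target centered between the detectors; for small
    angles this reduces to baseline / parallax.
    """
    return (baseline_m / 2.0) / math.tan(parallax_rad / 2.0)

# Illustrative assumptions: detectors about 6.5 cm apart, resolving angular
# differences down to about one milliradian.
baseline = 0.065
for parallax in (0.010, 0.005, 0.001):
    print(f"parallax {parallax * 1000:.0f} mrad  ->  range ~ {range_from_parallax(baseline, parallax):.1f} m")
```

The smaller the angular difference the detectors can resolve, the farther out the creature can judge range, and the same trade-off applies whether the baseline is two close-set eyes or two ears timing an echo.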

The need for multi-directional mobility suggests that the organism will have more than one leg or fin or other limb for propelling itself against the ground, the surrounding fluid, or some other feature of the environment. A single limb, like a pogo stick, is too hard to control for both propulsion and steering. And a single fin is too hard to use for both purposes—although Venetian gondoliers perform admirably in maneuvering their boats with a single fixed oar, and octopi and squids do well with a single siphon in emulating a soaring jet plane under water.

These are just a few of the requirements—multiple organizing units, three-dimensionality, multiple environmental detectors, and multiple propulsive appendages—that an alien life form would need to meet. They are the minimal conditions upon which evolution—the preservation of accidental changes through improved adaptation to the environment—can work to the organism’s advantage. And the ability to suffer such changes in the first place, and so take part in evolution, would be yet another requirement.

There may be more requirements,5 but these will begin to shape the rules for alien life. They suggest how future human spacefarers might recognize life on another planet and so distinguish its inhabitants from, say, a rock or a pot of chemical sludge.

1. Nine, if you count Pluto—which I, as a traditionalist, still do. Eight, if not. But again nine, if you keep up with the news that analysis of the orbits of icy planetoids in the outer reaches of the solar system suggests a “massive Earth” somewhere beyond Pluto. The science is not “settled.”

2. And this presumes the cytoplasm in so large a volume could still function homogeneously—that is, discrete operations like acquisition of food, conversion and production of energy molecules, and replication of DNA to produce needed proteins could all be maintained uniformly throughout the mass. If not, then some parts of that gallon of protoplasm might be thriving and expanding while others starved and became dead zones. Such a condition could not possibly be healthy.

3. See, for example, Flatland by Edwin Abbott from 1884.

4. This presumes the organism is passively receiving light or some other electromagnetic wave reflected off the surrounding environment and its objects of interest. Otherwise, the organism might detect its environment by sending out a signal and interpreting the return echo, like a bat. Then the creature’s brain would need to distinguish infinitesimal time lags in determining distance to an object. And it would still need two ears or other detection devices in order to gauge direction.

5. For example, a multi-celled organism will likely have evolved from an active, competitive biome of single-celled organisms, as did higher life forms on Earth. As such, the multi-celled creature will need to maintain an analog to the human immune system in order to protect itself from predation by those competitors in the biome on a cellular level.

Sunday, March 12, 2017

The Future of Self-Driving Cars

Everyone seems to think that self-driving cars are just a few years away. And we have already tried them on the streets of San Francisco—although some experiments, like Uber’s, have had a human sitting in the front seat, monitoring the car’s progress, ready to take control if something should go wrong.1 Experiments with self-driving cars are also going on in other cities.

I am not convinced that this is the future. Not that I am a Luddite. Far from it! In the distant future—which I postulate as thirty to fifty years from now, and getting closer all the time—machine minds with mechanical senses will perform many non-repetitive tasks that require the system to evaluate novel situations and make independent judgments. And these minds will still be subject to the same errors of fact and interpretation that plague human beings today. Driving will certainly be one of these difficult tasks—but first, a whole lot of development and testing needs to take place.

What happens when a person drives? The process is much more than visually analyzing the twists and turns of the road and tracking the wheels to follow them. Or measuring the speed of the car ahead and adjusting your own speed to maintain pace. Or perceiving that the brake lights on the car ahead have come on and applying your own. Or detecting an obstacle in the road and swerving, slowing, or turning to avoid it. Those tasks will move you forward and keep you safe for about two car lengths. After that, everything else is a matter of conjecture.

A good driver goes through a sophisticated mental process called “SIPRE,” which stands for seeing, interpreting, predicting, reacting, and executing.2 The driver is not simply reacting to visual cues, but interpreting them as part of a dynamic situation and predicting the course of that situation. Essential to those two acts is the prior development of and continuous referral to learned experience. The human mind interprets a situation based on what it has seen in the past and recalls in the present.3 It makes predictions based on past outcomes. This is the difference between a new driver, who is nervously dealing with the unknowns on the road, and an experienced driver, who knows what to expect and how he or she will react in many different situations.
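As a rough illustration of why SIPRE is more than a reflex arc, here is a minimal sketch of a see-interpret-predict-react-execute loop in which the interpreting and predicting stages lean on stored experience. The structure and names are my own schematic, not anyone’s actual driving software:

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    """A crude stand-in for learned driving experience: remembered situations and meanings."""
    memory: dict = field(default_factory=dict)

    def recall(self, situation: str) -> str:
        # Interpretation leans on what has been seen before.
        return self.memory.get(situation, "unknown situation")

    def learn(self, situation: str, meaning: str) -> None:
        self.memory[situation] = meaning

def sipre_step(scene: str, experience: Experience) -> str:
    see = scene                                # S: raw perception
    interpret = experience.recall(see)         # I: match against prior experience
    predict = ("hazard ahead" if "brake lights" in interpret
               else "traffic flowing")         # P: project the situation forward
    react = ("slow down" if predict == "hazard ahead"
             else "maintain speed")            # R: choose a response
    return f"execute: {react}"                 # E: carry it out

driver_memory = Experience()
driver_memory.learn("car ahead, lights on", "brake lights coming on in my lane")
print(sipre_step("car ahead, lights on", driver_memory))   # execute: slow down
print(sipre_step("empty road", driver_memory))             # execute: maintain speed
```

The point of the toy is the middle stages: without a memory of prior situations to interpret against, the loop collapses into the bare see-and-react behavior described two paragraphs above.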

A good driver also maintains an awareness of the entire flow of traffic, not just the lane and the car immediately ahead. Traffic patterns are a collective reaction, like fish schooling or birds flocking. Each car reacts individually based on the movements of those around it. The person in the car ahead may not immediately react to a change in the pattern; so an alert driver usually watches further ahead—and to the sides in multi-lane and freeway driving—to detect potential slowdowns, lane changes, and other early signs of a breakup in the pattern. And even on a single-lane or two-way street, the alert driver monitors parked cars, objects at the curb, and events on the sidewalk. The driver’s awareness extends beyond the immediate situation and his or her own immediate intentions.

A good driver also understands human responses and reactions behind the wheel. Often, in merging around an on- or off-ramp, two cars will be on a converging course. First one slows, to allow the other into the lane. Then the merging car slows, to avoid hitting the first car. Then the first one slows some more, to give the second car another chance to merge, and so on right down to a dangerous, slow-speed encounter.4 Human minds will sometimes get into this kind of “After you, Alphonse” routine—until one of them wakes up, realizes that the other driver really does mean to defer, and so takes the initiative and speeds ahead. Similarly, when two cars arrive at the same time on the adjoining corners of a four-way stop, they may pause while trying to be polite and defer to the other driver, who might just have arrived a second earlier. They will then wait until one of them either gestures for the other to go or takes initiative and starts forward into the crossing.5 Human beings are aware of other minds and try to read human intentions.

Awareness of the flow of traffic and of the intentions and motivations of other drivers represents additional layers of programming that would have to run in parallel with, and in coordination with, the main steering-and-braking program that guides a self-driving vehicle. These layers are the “P” part of the SIPRE sequence. These additional algorithms could certainly be programmed in—but not simply as a set of rules. These types of awareness represent a human driver’s judgment based on experience. Certainly, the intelligence that drives an automated car could be loaded with samples from the internal databases of cars that preceded it in production and service. But each car would also need a set of subroutines and memory spaces that allowed it to learn from its own unique experiences under local driving conditions and then correct errors of impression and interpretation as it develops “street smarts.”

This is because much of the human driver’s experience and many potential sources of error have to do with the “I” part of SIPRE. People look and compare what they see with things they have seen before. Current software and hardware engineers are working hard on digital camera technology and on interpreting pixelated images in a programming environment. For example, some computer systems are now able to identify faces in a crowded scene. Even the digital camera in a smartphone, which must compete with so many other functions packed into its limited operating system, can detect and focus on human faces in its pre-snap image. But human faces are a relatively easy target because they offer a predictable set of reference points—two eyes, a nose, a mouth, a chin, and so forth—and a predictable overall shape.6
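For a sense of what that off-the-shelf face detection looks like in practice, here is a minimal sketch using the stock Haar-cascade frontal-face model that ships with the OpenCV library; the image file name is a placeholder, and this illustrates only the general technique of matching predictable reference points, not any carmaker’s vision system:

```python
import cv2  # OpenCV, which bundles pre-trained Haar-cascade face models

# Load the stock frontal-face detector distributed with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# "street_scene.jpg" is a placeholder file name for illustration.
image = cv2.imread("street_scene.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The detector looks for the predictable arrangement of eyes, nose, and mouth
# in roughly frontal views of a face.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
print(f"Detected {len(faces)} face(s)")
```

Even this standard detector works best on roughly frontal faces, which is part of the point the later footnote makes about oblique, partial views of a head.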

Out on the road, you might think cars and trucks have a similarly predictable set of referents: wheels, taillights, headlights, bumpers, fenders, and so forth. But the automated driving system will have to recognize much more than the components of surrounding traffic. It will need to recognize all sorts of obstacles: fallen trees, boulders, and even something so subtle as the dark spots in the road that represent wheel-damaging potholes and tire-blowing debris. It will have to interpret pedestrians not just as upright, adult figures—like those stick figures on the warning signs—but as people in wheelchairs and people involved with pets, children, and wheeled encumbrances like shopping carts, strollers, and walkers. The system will have to interpret—correctly and safely—images that are obscured by rain, dust, the glare from bright surfaces, and shadows from low-angled sunlight at dawn and dusk.

And the system will have to extend its awareness beyond the road’s shoulder to the general environment as well. A dog on the side of the road might be about to dart into traffic. A dog or a ball bouncing into traffic might be followed by a child. The driving system will have to judge these ranges and speeds just from its camera imagery. For another example, a motorcycle and rider stopped at the side of the road some distance away might present an unfamiliar image and be difficult for a mechanical system to interpret. From certain angles, the image has the same general shape as a cow. But a cow standing beside the road creates an entirely different set of predictions from a motorcycle pausing beside the road. And if either one suddenly moves out into the traffic lane, it will generate its own unique reaction sequence.

All of these conditions and situations will be difficult to test in the software lab before a system is sent out on the road. Traditional software testing tends to involve a regimen of known inputs against which the system passes if it generates the correct—or expected—outputs. This works fine if you feed a math program a number problem and look for it to give the correct—and unique—answer. But it will be impossible to test a driving system against all the strange things a digital camera might pick up while traveling the open road with the scene changing at sixty-five miles an hour and twenty-four frames per second, amid distractions like blowing trees, tumbling debris, flying dust, and angular distortions confusing the image. In short, you don’t know what you’ll find out on the highway until you actually see it.
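To make the contrast concrete, the traditional regimen looks something like the following pytest-style check, with the function and numbers invented purely for illustration. A test like this works only because there is a single correct answer to assert against, which a live camera feed on an open road never provides:

```python
def stopping_distance_feet(speed_mph: float) -> float:
    """Toy model for illustration: a crude rule-of-thumb stopping distance."""
    return speed_mph * 3.0

def test_stopping_distance_at_65_mph():
    # Known input, one expected output: the test passes or fails unambiguously.
    assert stopping_distance_feet(65.0) == 195.0
```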

Driving is more than simply following the road. As a skill set, the level of awareness and prediction it requires can tax even the most alert and experienced human mind. I am sure that, eventually, artificial minds with human-scale awareness will be developed and made small enough to fit into the dashboard of a car. And, as these machines become more ubiquitous, the traffic system will also change to accommodate them. For example, signs will have a digitized radio component for the robot driver as well as visual components for the human driver. Special situations, like construction zones and roadblocks, will generate their own emergency broadcasts. And other users of the roads, like pedestrians and bicyclists, will be advised to wear armband transponders to help the driving machines recognize them and accurately interpret their actions.

All of this will come one day. But my sense of the technology is that we are not there yet. We won’t get there in a future defined by the next couple of years. And the last thing we all want is a robot car running down a pedestrian with a shopping cart because it thought she was a funny-looking bicycle.

1. That experiment ended in December 2016 over a regulatory dispute: the California Department of Motor Vehicles wanted the Uber experiments to have a special permit required for fully autonomous cars, while Uber insisted it didn’t need one because it still had a potential driver. Also, the Uber cars were not clearly labeled as test vehicles.

2. See SIPRE as a Way of Life from March 13, 2011. And yes, I learned about this in traffic school while expunging a ticket.

3. And note that in this discussion I am addressing mostly visual imagery or measurements that can be taken by radar or laser rangefinding. Auditory awareness and recall is a whole other matter, and not just to hear the squeal of tires and shouts of “Dumb ass!” from other drivers.

4. This, of course, is in an ideal world where people are polite, pay attention to others, and care about their intentions and convenience—rather than honking, gesturing, and barreling past them.

5. In the California Driver Handbook, this situation is supposedly solved by the rule that the driver to the right has the right of way. But not everyone knows their right from their left, and the general rule is to yield your own right of way to avoid collisions.

6. I haven’t tried it, but I wonder if a smartphone would frame a human head from a rear three-quarter view that included only the nape of the neck, ear, cheekbone, and partial eye socket? A human being could certainly tell that this was a person’s head and not, say, a cabbage.

Sunday, March 5, 2017

Ends and Means

“The ends justify the means” can probably be traced back to that ultimate pragmatist about political power, Niccolò Machiavelli, in The Prince. Get your political outcome right, so that everyone is happy and content with it, and no one will look too hard at how you have achieved it. If you asked me to put my finger on the starting point for the downfall of our civilization, I would point to that.

Except … people have been practicing that kind of consequentialist thinking ever since the legions of Rome marched into the rest of the world with the goal of giving everyone better roads, cleaner water, and proper laws. And before that, the Greeks of Macedonia marched into Persia to give everyone the blessings of Athenian democracy. And earlier, the Sumerians of Uruk marched across Mesopotamia to give everyone a clear understanding of the gods, as well as the art of writing.

The 20th century, in my view, started going seriously wrong with the Marxists—apart from that family feud among the progeny of Queen Victoria, which resulted in the first World War, which led inevitably to the second World War. But Marx himself apparently never said that the ends justify the means. It was the Bolsheviks’ favorite tactician, Leon Trotsky, who adopted it as the movement’s philosophical underpinning. After all, when you’re working to increase social development, break the dominance of man (singular) over man (plural), and bring about the blessings of utopia—it doesn’t matter how many banks you rob, lies you tell, and whose heads you crack. “The end may justify the means as long as there is something that justifies the end,” Trotsky wrote in Their Morals and Ours. This was in 1938, after Stalin had already killed millions in the Ukraine and devoured every member of his own party. To me, this dictum sounds like circular reasoning. Trotsky’s head was filled with worms.

There is a huge political problem with “the ends justifying the means.” In the competitive marketplace of ideas, not everyone may agree with your analysis of risks and benefits, and not everyone may hanker after your vision of utopia. When you have to justify those ends—which Trotsky simply assumes will be proclaimed by fiat—then you would seem to be taking an unfair shortcut. And when expediency is your publicly announced credo, you lose all credibility among reasonable, fair-minded people. In a fair political fight, where the issue is not yet decided and the mass of people are still allowed to make up their own minds, you will already have yielded—nay, thrown away with both hands—a politician’s greatest asset: his or her credibility.

At the campaigning stage, when a politician is bargaining with the people, and the people still have a choice, all any politician has to offer is a series of promises. Everything else—determination, performance, action—still lies in the future. If you truly believe, or even hint you believe, that your stated purpose justifies any means you might use to obtain it, then you are admitting that nothing is beneath you. Lying, deception, vote fraud, assassination, armed insurrection—all are permissible if they achieve your ends, but not necessarily anyone else’s. And you are also admitting that you have no personal honor, no sense of integrity or dignity higher than the purpose which you have avowed. You are not a person of principle; you are a person ruled by a particular political policy or doctrine or goal.

There are certain things, many things, that a serious, thoughtful person will not do. Not because they are illegal, inexpedient, or damaging to reputation, but because they violate that person’s internal code. What code is this? It is the internalized rules about truth seeking and truth speaking, about reciprocity and fair dealing, about compassion and caring for others, about personal bravery and self-restraint, among other virtues. In a more chivalrous age, this set of internal rules was called a code of honor. For Christians, it is “getting right with the Lord.”

If you are a purely political creature,1 you are all about the transactions between the individual and the group, and between one group and another. This is the exterior side of life, or living as it may be viewed from the outside. When you look at human beings that way, it is too frighteningly easy to confuse them with animals—or with physical things, like counters on a board or numbers in a column.2 Humans as the object of external study become indistinguishable from cattle or cockroaches. You can analyze them like rats in a maze, or in terms of predators and prey. You can construct theories about them as if they were inanimate objects, like dolls or windup toys. And if you start thinking of people as cattle, it isn’t hard to imagine that they need a superior sort of being, a herdsman, to guide them and give them greater purpose—kind of like a revolutionary vanguard which will impose its greater vision on the faceless mob.

But that mob is made up of individuals. And, whether absurdly or not, each of them—each of us—believes we have our own purpose, our own special place in the universe, our own moral character, and our own, very personal destiny. We may be political entities assigned automatically to family, tribe, state, and ethnic or cultural background. But we are also individuals who feel we have a right to identify with what we perceive to be our own kind and, in most cases, make a personal choice about what kind that will be.

Other people’s ideas about ends and means cut across this sense of personal destiny and affinity like a railroad track cutting through a private park. Unless we agree to the iron pathway, adopt it for our own, and choose to follow and be guided by it—we aren’t going to like it. In fact, we will fight it with our last breath.

Individuals may be unreasonable and careless, distracted and incapable of finding and following the good life. But in the end, the focal point of all action, both political and personal, is individual, centered in one mind at a time, one will, and one sense of purpose.

1. Yes, I know, Aristotle’s definition: “Man is a political animal.” We are social creatures, after all, inextricably involved with one another, even more than we are concerned with nature or the gods. But this is the Greek view, which was good at stripping away surfaces to get at the underlying bones. For the Greeks, everything that was personal and private was the realm of the idiōtēs, a person without public station or function. This is the origin of our word idiot.

2. That was certainly Stalin’s view: “A single death is a tragedy; a million deaths is a statistic.”

Sunday, February 26, 2017

Nothing is Unthinkable

When I was growing up, my parents often had to tell me, “You can’t say that!” I suppose most parents tell their children this as young minds and young mouths test the limits of discretion. But at the same time what I also heard my parents saying was, “You can’t think that.” Meaning certain thoughts and ideas were so dangerous, evil, or just plain wrong that they simply could not be entertained.

In reaction to this, I resolved at a young age that no thought coming into my head from my own perceptions or imagination was unthinkable, no idea inconceivable. Many things should not be said, of course. And most things certainly should not be done. But thinking about them, examining them, trying to understand their basic premises, their true natures, and their consequences—that has to be acceptable. For me, this resolution was an act of personal bravery: I would trust the sturdiness and stability of my own mind against any bad or corrosive thoughts. I would dip in the pool of evil or error, mentally, and still retain my fundamental self.

I was determined to look my enemies in the face and understand their thoughts and ideas. This is why I studied Russian in high school.1 During this time, also, I read several books about Communism and the Russian Revolution. For those of us growing up in the Cold War, this was the face and imagination of our enemies, and I wished to understand them. If I had been in school just before or during World War II, I probably would have studied German or Japanese—if those courses were even available.

It is a premise of mine that you cannot find the truth unless you are willing to entertain falsehood. You cannot understand the good unless you explore evil. You cannot know what works if you won’t dissect and examine the ideas, systems, and mechanisms that fail. And again, you have to trust the stability of your own mind and imagination to come out sane on the other side. You also have to trust your innate sense of truth and your preference for goodness to guide you in these explorations and examinations.

I recently published a blog about the politically divided nature of this country.2 As I worked to articulate each side in the debate—left and right, progressive and conservative—I was conscious of trying to represent the issues fairly from each point of view. I did not want my progressive friends to feel they were being misinterpreted, nor my conservative friends to think they were being dismissed. Fairness is a primary virtue with me, and I can understand and find some truth and utility in both points of view. But please don’t think, just because I have thought deeply about a viewpoint and come to understand it, that I must therefore approve of it and believe it to be right. I have studied the Marxists, understand their utopian dreams, and still think they are utterly foolish—if not downright wicked.

What I never expected, at the time of my decision to entertain unthinkable thoughts, was how useful this attitude would be in teaching myself to write novels. The essence of storytelling is to put characters into emotionally wrenching but instructive situations, which always involve conflicts, risks, and losses. Sometimes the possibilities of risk and loss can be represented by natural events—a tsunami struck, a fire broke out—and the characters can remain good, sunny people—just like the author him- or herself—who are only struggling to survive. But the best stories involve conflict between opposing minds and goals: ambition and vulnerability, freedom and security, greed and altruism, good and bad intentions. Unless the author wants to restrict the story’s viewpoint to the sunny minds and pleasant thoughts of its heroes and heroines, who are more done to than doing, the author must occasionally enter the minds, entertain the thoughts, and explain the motives of assassins, warlords, megalomaniacs, thieves, rapists, psychopaths, damaged children, cold-blooded aliens, and soulless artificial intelligences. Being able to think the unthinkable just comes with the job of storyteller and novelist.

It must also be the job of an actor or actress. In order to play the villain—a role that occurs in almost every story and must be portrayed on stage or screen by someone—the performer must enter and examine the mind of a deranged, damaged, or purely evil person. Certainly Shakespeare, who was both actor and playwright, had this experience when he created a villain like Richard III or a beast like Caliban.

Sometimes, also, the reader and the audience may get confused. Henry James wrote a short story, “The Author of Beltraffio,” in which the wife of a writer of morbid tales determines that his son would be better off dying than growing up under the man’s baleful influence. In the minds of others, the author or the actor can sometimes become confused with the fantasy that he or she is portraying, often with instructive or cautionary intent.

I had a similar experience with one of my novels, Crygender, from 1992. The title character starts out as a ruthless assassin, who then takes refuge in an altered physique and identity and runs an internationally famous bordello, located on Alcatraz Island in the San Francisco Bay of the future. Given the story’s setting and its main character of questionable ethics and sexual orientation, some explicit scenes of both murder and debauchery were mandatory in the telling.3 To this day, the book remains the favorite of some of my readers and the point at which others—my wife included—simply gave up on my writing.

But still, I stand by my decision. Nothing is unthinkable. This is the only way to dive deep into murky waters and, occasionally, come up with a pearl.

1. I give credit for this opportunity to a really excellent school board and high school administration in Warren, Pennsylvania, where I happened to be in those years. They offered Latin, French, German, Russian, and Spanish as language electives. We had a pair of wonderful teachers: John Stachowiak, who taught Latin and Russian, and John Greene, who taught French and German and also offered early-morning enrichment courses in Swahili and Portuguese. Both men were graduates of the Defense Language Institute in Monterey, California, which is the training ground for many in America’s diplomatic and intelligence services.

2. See The Insurrection of 2017 from January 29, 2017.

3. In my defense, the novel originally started as an outline from one of my potential collaborators, but the senior author dropped out of the project when shown the completed manuscript. The book was subsequently published under my own name as a solo effort. To quote from Cardinal Richelieu in The Four Musketeers: “One should be careful what one writes … and to whom one gives it. I must bear those rules in mind.”

Sunday, February 19, 2017

Something Outside Yourself

The other night I was watching a movie about an unhappy woman, actually three unhappy women. It doesn’t matter which movie it was, because it seems that these days all the movies and books are about unhappy people—except for science fiction, where people are usually running or fighting for their lives, and movies based on comic books, where the main characters are superheroes with no personal problems at all who are fighting each other.1

Anyway, at one point in the movie I said to myself, “This woman needs to go write a book.” Not, mind you, a book about her unhappy life, her frustrations, her poor choices, and her existential ennui. God, no! But a book about something other than and outside of herself, like the conflicts and eccentricities of the Tudor dynasty, or the life cycle of the sand flea, or the probability of finding life on Mars. “Go write a book”—that’s my solution to life’s problems and the greater issue of life’s existential meaning.

This is because, since the age of sixteen or maybe even a few years before that, I have felt the psychological pressure to write novels. No, rather, I have accepted the psychological burden of knowing that my purpose in life was to line bookshelves, to tell stories in the 60,000-word range, to invent unreal people in unreal situations and then observe and report on them. Whatever you want to call this activity, it has been part of my function in life which I have totally accepted and left unquestioned and unexamined. My daily business for as long as I can remember has been to eat and sleep, bathe and groom, exercise for health reasons,2 attend to the paying job and all its obligations,3 take care of the family, read for pleasure, do household chores, walk the dog, plus think about the book currently in hand and make time to sit down and push the page marker forward.

My books don’t have a message; I’m not promoting a political or economic agenda. They are not a reflection of my life; I’m not trying to tell my story or explain my point of view. Other than the fact that the plots and characters come out of my own thoughts and experiences, and so must reflect something of the way I think and feel, these are people and situations meant to stand on their own. The books come through me, like light through a prism. I have no sense of actually originating them, and when they are finished, I feel no special glow of recognition for them.

Whether these books are good or bad is really not my concern.4 I work on the story and the text until they are as good as I can make them. When I’m done, the book is as good as it can be. But whether other people will find it good, or interesting, or worth reading … that’s not something I can control. Whether the book will sell and make money is a matter of speculation, not purpose. If an agent or editor told me I would sell x percent more copies or make y percent more money by writing in a different way, or in another genre, or taking the story in a different direction5 … that’s not something I can really do. The stories are coming through me, originating outside my conscious control—if anywhere, from the depths of my subconscious—and I’m not the master of that process.

I did not intend this meditation to be an explanation of my writing process, other than to show it is something outside myself. The act of writing and completing a book exists, for me, as something more important and more meaningful than what I am thinking or how I am feeling at any one time. If I am feeling lazy, bored, or sad at the current moment, I cannot invigorate myself or cheer myself up by turning to the writing—not unless I already have an outline in hand and the story is active in my brain, insisting that it be written. And when the story is “hot” and ready to be told, it doesn’t matter if I’m sad or tired. I must drop what I’m doing, go to the keyboard, and start telling it. All I know is that, at the end of the day, if I have solved a knotty plot problem with a solid patch of outline, or written a scene that worked and advanced the marker—or, when I was at the paying job, had just finished writing a long and technically important article or procedure—then I know a sense of peace. For that day, at least, I have fulfilled my purpose. The internal word tank is empty, and a new idea has been made real on the screen, in the file, eventually to appear on another screen or on paper for someone else to read.

When I tell the unhappy women in that movie they should “go write a book,” what I mean is they should find something bigger, more important, more involving than their own lives, something outside of themselves that demands attention and offers fulfillment. They should find that thing and surrender to it, dedicate their lives to it, leaving it unquestioned and unexamined. I say this not because they are women—oh, no!—but because they are human beings. For some women, taking care of their children and their families is a greater purpose in life. Just as, for some men and women, building a professional career and making money are that greater purpose. But if someone can take up those burdens and still feel angst and ennui, then I would place children or career among those basic life tasks along with exercising, doing household chores, and walking the dog.

For Michelangelo, the greater purpose was the Sistine Chapel ceiling and, also, releasing heroic figures trapped in marble. For Mother Teresa, it was helping the poor of Calcutta. For some people, it’s volunteering in the community, taking up a political cause, acting in amateur theatricals, or making music. For some, it is becoming a good soldier, or a surgeon, or the Wichita lineman. The point is, the undertaking must be active, not passive, and the effort must be all-consuming, not a pastime. It must be the thing that makes you live more brilliantly when you are doing it; when you are unable to do it, or prevented from doing it, you die by inches. For some people, it is enough to ask God to reveal their place in His plan—but for me, that’s like waiting to conceive the plot and characters of the next book, which will take me through the next twelve to eighteen months to complete.

The tragedy of the human condition is that life on Earth has no particular purpose. Your father’s sperm met and fertilized your mother’s egg, and so you are here. At a greater remove, DNA acquired the ability to encode and retain certain protein sequences—or, if it misplaced them, to invent new ones—and so life appeared. For all plants and the vast majority of animals, this is enough. Eat and sleep, groom, fight, and reproduce are all the purpose their lives need; the animal’s occupation and its goal are simply survival. But for human beings—and maybe for dolphins, whales, and elephants, too—with our larger and more complex brains that engender self-awareness and raise questions, mere survival is not enough. Because we conceive of ourselves as unique beings, and not replications of a species type, we seek for ourselves a unique purpose, something of our own to do, greater than survival.

Life itself, mere existence, will not give you this purpose. It is our greatest freedom—and life’s great trap—that each person must choose and decide for him- or herself what purpose, what destiny, this life will fulfill. It does not have to change the world or leave a monument to which others will look up and offer praise. It does not even have to be something that others will understand. And it may not be something that you have bravely and consciously chosen for yourself. My focus on books came, I think, from my grandfather’s love of reading and collecting certain authors. And then, somewhere in there, someone else in the family—perhaps my father—might have praised the works of a particular author. And I do remember reading, at a young age, that Nathaniel Hawthorne once told his mother he did not think he would be much good at the law or medicine, but what if he could give her a nice little shelf of books to read?

In this way, grandparents, parents, families, and teachers shape and point the earliest ideas of young children. They place the notions and start them spinning to become lifelong passions. And those of us who grow up to follow those passions risk everything—love, health, sanity, and even life itself—to fulfill them. And we are thereby fulfilled.

1. All right, the movie I was watching was The Girl on the Train. It’s about three women immersed in their own problems, which they eventually find out are the same problem. Plus, of course, booze is involved.

2. Because I do karate katas as a form of aerobic exercise, the workout also helps maintain my balance, coordination, flexibility, and psychological preparedness against attack.

3. Or this was part of my life until I retired. Now the book writing is my day job. It doesn’t pay much, but that’s not the point.

4. Here I am reminded of the line in C. S. Lewis’s The Screwtape Letters: “A man is not usually called upon to have an opinion of his own talents at all, since he can very well go on improving them to the best of his ability without deciding on his own precise niche in the temple of Fame.” I find that thought comforting.

5. For a while, at Baen Books, I wrote collaborations at the direction of the publisher. That is, I accepted an outline and partial notes—in whatever state of preparation—from a senior author and sat down to write the book; then the senior author would get a chance to read and correct it. I found this process tremendously instructive, because it gave me insight into how other authors think and prepare. These weren’t my books, of course, but like the books I wanted to write, they came from somewhere outside myself, and I could do the work. And no, they didn’t make much more money than the novels that came out of my own head.