Sunday, January 28, 2018

Coming Full Circle

Do-Bee

I finally got to watch the movie The Circle on Netflix. It’s a chilling vision of where immersion in social media might take our culture, and that makes it a cautionary tale. The vision of people living in full public exposure, where “secrets are lies” and keeping anything private is cause for public shame, reminds me of Yevgeny Zamyatin’s dystopian novel We, which I’m sure The Circle’s original author, Dave Eggers, must have read.

The movie is a depiction of peer pressure brought to perfection. By binding all of connected humanity into one hyperactive zeitgeist, this fictional social media software creates a howling mob. But does my analysis go too far?

I am reminded of the nature of the culture we’ve created in America. We are cooperative as much as we are competitive, especially among the younger generation brought up with the teachings of Howard Zinn and the new “woke” enlightenment. When socialism finally comes to this country, it won’t be with the armbands and jackboots of Nazi Germany, nor with the grim-faced commissars of Soviet Russia. Instead, we will coerce people with the smiling positivism of the Romper Room lady: “Be a Do-Bee, not a Don’t-Bee!”

Smiling, positive peer pressure—supported by the veiled coercion of local laws and proprietary rulemaking—has accomplished a lot in my lifetime.

For example, when I was growing up almost everyone smoked,1 and they smoked almost everywhere. My parents were both pack-a-day Camel smokers. I started smoking a pipe—never cigarettes—during my freshman year in college, three years after the Surgeon General’s 1964 report on smoking and health. We lit up in restaurants, in cars, in every public space, and in every room in the house. The ashtray was a familiar artifact in every home and office.

To counter this, there was no national ban on cigarettes, no repeat of the Volstead Act enforcing the prohibition on alcohol. Instead, we got warnings citing the Surgeon General printed on every package of cigarettes. We got increased excise taxes on every pack, which made smoking prohibitively expensive for many people but especially the young. We got laws making the sale of cigarettes to minors illegal, in an attempt to stop them from ever taking up the habit. And the age limit in California was recently raised from 18 to 21 years old, comparable to limits on alcohol sales.

To change public habits and perceptions, we started by introducing both smoking and non-smoking sections, at first in restaurants. This was followed—at least in California, but soon after almost everywhere—with complete bans on smoking in restaurants, bars, and other public spaces, including indoor workplaces. As of 2008, you cannot smoke in a car with a minor present, under penalty of a $100 fine. As of 2012, California landlords and condominium associations can ban smoking inside apartments and in common areas. And smokers who go outside a public building to practice their vice must stay at least 20 feet from an entrance doorway, an open window, or even a window that can be opened.

This flurry of taxes, laws, and private rulemaking was accompanied by advertising campaigns that showed the damage smoking did to people’s lungs, hearts, and skin. It was supported by statistics about the occurrence of cancer and other life-threatening diseases, not just for those who smoked but for anyone nearby. Smoking became not just an unattractive, dirty, and smelly habit; it became a form of assault on the health and safety of others. In 1998, the attorneys general of 46 states sued the country’s four largest tobacco companies over tobacco-related health care costs that their states paid under Medicaid; they reached a settlement under which the companies curtailed their advertising efforts, agreed to pay perpetual annual amounts to cover health care costs, and agreed to help fund the efforts of anti-smoking advocacy groups.

Sure, you can still buy cigarettes in this country—if you have the money, and a place to enjoy them, out of sight of every other living person.

In my lifetime, I have seen a parallel revolution in the handling of garbage. It used to be that everything you didn’t want went to one of two places. Your food wastes went into the garbage disposer in the kitchen sink, to be ground to pulp and flushed into the sewer lines or the septic tank.2 Everything else went into a paper grocery bag—later a plastic grocery bag—in the can under the kitchen sink, to be emptied into the bigger galvanized cans in the garage, taken to the curb once a week, picked up by the municipal garbage truck, and hauled away to some distant place out of sight and out of mind. Where the hauling ended up was a matter of local practice. Some communities dumped their “municipal solid waste” in landfills. New York City put its waste on barges, hauled it out to sea, and dumped it into the abyss of the Hudson Canyon.

Municipal solid waste was anything you didn’t want or no longer needed. It included food scraps you couldn’t be bothered to feed into the garbage disposer; waste paper, newspapers, wrappers, and corrugated boxes; soda cans and any pop bottles some kid hadn’t already taken back to the store for the two-cent redemption; broken furniture and appliances; old paint and construction debris; and dead batteries, worn-out car parts, and used motor oil and filters. If you were through with it, it went out to the curb.

And then the recycling movement started, prompted by environmental concerns about landfills and pollution.3 At first, we saved our wastepaper to be recycled into newsprint and cardboard boxes—in an effort to save the trees.4 Then we were recycling those aluminum cans, any glass and plastic bottles that didn’t carry a redemption value—like California’s five and ten cents on plastic soda bottles, depending on size—and other distinguishable and potentially valuable materials. Originally, each of these commodities had to go into its own bin. But now the recyclers take paper, metal, and various grades of plastic in the same bin, because they will shred it all and separate the bits with magnets and air columns.

In the greater San Francisco area, we began seeing metal tags on storm grates reading “No Dumping – Drains to Bay,” usually with a decorative outline of a fish, to keep people from putting motor oil, battery acid, and other pollutants into the water supply. And it’s illegal now to dump these things—along with dead batteries, discarded electronics, and broken furniture—anywhere but at certified recycling centers. More recently, in 2016, California began requiring businesses and housing complexes to separate and recycle their organics—the things we used to put into the garbage disposer—to be hauled away for composting.

So now, instead of dumping everything down the sink or into the garbage can, as we did when I was a child, we rinse out our food and beverage cans and bottles and keep them in the closet until the recycle truck comes. And we put our coffee grounds, vegetable peelings, and table scraps in a basket—my wife used to keep them in the freezer, to cut down on odors and insects, a practice I continue—and hold them until the composting truck comes. We hand-pick, sort, and pack our garbage to go off to the appropriate disposal centers.

All of this change was accomplished not just by laws and fines but more directly by a change in social norms, similar to the anti-smoking campaigns, intended to make those who failed to sort and wash their garbage seem not just careless but derelict in their social duties—Don’t-Bees—and subject to public ridicule. Of course, the movement has not been without its downsides. People who can’t be bothered to take their garbage to the appropriate recycling centers now dump it at midnight on any vacant lot or in someone else’s yard. And alert criminals, who know what the recycling centers will pay for valuable commodities like refined copper, are stripping the wiring out of public parks and farmers’ pumping stations, also at midnight.

Social engineering through a mixture of taxation, regulation, and peer pressure is now an accepted tool in this country. Smoking and garbage sorting are subject to popular taboos. Similar peer pressure—if we let it happen—will soon govern what, where, and how we can drive; where and how we can live; what we can eat; and how often we must exercise. Social media, with its built-in opportunities for social preening and its chorus of social scolds, will only accelerate the process of making us all Romper Room children.

China, which once sought to ban the internet, has now co-opted it—along with aggressive DNA profiling and new technologies like artificially intelligent facial-recognition software—to institute a Social Credit System as a means of public surveillance and control. According to the plan, by 2020 the system will index the social reputation and standing of each of the country’s 1.4 billion citizens, based on personal contacts made, websites visited, opinions published, infractions incurred, and other individual performance variables. People with high ratings and demonstrated social compliance will be rewarded with access to good jobs, financial credit, travel priorities, and favorable living conditions. People with low ratings and recorded infractions will be massively inconvenienced and their futures curtailed.

I’d like to say it can’t happen here, but I don’t know anymore. I can say that, although I try to live a good and proper life, obey the rules, pay my taxes, and not hurt anybody’s feelings, this kind of surveillance and control is out of bounds in a free society. If social media turns into the kind of howling mob predicted by The Circle, then I am out of here. I will take my socially unapproved weapons and move to some socially unacceptable place to join the next social revolution. I will become the ultimate Don’t-Bee.

1. Except for my late wife. I met her just after I had quit smoking, for the last time, and a lucky thing that I did. Her expressed view was “kissing a smoker is like licking a dirty ashtray.”

2. My mother, having trained as a landscape architect, set aside a place at the far edge of the back yard for her compost pile. (Now that I think of it, her mother kept a place like that, too.) Here we carried out and dumped our grass clippings, coffee grounds, vegetable peelings, egg shells, and anything else organic that would decompose into a nice, rich mulch for the garden. But not the raked leaves: those we piled against the curb and ceremoniously burned as an homage to autumn—except you can’t do that anymore because of air pollution laws.

3. I almost became part of this movement professionally. Back in the 1970s, when I was between publishing and technical writing jobs, I answered an ad from a man who had a system for “mining” municipal solid waste. He had studies showing that garbage was the richest ore on Earth, with recoverable percentages of iron and nickel, copper and brass, aluminum, and glass that exceeded the per-ton extraction at most modern mines. After separating these valuable commodities, he proposed pelletizing the paper, organic solids, greases, and oils into a fuel that could be burned at high temperature to generate electricity. Garbage was, in his view, literally a gold mine. The only hitch was that, instead of hauling dirt out of an open pit to a nearby processing plant or smelter, this new kind of ore had to be collected in relatively small amounts by sending a truck house to house. It wasn’t the value of the thing itself that was in question but the small quantities and the logistics involved.

4. Of course, no one harvests old-growth redwoods and pine forests to make paper, not even high-quality book papers and artist’s vellum. As a young child on family boating trips to the upper reaches of the Hudson River, I saw pulp barges coming down to the paper mills around Albany from pulpwood farms in Quebec. Our paper is grown on plantations as a renewable resource, like Christmas trees. The mantra about “saving a tree” is akin to abstaining from eating corn in order to save a cornstalk. We farm these things.

Sunday, January 21, 2018

Hooray for Humanity

White human

A Facebook friend recently posted a Friedrich Nietzsche quote: “The world is beautiful, but has a disease called man.” When I was in college—which is to say when I was very young and naïve—I had a twenty-minute love affair with Nietzsche’s ideas after reading him in a philosophy course.1 Luckily, I quickly came to my senses. Dyspeptic German philosophers don’t do anybody any good. Although I agree with him that the world is beautiful, the second part is horse pucky.

It is a core belief of mine that human beings are the best form of intelligence and the hottest thing going for about three parsecs in any direction from here. Many other species on this planet exhibit intelligence: most species of dolphins, whales, elephants, octopi, and a number of smaller mammals. Hell, even my dog has a basic, recognizable, communicable form of intelligence. Intellectual capacity is not an on-off switch but a spectrum. On that basis, humans are at the top of the heap, the far end of the scale, the smartest of the bunch.

Why? Is that my human chauvinism speaking?2 Not at all. We’re at the top because we are the ones doing the studying and evaluating among all these other species. We are trying to understand dolphin and whale communication systems, decipher their languages, and determine what—if anything—they are communicating. So far, we have a lot of tantalizing ideas, but no obvious consensus. And that’s not because we aren’t really trying, don’t really care, or haven’t put some of our best minds into the study.

For other animals—and I use the term advisedly—we are still gathering data and, generally, reacting with surprise. Elephants form well-ordered societies, have strong familial and interpersonal relationships, and exhibit an artistic sense. Octopi, which along with the other cephalopods are the only mollusks we know with big brains, are clever and almost intuitive. However, all of these attributes are components of human intelligence and social order. So maybe we are anthropomorphizing—that is, seeing and interpreting a human element where none exists—about the nature and skills of these animals after all.

All of these animals have the capacity to react to their environment. They can observe in a limited fashion, analyze to some extent, learn and remember, and move away from danger and toward safety and stability. These are qualities that most other animals—think of the wild panic during a stampede ahead of a brush fire—have only in limited amount and that plants have not at all. But complex as dolphin speech might be, nurturing as elephant mothers might be, and clever as the solitary octopus might be, none of them has developed any kind of technology beyond using sticks to poke at an ant hill or a flat rock upon which to break a seed pod. None of them is going to build a radio or a rocket ship anytime soon.

You can argue that advanced technology needs the right environment and the right bodily morphology. We can make radios because we don’t live in the sea, where the study of electronics is pretty much a lost cause. And we can make sophisticated tools, weapons, and rockets because we have grasping hands, opposable thumbs, and limbs that provide usable amounts of leverage. Dolphins and whales don’t have this. Octopi have dexterity and strength, but nothing as precise as ten fingers working in coordination.

Humans also have self-awareness. We know what we are—at least most of us do, on a basic level—and we recognize ourselves as individual beings separate from each other and from our group. Dolphins and some whales have this: if you strap a funny hat on a dolphin’s sleek head, then provide its tank with a mirror, the animal will go over and regard itself to see how it looks. The dolphin knows itself as unique among others of its kind. But no amount of coaxing can get my dog to see that the “other dog” in the mirror is actually her own reflection. Dogs might have “we” and “us” in their inner vocabulary, but their sense of “I” and “me” is limited to the point of non-existence.

We humans value dolphins, whales, and dogs—indeed, most animals that aren’t trying to eat us—because we think of them as pure souls, without the impulse to break rules, choose wrongful actions, or exhibit variable behaviors including meanness and cruelty. We think that the ability to do evil is the special nature of humanity, and that this calculated ability makes us less than the animals. But in reality, all animals are operating according to their natures and by instinct, not by reasoning. When an elephant goes rogue, or a tiger becomes a man-eater, it is not because the animal has chosen to break a rule or violate some internal code of ethics. It is merely reacting to some environmental factor or hormonal conflict that may be complex but is ultimately natural, not reasoned.

It is the special strength of humans—derived from our ability to see ourselves as separate individuals and reason about our place and our status in the group—that we can have a system of morality, and that we can choose to break with it for non-environmental, non-hormonal, non-natural reasons. We can see another person’s property, understand their rights to it, and still decide that it would be better in our possession. We can see another person take action, think deeply and feel strongly about the implications of that action, and take complementary action ourselves—perhaps supporting, perhaps opposing, sometimes even killing that other person—to effect an outcome we believe to be right and proper.

Human morals don’t come from nature or instinct. They derive from the shadow-play of ideas and meanings, a model of the external world we see around us, which takes place inside our heads. And then this model comes back out into the world, mixes with the models and ideas of other humans around us, and takes shape in the form of customs, laws, courts, and punishments upon which a group of people can agree, under which they can operate, and in obedience to which they may lose their property and even, sometimes, their lives. Dolphins and elephants may have intricate and supportive social structures—so do bees!—but none of them, to our knowledge, participates in deliberative bodies, holds courts to try cases of transgression, or exiles and executes prisoners.

This is not a failing of humans, that we can choose to do wrong and must punish our perpetrators, but instead the sign of a deeper intelligence than any other animal displays. The fact that we can have personal failings, and then feel bad about them, and work to correct them, is a form of strength. We are the animal that thinks about itself, judges itself, and strives to be and do better. We are the animal that can learn and evolve its own behavior, its society, and its morality without first experiencing a genetic mutation that drives our adaptability to the environment or the interplay of our brain chemistry and bodily hormones.

It is popular these days to see primitive man, unsophisticated humans, and people who live in the wilderness as existing in some superior state of being. Jean-Jacques Rousseau believed in the “noble savage,” the human and the social group with no concept of sin, of right and wrong, and so able to live in a state of harmony with each other and with nature. Such a person—who would only be possible in a human body without a functioning concept of self—never existed. Societies living in the distant wilderness, untouched by Western colonialism and Judeo-Christian traditions, are no more pure, polite, loving, and in tune with nature than modern humans living in steel and concrete cities under complex social rules. Tribal clans and hunter-gatherer societies are rife with taboos, tensions, jealousies, and murder, just like any other human association. And any respect they have for nature as a social value comes from the same reasoned sense of self-preservation that drives a modern environmentalist to sort his municipal solid waste and stop dumping his chemicals in the river.

It is also popular to imagine that any other human-scale intelligence which might come to Earth from out among the stars will represent pure souls without the taints of aggression, greed, and anger.3 After all, any species that can cooperate to master the incredible energies needed to drive ships across those vast distances—and not blow themselves up in the process—must be superior to grubby old humankind, right? But again, monocultures, groups that live in perfect harmony, that neither permit nor experience differences among their members, that have no concept of self as different from other, are unlikely to be very creative. Societies advance by having some individuals invent and rebel. Individuals imagine different ways of doing things, question the status quo, and occasionally work against it. Perfect societies are static societies. Static societies do not invent fusion engines and go off on interstellar travels.

Only by the darkness in human nature can you see and value the light. Only by wrong can you identify the right. This is a basic principle. Conflict and compromise among individuals in groups are basic elements of evolution. As it is here on Earth, so it will be out among the stars.

Have human beings sometimes acted barbarously? Have we experienced wars and genocides? Have we been too often careless with our environment and harmed our planet’s beauty? Of course. But along with those who act selfishly, hurtfully, and carelessly, we have people who can observe in broad dimensions, analyze in depth, learn and communicate their findings with sophistication, and move away from danger and toward safety and stability. That we can do this as a group, discussing morality, shaping customs, and writing and enforcing laws, is the special strength of human beings. We can act together by operationalizing that dream-model we make of the world. We can act outside of our basic, instinct- and hormone-driven natures.

We are not a plague on this planet. We are its saviors and shapers.

1. And when I was twelve, I had a brief fascination with Marx’s dictum: “From each according to his abilities, to each according to his needs.” It sounds so good, right, and proper—until you think about, or learn about, how economics actually operates. About human motivations, too. Like the planet Miranda in the movie Serenity, where the atmosphere was suffused with government-supplied G-23 Paxilon Hydrochlorate, a world truly governed by Marx’s principle would simply lie down and die for lack of trying. In the real world, however, it takes a heap of intentional killing to bring Marxism to that point.

2. Well, some chauvinism. I am not part of the current intellectual movement that hates itself, whether because of original sin, white privilege, Western colonialism, or environmental guilt. It is a basic policy of mine not to vote for, side with, or support people who want to see me and mine dead. That’s just common sense.

3. Unless they are a race of brutal conquerors, for whom humans represent some kind of nuisance or vermin to be eliminated, as depicted in alien-encounter movies that also partake of horror movies. Personally, I think anyone who crosses the gap between the stars to land here is going to be just as fascinated by us humans as we will be by them: both of us will have something strange and new to study. Of course, accidents may happen, and misunderstandings and feuds may break out between the two intelligent species. But I don’t think you have to travel a dozen light-years to pick a fight or to find valuable resources and steal them.

Sunday, January 14, 2018

Public Lives and Private Lives

People puppets

A whole slew of public figures—mostly politicians, actors, and journalists, but what other kind of “public” is there anymore?—have recently been brought down by accusations of boorish behavior, inappropriate touching, lewd comments, harassment, and other activities that border on, but usually don’t meet the qualifications for, the Rape of the Sabine Women.

I don’t condone this behavior. I believe that women are not disposable pleasures but fully actualized people who should be treated with respect and courtesy, especially the closer a man gets to them and the more intimate their relationship becomes. If a man wants to interact with a woman at the closest level, it should be as friend, companion, and lover—not an object of pleasure. A man who deals intimately with a woman, entering into a position of responsibility for her, and then mistreats or abandons her is lacking in commitment and a sense of personal honor. A man who willfully injures a woman either verbally or physically shows himself to be a diminished person.

But that said, something has changed in our world having to do with relations between the sexes. The 1960s and the sexual revolution, driven by the convenience of the pill and other forms of birth control, and clouded by a haze of pot smoke, cheapened and trivialized love and commitment in the interest of physical pleasure. Self-restraint, caution, and deeper feelings of caring and responsibility were thrown to the winds. If it felt good, you did it, and thought about the consequences later.

Now, in the 2010s, the diminishment of sex has come full circle. Using sexual activity for self-gratification (on the man’s part) and for personal enhancement and career survival if not advancement (on the woman’s part) has become commonplace. It is the quid pro quo of the entertainment and journalism industries and in power politics. In fact, the circle has turned so far that we have suddenly entered a newly puritanical era. In the space of the past year, dating to the upsets of the 2016 election, but with significant outbreaks from the years before, a man’s sexual history and his boorish behavior have become worse than criminalized. His touches and gropings, his comments, his unwanted moments of closeness—let alone any calculated rapes—have now become grounds for public humiliation, economic execution, and spiritual exile.

When I was growing up, in the decade and a half before the sexual revolution, sex stayed in the bedroom. It was nobody’s business what went on in a private home and behind closed doors. Yes, there were laws about incest, sodomy, and other public taboos, but the cops never broke down the bedroom door looking for infractions between consenting adults. You had to perform the unlawful act in public—or solicit it from a member of the vice squad—in order to get arrested. And although most people weren’t exactly comfortable with homosexual activity and other relationships that were considered vices at the time, they tolerated the situation so long as the acts remained between consenting adults and stayed behind closed doors, along with the rest of human sexual activity.

The press offered public figures a measure of polite silence and turned a blind eye to their personal proclivities and weaknesses. That is a far cry from the howling chorus we have today. Yes, John Kennedy had affairs, most notably with the actress Marilyn Monroe. And who can say that all of those affairs with a popular president were free of the taint of gross or subtle coercion? Kennedy also suffered debilitating back pain for most of his life and probably was dependent on opioid painkillers. Yes, Dwight Eisenhower took a mistress while he was stationed in Europe as Supreme Allied Commander. Yes, Franklin Roosevelt had a mistress while in the White House. Roosevelt was also crippled with polio and confined to a wheelchair that his aides artfully concealed. All of these failings were known to insiders and to the press. But none of them was made explicitly public, because wiser heads decided it was proper to judge the man on his political skills and his public policies, not his private life.

I think the hue and cry started with Bill Clinton.1 His lecheries with staffers, and even with women whom he only briefly encountered, became the stuff of public ridicule in the media. His reaction to questions about these lecheries became the stuff of political obsession by his opponents, culminating in his impeachment on charges of perjury and obstruction of justice, followed by his acquittal in the Senate. Clinton’s private life became a political liability.

Was Bill Clinton a sexual predator? Probably. Are the men who are now being publicly tarred and feathered sexual predators? On a case-by-case basis … probably, many of them. While I think sexual predation is a low form of behavior and personally dishonorable, I also understand how the world works, and how it has always worked. The pattern goes back to the Bronze Age and probably to our earliest hunter-gatherer beginnings—as soon as one man in the tribal band became socially powerful and started being treated as a decision-maker and chief.

In any given social or interpersonal situation, one person will always have the advantage and psychological, if not actual physical, power over another. Usually, the person who wants something puts him- or herself in a subordinate position to, and so becomes vulnerable to, the person who can bestow the gift, benefice, or advancement that is desired. Similarly, the partner in a relationship who loves the most subordinates him- or herself to the person who cares the least. Men in public office who can grant favors, even if it is just proximity to the wheels of government, or those who can make or break careers, like a movie producer, will always have power over those who need favors or want to boost their careers.

In the human world, unfortunately, the women who need a favor, are attracted to power, or have a career to make usually end up putting their sexual persona, or sometimes just their personal submissiveness, at the service of powerful men who can grant those favors and build those careers. This is the disadvantage of being a woman, not just in 20th-century American society, but in all societies going back to hunter-gatherer tribes. Still, it can also be an advantage because, for most adults in our sexually liberated age, contrived intimacy is no longer that big an issue and need have no lasting consequences. A man in similar need of favor or advancement would have to compensate the power broker with a pound of uncut cocaine, a box of Cuban cigars, unlimited personal loyalty, or performance of some legally or morally questionable service.

There is no way to stop this kind of transaction. It is built into human nature, which derives from the social interactions of all primates and all mammals who happen to gather in groups or herds. The big gorilla, the alpha male—or the alpha female—gets what he or she wants. The only way to stop this transactional relationship is to eliminate all positions of power, all distinctions between people, and the basis of all human interactions. Either that, or impose horrendous penalties on all persons engaging in extramarital sexual activity—and enforce existing laws about cocaine and Cuban cigars.

Or, we could return to a culture that holds a man—or a woman—to a standard of personal behavior and expects him or her to follow a code of personal honor. We don’t have to turn a blind eye to the mistresses and the addictions. But we also don’t have to bribe doormen and sift through other people’s diaries looking for infractions, because we can trust that most members of society, even those in positions of political or economic power, are decent, honorable, and living according to such a code.

You can’t enforce decency by civil or ecclesiastical law. But you can make it personally attractive and … perhaps even expected once again.

1. However, the public shaming of Colorado Senator Gary Hart, over an extramarital affair that surfaced during his bid for the presidential nomination in 1988, was a precursor to the outing of Clinton’s follies. Treatment of the Hart affair set a precedent for the next stage of journalistic voyeurism.

Sunday, January 7, 2018

Silent Movie

Scene from Tol’able David

In early December, I saw the movie Tol’able David from 1921 at the San Francisco Silent Film Festival. The screening was presented with bravura accompaniment—think of playing nonstop, with expression, and mindful of what’s happening on the screen, for one hour and thirty-nine minutes—by local ragtime and concert pianist Frederick Hodges. I’m not really a fan of silent movies, but the experience made me think about developments in my own art, which is fiction writing.

In the early silent pictures, you can see the transition in pictorial storytelling from stage plays to modern moviemaking. For one thing, the actors are heavily made up: eye shadow, eyeliner, exaggerated brows, and darkened lips in otherwise pale faces. This recalls stage makeup, where the actor’s facial features are emphasized so as to be distinguishable in the tenth row back and not fade into a white blur from the vantage point of the balcony. A friend explained that the heavy makeup on the silent screen was also required by the film stock of the time. The film’s emulsion was “orthochromatic” and was good at capturing the blue wavelengths of reflected light. Red wavelengths, such as those coming from ruddy faces and the thinner, blood-tinted skin of lips and eyelids, would barely register, rendering those areas dark and blotchy; so the makeup evened out the skin and deliberately emphasized the lips and eyes. It is also no coincidence that the mouth and the eyes are the keys to a person’s facial expression.

Today’s movie makeup works harder to create a realistic scene. Yes, a woman’s lips and eyes get special treatment as elements of modern female beauty—just as a fashion-conscious woman spends time smoothing her skin and highlighting her eyes and lips with cosmetics. But more often the makeup department attached to a set is creating realistic beards for beardless actors, applying mud, blood, and bruises for a fight scene, visualizing alien or zombie features, or making sure that a laborer’s beaded sweat stays in place and shows up well on camera.

Just as silent-film makeup emphasized delicate facial features, the actors also exaggerated their facial expressions and body language, relying heavily on pantomime. And again, this was an adaptation from the stage, where a slight lift of the eyebrow or a millimeter dip of the lips in a frown would go unnoticed by the general audience. And so actors of the silent screen squinted when they looked into the distance, struck their foreheads when expressing surprise, put a fist to their chins when deep in thought, and held their bellies when they laughed. These gestures became a visual code or language. An audience sitting at some distance from the stage, or on the other side of the silver screen, understood them and received the right message.

Silent-film actors would also mouth words of intended dialogue to fit the situation, because you can’t carry the whole story with title cards. For example, in Tol’able David the heroine runs down the street in distress, and you can see her throw back her head and scream “Help me!” If you are paying attention to the action, you don’t need a dialogue card to tell you what she’s saying.

In today’s films, the actor is cautioned to minimize facial expression and gestures. I once saw a wonderful clip in which Oliver Reed gives acting advice to a beginner. He tells a young broadcaster to control his inflections, minimize his accents, and hold his eyes still instead of winking and blinking. The point is, when your face is shot in close-up and rendered forty feet wide on the big screen, and when your voice is amplified throughout the theater with modern electronics, you don’t need histrionics to get your point across.1

Another artifact of the theater stage that carried over to early movies was the point of view through the proscenium arch. The audience of a stage play is out of the action, looking through this arch—sometimes called the “fourth wall” of an interior set—which frames the view. Even with theater-in-the-round, an audience member sees the action only from his or her side of the stage, not from within the action. Early movies adopted this more distant point of view, framing most shots to encompass the middle distance and all of the action, then letting the audience member look here and there on the screen to decide where the most important bit of stage business was taking place. Only occasionally would the movie focus on an actor’s face in reaction to what was going on, or pick out a specific object, like the gun under the bureau in Tol’able David, to make a point to the viewer.

Modern movie directors and cinematographers use the camera like a roving eye, focusing on details that are important to the story, fixing one actor’s face and then another’s, or isolating a bit of action to give it special meaning. Only occasionally—unless the director is Stanley Kubrick—will the camera pull back and show the whole scene as if viewed through the proscenium arch.

And finally, the silent movies only dabbled in special effects. In the black-and-white Tol’able David, for example, most of the action takes place during the day, and here the film is untinted and rather stark. For a scene at night outside a building, the film is sepia-toned. And for a dance in the moonlight, the film takes on a blue-green tint. Similarly, when a character is running across a field, the speed of the film in the camera is noticeably reduced, which speeds up the action and makes the character appear to run faster than normal.2

Today’s movies, of course, use special effects to portray many places and things that used to be shown with actual scenery and stage props. Why go to Rome or build a fantastically expensive set when you can shoot on a sound stage with a green screen and let matte painting or computer-generated imagery (CGI) fill in the imaginative blanks? In the most recent movies, the same digital effects have been applied to people, as when Andy Serkis played Gollum and Benedict Cumberbatch played Smaug in the Tolkien-based movies. The Star Wars franchise has even overlaid CGI on stand-in actors to recreate the late Peter Cushing as Grand Moff Tarkin and a young Carrie Fisher as Princess Leia, without visibly recasting the roles.

By comparing the silent movie era with today’s productions, you can see how the art form moved from stage to screen and then reestablished, or more often invented, its own conventions as cues to guide the audience’s experience.

Similarly, in studying English at college, I had the opportunity to read and examine early novels and compare them with other forms of storytelling. My coursework traced the art of contemporary fiction from the epic poems of the Greek Homer and the Roman Virgil, from the plays of the Athenian stage, and from the Celtic legends and the Norse sagas to medieval “mystery” plays and the Elizabethan theater, and then on to the first books that were actually published to tell a new—or “novel”—fictional story as opposed to one based on biblical stories, myths, and legends.

The first of those, as any English major will tell you, is Samuel Richardson’s Pamela, or Virtue Rewarded, from 1740. The story is told through a series of letters that Pamela sends home from her job on a wealthy man’s estate. This novel was followed by Henry Fielding’s The History of Tom Jones, a Foundling, from 1749. Both books were keen on exploiting lust and sex, thereby establishing a convention—and a reputation—for the full-length book form of storytelling.

Like the earliest stage plays, the poetry of Homer’s Iliad and Odyssey and Virgil’s Aeneid adopted a global, omniscient viewpoint. The reader is carried along by a voice that portrays all of the action equally, from outside the characters’ heads. The text might say that Achilles is sulking or that Odysseus is scheming, but the narration describes only what each man does and what he is supposed to be feeling while he is doing it.

The novel Pamela, being told in letters written in the character’s own voice, introduces the first-person viewpoint. Here a young girl tells only what she herself saw, felt, and did. To introduce the thoughts and motivations of the other people in the story, the author must write of them as speculations and assumptions—or fragments recalled from earlier conversations—in the heroine’s letters. This is a confining way of telling a story, but it has the positive effect of taking us inside the viewpoint character’s head in the way that a third-person narrator cannot convincingly do.

When I started out writing fiction, the third-person, omniscient, godlike narrator was the voice of most novels. In some cases, the narrative voice would affect an accent and a personality, make jokes and observations about the story’s situations or the human condition, and act like a human being telling a story around a campfire or in your parlor. But then, after years of stripping away the narrator’s affectations—mostly under the influence of the journalistic style and of Ernest Hemingway, who wrote simple, direct, unadorned sentences—popular fiction spoke with the voice of an invisible, uncharacterized narrator. The words came out of the air and into the reader’s head.

One problem with this omniscient approach is that the story loses all effective viewpoint. As with an audience viewing the described action and the characters’ reactions through a proscenium arch, the narrator’s viewpoint dances into and out of the heads of various characters during the course of a single scene or dialogue exchange. One character sees something and reflects on it; then another character views the first character’s reaction and thinks about it. The invisible narrator offers physical descriptions of all the characters at once, without any distinguishing flavor or the discernment and judgment particular to one point of view. The observations and revelations of the characters flow back and forth with no control. The viewpoint is both omniscient and omnipresent.

This approach is fine for telling a story fast and completely in one movement. The action takes place once, is over and done, and everything is revealed by the end of the scene. And this can be satisfying for the reader. But it is difficult for such a narrator to conceal anything from the reader except by willful misdirection: the narrator must play tricks to keep the reader in suspense. If that narrative voice chooses to conceal one character’s assumptions or understandings for the moment and only reveal them later, there is no good explanation for why the narration didn’t dip into that head, too, at the appropriate time during the action.

Out of dissatisfaction with this omniscient narrator, I have developed a personal style that some call “indirect discourse,” though it is really a form of tight viewpoint control.3 I believe it more accurately reflects the way people think, act, and speak. So, during a scene, my narration is from a single viewpoint and examines the thoughts and feelings of only one character at a time. Other characters may act and speak inside the scene, but then they are viewed as objects, as things outside, by my viewpoint character. It is much like writing in first person—“I did, I said, I thought”—but using the third-person voice.

If I want to show how another character felt or what he or she thought about the action, I must move to another scene told from within that person’s head. For critical action sequences, I break the story down into multiple short scenes passing from one character to another. Or, more rarely, I pause the timeline of the story and go back through the action, telling what was going on from another person’s point of view. Or, if I want to keep the story moving forward, I incorporate in a later scene the viewpoint character’s recalling and reflecting on the earlier action while he or she now attends to something else.

Tight viewpoint control can work through a single main character, who is followed throughout the book as if it were a first-person story. But then every other character’s thoughts, feelings, and motivations must remain outside, as speculations and assumptions by this viewpoint character. And anything that is unknown to the main character—like who is really on the other side of the door, or how his lover is about to betray him—must come as a surprise to both the character and the reader.

It’s more fun—and better storytelling—to reveal stories through various fixed viewpoints from multiple characters. Then I can play one person off against the others. I can also pick one person or group to represent the detached, expectant viewpoint that parallels the reader’s, serving the same function as the Greek chorus on the Athenian stage. The chorus invokes for the audience the justice of the gods, the mores of current society, or simply the common sense of the average person. And I can have one character plotting an action or pursuing a motivation about which a second character has no notion—and I can let the reader see, anticipate, and wonder how the second character will fare when the trap springs.

Modern storytelling—like the creative focus of the camera lens in modern moviemaking—allows the writer to direct the reader’s attention and understanding in ways both subtle and overt without being obviously or inexplicably manipulative. Like a modern screen actor’s ability to convey emotion through a look in the eye, without pantomime and facial histrionics, it’s just a better, more realistic, more satisfying way of proceeding.

1. When I first met her, my wife had a tendency to raise her voice an octave or more when she became stressed or angry. But this made her sound shrill and nervous, and so she became less convincing when she confronted men in a professional or commercial setting. I suggested—from something I had heard elsewhere—that instead, when she became angry, she should drop her voice and speak very calmly. My wife perfected this technique and could sound downright scary in an argument.

2. The last time I saw this speed-up effect in modern movies—outside of those portraying comic-book superheroes—was in the early James Bond movie From Russia With Love, where the fight scenes were accelerated in an attempt to show the supposedly eerily fast moves of judo and karate.

3. Of course, I am not alone in this. Most modern novels are told with some form of indirect discourse.