Sunday, December 29, 2013

Believer or Seeker?

I took an online survey recently where, in the personal identification section, the pollster asked for my religious affiliation. Usually I enter “atheist,” because I don’t have a personal god I can name or envision or even talk about. But this survey question offered no such category. Along with the usual choices of Christian, Jewish, Muslim, and so on, it listed at the bottom “Seeker.” So I chose that one—and it may have changed my view of the whole religious question.

Some people have made a religion out of their atheism. They don’t believe in any god as described in established religions, but they nonetheless have an organizing principle, a belief system built around it, and usually a rite and a communion. In place of a sentient if invisible and transcendent Presence, they elevate a model of the world they experience and their human response to it based on politics, economics, physics, or some other intellectual pursuit. They center their life on that model and take guidance from its basic premise and its corollaries as faithfully as any Christian or Jew reading the New or Old Testaments or a Muslim reciting the Quran.1

I’m not one of these somewhat militant atheists. I don’t think I’m smarter than the rest of humankind because I’ve “seen through” their various mystery religions. Rather, I feel my body-mind complex lacks some gene or neural network that would let me sense and appreciate the god with whom others so easily commune. I am blind in that dimension.2

So “seeker” seems to fit me. But does that mean I will be satisfied to eventually latch onto one of the major religious doctrines and settle down? I don’t actually expect to “find” what I’m supposed to be seeking. Instead, to me, keeping your mind and your options open is a virtue. While I may entertain many small and varied beliefs (with a lowercase “b”), I reject the idea that intellectual completion requires me to embrace one great belief (with a capital “B”) as my guiding life principle, to which or to whom I submit myself, body and soul, whole and entire, world without end, amen.

In my view, the believer has either created for himself or accepted from others3 a particular model of the world. It satisfies or can be made to fit, through exposition, extension, or explanation, all circumstances. For the religious believer, the presence of a god and his laws, his will, his creation, and his omniscience are the driving forces in the believer’s life and in the universe itself. For the political or economic believer, the principles of democracy, free-market capitalism, or state-sponsored communism are the driving forces of the believer’s personal action and of society itself. God, or Thomas Paine, Adam Smith, or Karl Marx, presents the model, the system, the template for future action and creation.

The seeker, on the other hand, either creates for himself or accepts from others models that are merely provisional. What keeps the seeker from becoming a believer is that, for him, none of the models best describes all actions and occasions. The seeker can see viable alternatives to the proposed model. He or she can also see places and circumstances where the model does not fit, and the explanations, corollaries, and work-arounds that the model maker suggests to cover them are either inadequate or defy logic and common sense.

In my thinking, the positions of the seeker vs. the believer are comparable to the underpinnings of inductive vs. deductive reasoning.

With deductive reasoning, the major premise serves as a rule that has been established and is not in question. It is up to the reasoner then to apply that rule to the data as they are found. Consider the logical syllogism “All geese are white; this bird is a goose; therefore, this bird is white.”4 The major premise about the coloring of geese is not in question. The minor premise about the inner nature of this bird is not in question. What is still to be decided is how the one affects the other. The proof is based on certain knowledge. The deductive reasoner works top-down, from the accepted general rule to the specific circumstance.5

With inductive reasoning, the reverse is true. The major premise is not accepted as a rule but as a supposition that may be tested and found wanting. The minor premise may be taken as a direct observation. And the conclusion is subject to testing and statistical verification. Consider the logical syllogism “Most all-white birds are doves, geese, or swans; this bird is all white; therefore, this bird is probably a dove, goose, or swan.” That’s not a rock-solid, take-it-to-the-bank conclusion but, coupled with other generalities and analyses, it will take you in the right direction.
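
For readers who like to see the two modes laid out mechanically, here is a minimal sketch in Python; the bird names, categories, and functions are invented purely for illustration and are not offered as a serious model of reasoning.

    # Deduction: the rule is settled; we only apply it to the case at hand.
    def deduce_color(species):
        # Major premise, taken as given: all geese are white.
        return "white" if species == "goose" else "unknown"

    # Induction: the rule is a working supposition, open to testing and revision.
    def test_supposition(observed_white_birds):
        supposed_types = {"dove", "goose", "swan"}  # "most all-white birds are..."
        exceptions = sorted(set(observed_white_birds) - supposed_types)
        if exceptions:
            return "revise the supposition to cover: " + ", ".join(exceptions)
        return "supposition survives this round of observation"

    print(deduce_color("goose"))                        # white
    print(test_supposition(["swan", "goose", "dove"]))  # survives
    print(test_supposition(["swan", "sea gull"]))       # must be revised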

I would maintain that most scientists begin with a set of general observations—for example, about the nature of white birds. From that, they form a hypothesis—do all of these birds actually belong to just a few limited, definable types (doves, geese, swans)? And then they set about testing that hypothesis by trying to find one or more cases where it is wrong. If the scientist finds an all-white sea gull, the premise must be expanded. If he or she finds all sorts of all-white birds—without interference from a singular genetic mutation like albinism—then the premise may have to be abandoned.

Scientists on the verge of discovery are always working inductively, like the scientist trying to establish the nature of white birds. Scientists who have previously established the truth of, or at least the preponderance of evidence in support of, a rule or theory or mathematical or physical law are working deductively, like the scientist trying to establish the known attributes of a particular goose. Scientists may also work from deduction to define the limits and fill in the blank spots of the reality they are examining, in order to prepare for the next inductive leap. But true science—the sort that discovers new knowledge rather than categorizes and classifies what is already known—progresses by proceeding from observation to hypothesis to testing. It cannot begin with a major premise that is accepted as true without questioning how that truth was established.

I like to think that the seeker has the better grasp on the true nature of the world. It is necessary to accept some premises as proven either so solidly as to be true or by a preponderance of the evidence as to be likely. Those accepted facts are like the stones that stick up above the surface of the river: they provide stable places for you to put your feet for taking the next step. But if the river surface is all stones, side to side, like a pavement or a highway, then you have nothing more to learn, nothing to observe, hypothesize, and test. If the river is flowing on all sides with no stones at all, then you are left with an unknown surface that may hide a bed only inches deep or holes that go in over your head.

The world—and by that I mean the complexity of all human relationships and actions, observed and potential, the complexities of life on Earth, and possibly out among the stars, the complexity of the human mind which observes and analyzes it, and the complex nature of physical reality extending to the edges of the cosmos, if such there be—is more unknown than known, more watery surface than stable stones. So the safe bet is to limit the number of revealed truths and philosophical positions you’re willing to state positively and to allow for a maximum of doubt ameliorated by a working theory of probability.

It’s less comforting that way. You don’t get to pound the table and declare your beliefs as much as the true believer. You will live on a planet filled with wonders and question marks and under a sky that harbors unknown dangers and delights. But at the end of the day, you will get fewer rude surprises. It’s also more fun to dig in and enjoy the intellectual game of piecing this world together. Because that’s what the human mind is for—or so I believe.6

1. Truth be told, if I wanted to elevate a doctrine of scientific knowledge to the status of a religion, I might choose molecular biology. Its basis, the DNA molecule, with its associated mechanisms of RNA transcription and then translation of amino acids into proteins, is a majestic thing to contemplate. Coupled with the principle of evolution, DNA explains—at least for me—the creation of life on Earth in all its complexity. It provides a pattern—in principle, if not in exactly the same chemistry—for life elsewhere in the universe, too. But then, the molecule and its mechanisms are so complete and ubiquitous on this planet that I have argued elsewhere that they may not have originated here at all. (See, for example, DNA is Everywhere from September 5, 2010.)

2. This is not to say that I lack moral guidance. I tend to favor Buddhist principles, because they guide thought, speech, and action from rational ideals of complementarity and reciprocity, instead of from warnings of eternal judgment by a transcendent father or mother figure. Buddhism also relies on the individual, rather than dogma, to show what is the right thing to do in any particular case. The Eightfold Path has few specifics about what might be “right” or “wrong” action compared to, say, the Old Testament or the Quran, where injunctions on choice and preparation of various foods, styles of hair and dress, choice of pets, larger lifestyle choices, and other commonplaces abound. (See Gautama and the Godfather from August 5, 2012.)

3. Parent, teacher, religious practitioner—priest, pastor, rabbi, imam—political or economic guru, philosopher, literary lion, scientific thinker, or some other external source.

4. Okay, this is a silly—though still valid—example, because you can tell the bird’s coloring at a glance without reference to its genus and species. But consider the syllogism “All vampires feed at night; this creature is a vampire; therefore, it feeds at night.” Eating habits are not readily observable outside of direct action and its consequences; so it is useful to have a rule suggesting potential actions and a definition of the creature before you—then lock and bar your doors at night.

5. Notice that the syllogism does not work with the attributes of the major and minor premises reversed. You cannot say, “All geese are white; this bird is white; therefore, this bird is a goose.” The fallacy lies in the rules governing set theory. While “geese” may be a subset of the category “white birds,” it does not follow that the category “white birds” is limited solely to the type “goose.”
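
The same point can be made with a toy sketch in Python (the species names are invented for illustration): one subset relation holding does not make its reverse hold.

    geese = {"snow goose", "Ross's goose"}
    white_birds = {"snow goose", "Ross's goose", "mute swan", "white dove"}

    print(geese <= white_birds)   # True: every goose in this toy set is a white bird
    print(white_birds <= geese)   # False: not every white bird is a goose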

6. If I did believe in a god, he or she would be subtle, discriminating, and would allow for a world with more than one right answer. Everything I’ve experienced just seems to work that way.

Sunday, December 22, 2013

The Power of Personal Honor

If I have a sense of personal honor, I got it from my parents. From my mother mostly, as I remember it, although my father’s influence was almost as strong. Mother was around more, and it’s her voice I hear in my head.

If I did or said something wrong when I was growing up, she would say, “We don’t do that,” or “We don’t say things like that.” In every case “we” was the family, the closest human association I knew,1 the only club I could join at the time, because I’m talking about a child of four, five, or six years old. The family included not just my parents and my brother, the nuclear family, but stretched to the august heights of my distant grandparents, and to the breadth of my far-flung aunts, uncles, and cousins. When I did or said the wrong thing, I was letting them down, embarrassing them, and separating myself from what they held to be important. The idea that these interpretations of right and wrong might just be my mother’s own rules didn’t occur to me. The invocation of family heritage and watchfulness was the main thing.

My mother’s second line of defense was, “You’re better than that.” This came later, at the age of seven, eight, or nine. When I did or said the wrong thing, I was letting myself down, abusing an image of the proper person that I was supposed to be, the image I was supposed to carry inside me. It was a one-two punch: you’re letting us down, and then you’re letting yourself down.

And I have to say it worked. It socialized the wild demon that lives in every child. It set boundaries and limits. It made me conscious of my actions and speech. It built a conscience inside my head.

My father worked in a different way. He did not so much admonish as demonstrate. He always—as far as I could tell—spoke the truth. He did the right thing. He stepped outside of his comfort zone to preserve and protect what he saw as good and right, as belonging to the natural order.2 And if I asked him why, he would try to explain it all. He showed that he believed in a larger world beyond himself. In that larger world, in the society of which he partook and tried to demonstrate for us boys, telling the truth and keeping your promises were rewarded, law and reciprocity were the natural mechanism and the way that society worked, and the world made sense. The idea that this model of the world might just be my father’s personal view didn’t occur to me. The invocation of rightness and its reasonableness was the main thing.

Believing in this rightness built confidence. It helped me see that the world was a place in which right action and right speech worked to my benefit. This was not necessarily a safe or an easy world to inhabit, but one in which a human being could function. It reassured the timid child that there was a purpose to life. It made me conscious of what the world expected of me.

These weren’t acts of calculation or genius on my parents’ part, but simply the way their fathers and mothers had socialized and taught them in the past. And yes, I also experienced not a few angry shouts and spankings along the way. Still, I remember the admonitions and the demonstrations more.

In Freudian terms, these parental interactions constructed the superego which oversees the ego and suppresses the id. In terms of Transactional Analysis, they became the inner “parent” which instructs the “adult” and corrects the “child.”

Perhaps this sense of honor and restraint can grow naturally in a human being, without the influence of parents and their admonitions and demonstrations. Yet I tend to think not. A conscience, a sense of personal honor, the positive positioning of self in the world, a personal code that describes things one will always do, things one will never do—these are different from instinct. Instinct is what we inherit from our long evolutionary development and share with the animals. Conscience and honor are what we learn from our parents and teachers, and reinforce through our associates and nearby social partners, during our relatively short childhoods and then share with other fully developed human beings.

Perhaps a child can learn a conscience and a sense of honor by bumping along in contact with other children and the über-society of distant, uncaring adults. William Golding’s Lord of the Flies, about children organizing their own society on a desert island, and Leon Bing’s Do or Die, about the socialization of gang life in Los Angeles, suggest otherwise. Children’s view of life, unguided by caring adults, is necessarily fragmentary and self-centered. And what one can learn by brushing up against the larger society’s laws and institutions without patient guidance is usually how to avoid exposure and exploit the cracks, rather than how to navigate the channels and make a contribution.

Conscience and personal honor are part of a two-way mechanism. They compel you to say and do the right things, to keep your promises, to live and perhaps die by your personal code. At the other end of this mechanism are the people who have developed for themselves an unspoken sense of your inner constraints and compulsions. These are your high school and college teachers, corporate supervisors, early mentors, army sergeants, teammates and opponents, lovers and spouses, business associates, competitors, and casual acquaintances. They expect you to tell the truth, turn in your homework, show up for appointments, honor your commitments, fulfill your contracts, do your job, and perhaps go up that hill and die for your country. Or not. If you cannot or will not do these things—or perhaps do not even know that the obligation exists—then these people will know it. They will no longer rely on you and will avoid being in your company.

Conscience and personal honor are the flip side of human trust. As a coin cannot have only one side,3 so trust cannot exist without an anchor point within the person who is asking for, expecting, or receiving another’s trust. That anchor point is adherence to a code that rises above the animal instinct of “I’ll get mine and to hell with you.”

Curiously, people who have a code of honor are also more likely to extend trust to others. Not blindly and not to everyone or just anyone, of course. A person of honor looks for it in others. It’s something about the focus of the eyes, the quality of the smile, the underlying assumptions of casual talk and humor, the outcomes of observed actions. You sense it. You know it. And you put your faith in it. Or not. Those who do not live by their own code of personal honor cannot pass invisibly and seamlessly through society. They leave a trail of disappointment, if not of outright betrayal and disaster. For all their jokes and smiles, they cannot hide what they are. Honest people—people who are honest with themselves and others—will know them and step aside, turn away, find others in whom to place their trust. Or so I believe.

The personal code, the sense of honor, is at the core of every decision, the pivot point upon which every choice turns. It’s buried so deep in the psyche that the person of honor does not actually have to think “This is wrong” or “That’s something I won’t do” or “I shouldn’t say that.” The impulse itself comes wrapped in a red haze, a warning sense, a visceral repulsion. The sense of honor is buried so deep that the person will suffer deprivation and indignity, go to torture and death, rather than violate it. It becomes what the person is and does.

It’s not that someone is always watching, even if you can’t see him, and will tattle on you to authority or to your friends. It’s not that God is always watching and will dispatch you to Hell the moment you’re so careless as to die. It’s not that mother or father is watching—from the other side of the room, from the kitchen, or from somewhere up in Heaven—and will feel shame at your words or deeds. It might start that way, but the obligation to parents usually fades by the end of childhood. No, it’s that you yourself are watching. And this is the one person you cannot evade, cannot successfully lie to, cannot convincingly cheat, and who will know that you are untrue and found wanting.

The power of personal honor is the underpinning of social interaction. If I lose it, I become something like an animal. If too many around me lose it, we slip back not just beyond the teachings of our enlightened ones or the wisdom of the ancients, but beyond the social cohesion of the hunter-gatherer tribe. We become a monkey troop—and that’s a hard place for any individual to exist.

1. The dogs were another category of association, not so demanding but equally filled with responsibility.

2. As I’ve told elsewhere (see Son of a Mechanical Engineer from March 31, 2013), on a trip to Canada when I was very young, my family was out walking on a promenade along a shoreline protected with armor rock. A beautiful mahogany speedboat, totally unattended, had just broken away from the dock at a marina out on the point and was drifting down onto this rocky shore. My father climbed down over the rocks, waded into the water, and held off the boat until someone could get in touch with the marina to send a launch to retrieve it. He broke a finger wrestling with that errant speedboat. This is a memory I’ve carried to this day, and it taught me something about civic duty.

3. I suppose early markers of trade—scratches and dabs of paint on flat stones used for counting sheep and such—had only one side. But by the time coiners were pressing images of kings and gods onto precious metal, both sides were struck. Like the milling around the edge of a coin, it reduced the places where a person of dubious intent might shave a bit of the metal for himself.

Sunday, December 15, 2013

Comedy or Tragedy?

Soon after my novel First Citizen came out, my agent at the time showed it to a friend who was vice president of development at a major film studio. He liked the book immensely but, because it was so vast and sprawling—after all, it is the autobiography, with commentary, of a modern-day dictator patterned on Julius Caesar—he felt the entire novel could not be made into a movie. Instead, he urged me to take the book apart and find the single story line that would make an acceptable screenplay. Because I had recently read and practiced with the Syd Field books1 for another project, I agreed to have a go at it and write the screenplay on spec.

The vice president of development then proceeded to tell me exactly how it should be structured. The first act must show the hero happily living his everyday life. At the end of that act, something happens to destroy that life, strip the hero, and leave him—and you can read “her” throughout here—metaphorically lost and alone, out in the cold. The second act shows the hero struggling to regain what has been taken. “But everything must work against him,” my mentor said. “Not just the villain, but other people, society, the government—the ground itself must literally rise up to oppose him.” The second act ends when the hero discovers the key to winning his fight. The third act shows the hero using that key to succeed and put things right. And, in this final act, the hero “must meet and beat the villain by going mano a mano, like Holmes and Moriarty at Reichenbach Falls.”

That was the sum of his advice. All I had to do then was find the part of my story that fit the formula, and I would have a movie he felt he could produce.2

You can see this structure quite plainly in thousands of movies, mostly of the “action” variety. Sarah Connor in The Terminator has her everyday life, her roommate, and even her sense of normality stolen by a homicidal maniac who isn’t human and gives no reason for his hostility. No one will believe her, not even the police, as she runs from the monster. Not until she teams up with the equally frightening soldier from the future, Kyle Reese, does she begin to learn the truth and build a strategy to win her struggle. And in the end she is left alone to kill the robot with a power press in an automated factory. … I’ll leave it to the reader to recall other examples and dissect them.

Of course, what my would-be mentor was describing is technically a dramatic comedy. This is not because it’s filled with jokes and laughter, but because the hero is an innocent. Bad things may be done to him, but he has no responsibility for setting them in motion. They are simply circumstances that involve him. But in the end he triumphs through his own wit and skill, overcoming these dire and unjustified circumstances. The film executive was suggesting that all movies these days must be structured as such comedies. Nobody wants a downer. Nobody wants a lesson. They want to walk out of the theater feeling good. The hero has to win—and win big—against natural oppression. So tragedies are ruled out as a medium of popular culture.

This is a very limited view of human relationships and exchanges, of human nature, of the human condition, and of what constitutes “feeling good.”

Tragedies are not just sad movies where the hero suffers and dies. That would be simply pathos, where we pity the main character in his misfortune. Or bathos, where the suffering and loss—and the bloodletting—are so overdone that pity becomes sick or maudlin, mere sentimentality. Instead, tragedy must invoke something deeper and more involving in the human spirit. Tragedy shows what is noble in human beings by dispensing with the frivolity of everyday life and cutting to the bone. In tragedy, the main character confronts his own nature, the consequences of his actions, and the bedrock level of the human condition.3

The hero of a comedy only has to be smart and quick, and by the end he may have learned nothing except a one-time trick for beating the present odds. The hero of a tragedy has been to hell, seen the faces of fire and death, learned what is really going on in the deepest, most hidden parts of his life and condition, and come back wiser for the experience—or died in a state of spiritual release and redemption.

The hero of a comedy can remain passive in the story until the villain—or simple bad luck—catches up with him at the end of the first act. The hero of a tragedy usually initiates the action, sometimes long before the story even starts. The hero of a tragedy might be unsuspecting, but he is never an innocent.4 In the best stories, the action that precipitates his problems comes from the hero himself and reflects some essential part of his character. The hero cannot resolve the problem without coming to understand the true nature of long-cherished assumptions and/or relationships as well as past actions and/or reactions. At the end, the hero’s life cannot simply be restored to its former success and smoothness, because some faults cannot be fixed, nor can some complications be disentangled. And the lessons learned from them cannot be unlearned. But even though the hero is financially, physically, or mentally broken, even though he dies, he ends in a better state for knowing the truth rather than a happy fantasy.

In the play Oedipus Rex, the king of Thebes is disgraced and deposed. The end comes not simply because he murdered his father and slept with his mother—although when he learns that he has in fact committed those sins, he gouges out his own eyes. Those sins were all along the subject of prophecies made about Oedipus both as a baby, which caused his biological father Laius to have him exposed on a mountainside, and as a young man, which caused him to flee his adoptive parents. His “tragic flaw” is not even his hot temper, which caused him to kill a quarrelsome old man—whom he did not know was his own father Laius—at a crossroads. His flaw was persistence and a stubborn, perhaps even self-righteous, pursuit of a different truth.

His new kingdom, which he inherited when he married the dowager queen Jocasta—whose husband had died mysteriously shortly before Oedipus came to Thebes—is suffering under a curse because the killer has never been brought to justice. Oedipus determines to pursue the story, find the killer, and lift the curse. When the prophet Tiresias, through whom the gods are working, warns him to leave well enough alone, Oedipus refuses and tracks the mystery to its conclusion and to his own destruction.

In another play in the series, Antigone, the daughter and half-sister of Oedipus comes to grief for much the same reason: stubborn persistence in doing what she thinks is right. Her brother Polynices has been killed in battle while revolting against Thebes. Her uncle Creon has declared that his body will not be given funeral rites and must lie exposed, which means his soul can’t rest in the underworld. Antigone buries him anyway, and Creon punishes her sedition by having her walled up, a dreadful punishment that causes her to hang herself. In grief, Creon’s son Haemon, who was her fiancé, kills himself. Creon’s wife Eurydice, who can’t stand the pressure, also kills herself.

In Hamlet, the prince of Denmark is undone by his own indecision. Having learned that his uncle had killed his father—from his father’s ghost no less—and then taken his mother as queen, Hamlet vows revenge. But taking action is hard for him. He tests his uncle by putting on a play that reenacts the murder, and that’s proof enough of the guilt. Perhaps deranged by grief, perhaps to throw off suspicion, Hamlet becomes or pretends to be insane. During the course of his madness he rejects his fiancée Ophelia, then kills her father by mistake, and she drowns herself in grief. Everyone dies at the end of this play: Ophelia’s brother Laertes, Hamlet’s uncle and mother, and Hamlet himself. It’s supposed to be Shakespeare’s greatest play, but for the life of me I can’t think why.5

In Richard the Third, the newly crowned king of England is defeated and killed in battle. His flaw is his despicable character. To get to the throne, he has betrayed or murdered everyone who stood in his way and a few others besides. When the final battle comes, no one will stand up for him.

The point of these stories is not that everything turns out badly. But in each case the ending is intellectually and emotionally satisfying. It either teaches or confirms something about human nature and the human condition that the audience either needs to learn or already knows. That the pursuit of justice is a risky business, and that bold actions can have irreversible consequences. That honoring the demands of conscience and the gods against the commands of civil authority will also have dire consequences. That indecision and delay can be worse than taking no action at all. That succumbing to the temptations of the easy, obvious path—and so becoming a thoroughgoing, self-serving bastard6—will end badly.

Whether comedy or tragedy, the audience and the reader must feel that the story has been resolved and that the resolution is justified. Comedy does this by everyone winning through the struggle and going home happy. Tragedy does it by having the main character’s loss and suffering explained, so that he or she arrives at a state of balance, justice, and purification, while the audience achieves catharsis—the purification of our temporarily clouded emotions.

Not all successful modern movies are dramatic comedies, however. And not all of the filmed tragedies are simply retreads of Shakespeare and other classic playwrights who worked that side of the aisle.

In the John Sayles movie Lone Star, the young sheriff, who has virtually inherited his father’s position, pursues an old murder and finds out more about his father, his family, and his personal relationships than he knew before. No one is happy at the end of the story, but they have achieved peace and the audience is satisfied.

The Chicago Black Sox scandal, retold in the movie Eight Men Out, another Sayles production, shows the consequences of the players holding a grudge against the team’s owner, trying to get even with this skinflint, and accepting bribes to thwart his ambitions and throw the World Series. None of the guilty players—who are supposed to be playing for love of the game—comes out of the experience whole. Summary justice is done and, while the careers of sympathetic people are wrecked, the audience understands and is satisfied.

Curiously, even the lightest of action flicks can have a tragic element. In the three recent James Bond movies with Daniel Craig in the title role, the villain is happily killed and disaster averted each time. But that’s almost incidental, because Bond—through his own blindness and impulsiveness—loses someone he has come to care about. James Bond, the solid soldier of the Ian Fleming novels, the devil-may-care playboy of a half-century of earlier movies, has become a tragic figure. He is learning the bitter truth about himself and his job. He accepts it coldly but not casually.

Not every story has to end happily. But then, not every one has to end in the bloodbath of a Hamlet or Antigone, either. What audiences and readers look for is a sense of rightness and completion, the feeling that justice has been done, that the character is made whole, and the world is set back to spinning west to east. Do that for the reader every time, and it won’t much matter what kind of structure underpins your story.

1. See his Screenplay: The Foundations of Screenwriting and The Screenwriter’s Workbook.

2. I did find what I thought was a story line, centered around the turning point of the war in Mexico, and wrote a complete screenplay based on the main character’s launching of the second American civil war and his rise to power. But it wasn’t what the studio executive was looking for, and we never talked again. Working on spec—that is, for free—is often like that. Now I take the attitude of the masked executioner in the movie The Four Musketeers: “And two more pistoles for rowing the boat. I’m a headsman, not a sailor.”

3. And if you think tragedies won’t appeal to modern young audiences, remember that Shakespeare wrote a great many of them which played well with the groundlings, the lowest-paying audiences of the period, which were composed mostly of young apprentices and day laborers.

4. Granny Corbin, the main character in First Citizen, certainly isn’t innocent. He is as much or more doing as done to.

5. But then, I don’t have much sympathy with indecisive, hesitant characters. To quote Tuco’s law, “If you’re going to shoot, shoot. Don’t talk.”

6. Darth Vader, please take note.

Sunday, December 8, 2013

The Thomassian Jihad

Recently my daughter-in-law1 posted on Facebook, with horror and disbelief, a link to “5 Reasons to Date a Girl With an Eating Disorder.” She hoped it was meant as humor but feared it was not. I clicked on the site and read the author’s reasons: bulimia improves a woman’s overall looks, makes her less costly to date, makes her fragile and vulnerable, implies she comes from a wealthy family with access to ready money, and makes her better in bed because of that same fragility and vulnerability.

My reaction, posted in a Facebook comment, was: “Seriously, each of these reasons treats a woman as if she were some sort of accessory to the male lifestyle. You might hear such reasoning applied to a new car: looks good, holds its value, low maintenance, good gas mileage, great performance. …” The author of the web posting wasn’t just treating a woman as if she were an object. You can do that simply by looking at a pretty woman in high-styled makeup and a gorgeous dress in the same way that you would look at a beautiful vase. This author was going beyond the casual look of appreciation to categorizing a certain type of human being and his relationship with her in a utilitarian, mechanistic way.

Something in all this struck a sore spot with me, a natural aversion that I have carried since childhood. It comes out whenever I encounter modes of thinking that treat human beings—those wonderfully unique, growing, aspiring, yearning, achieving focuses of potential, who are born out of complexity, half animal, half angel—as simple, utilitarian modules, as undifferentiated nodes in a system, as meat machines.

As stated elsewhere, I’m a great fan of Frank Herbert’s Dune books as an act of imagination and vision.2 However, his far-future, galaxy-spanning society was shaped by a single event in its distant past. After the rise of thinking machines and automation nearly wiped out humanity by coddling the human spirit almost to death, a war took place—the Butlerian Jihad—with the clarion call “Thou shalt not create a machine in the likeness of a human mind.” In the Dune worlds, the surviving human beings have learned to take on the functions of higher-order reasoning and projection that computers had once performed. Human beings were forced to become machines in service to others—Mentats—but with the particularly human characteristics of free will and choice.

Interesting as that backstory was, I personally have no problem with our current technology and its drive toward robotics and cybernetics. I believe advanced machines will facilitate, rather than hobble, the intellectual and emotional power that is innate in human beings. Advanced machines and the science in which they are embedded will enable humans to see the universe in all its complexity, study our bodies and our minds more deeply, achieve greater social goals, solve persistent problems, and free individuals from the daily grind of mechanical work and poverty. I really can’t imagine human beings becoming so swaddled and smothered by over-helpful machines that we give up our human intelligence and the will to go on living.3 But then, we aren’t quite there yet.

Instead, I see us as still living with the legacy of social structures under which human beings have coexisted for millennia with only the simplest of machines: the wheel, the lever, the wedge. In such societies, if there was a wheel, then someone had to be told to turn it; a lever, and someone had to pull on it; a wedge, and someone’s muscles had to push it to split wood or plow a field. For 4,800 years of recorded human history, up until about the 18th century and the invention of the steam engine, human beings—along with draft animals and natural forces like falling water—provided the only motive power in any society. And in many industries still today, from picking peaches to assembling iPhones and iPads, human beings are treated as meat robots whose backs, brains, and hands are chained to the simple machines they operate.

We still have the tendency, even in developed societies, to treat other people as convenient machines. The habit is strongest in the author of the “5 Reasons” website, who sees women as accessories and—to quote from a recent James Bond movie—“disposable pleasures.” But that habit also lurks behind the assumptions of every personnel professional and union organizer who sees factory workers as interchangeable parts. It echoes in the language of every sociologist who studies society as if it were a vast machine, with people functioning as the cogs within it.

And so, with the blindness of the fanatic, the rigidity of the fundamentalist, and the carelessness for consequences of the social reformer, I want to launch my own jihad, the Thomassian Jihad. And my clarion call will be “Thou shalt not reduce a human being to the likeness of a machine.”

Human beings are not machines any more than beautiful women are fragile vases, or flowers and butterflies exist as the basis for pretty pictures. The reality is more complex, mysterious, engaging, and personal. The reality must be granted its power and purpose, its unseen web of relationships, its ability to grow and change. A human being is not a thing but a spark of unknown potential in play with all three phases of time—past, present, and future. A human being belongs only to him- or herself.

The Thomassian Jihad attacks all forms of human ownership, from race and wage slavery to sexual trafficking. It attacks all social and economic theorists, from Plato to Marx, who would reduce human beings to simple, malleable productive or consumptive units and treat the mass of human beings as social clay from which to mold “more perfect” societies. It rejects the modern idea of marketing, which reduces the universe of human desires to simple, predictable “needs” and “demands.” It rejects all methods of productive coercion, from the assembly line to productivity enhancements and time-and-motion studies, which reduce human will and creativity to simple, predictable actions. It rejects all scientific reductionism—even for the purposes of study—from Skinnerian behaviorism to neurological assessments, which treat the complexity of human interactions as simple, stimulus-response mechanisms. It attacks all principles of hierarchical dominance, from traditional views of marriage to modern states and corporations, where one person may take the humanity of another for granted.

Humans are unique and remarkable beings, arising out of complex and irreproducible neural networks, exercising autonomy and free will, able to recall and learn from the past, anticipate and build for the future. We are the first animals—with the possible exceptions of elephants, whales, dolphins, and some of the higher primates—who are able to escape the ever-present now of time.

The Thomassian Jihad transcends all politics of either the right or the left. It declares that human beings are not to be treated as parts of a whole, cogs in a machine, counters in a political game, or units in a larger functioning entity such as a society, a nation, a factory, a church, or a brothel. Humans must be treated individually and their unique qualities, abilities, intentions, and desires taken into consideration in any relationship or exchange.

The Thomassian Jihad envisions a time when any repetitive, unthinking, operationally directed task can be done better, faster, and cheaper by a machine. By using actual machines to do this work, we will free human beings to do what we do best: create, imagine, dream, strive, seek.

The Thomassian Jihad envisions a time when human labor of muscles and minds will be directed toward and invested only in objects of love. It promotes the creativity and imagination of a painter or sculptor to make a never-before-seen image, a composer to make a never-before-heard melody, a writer to make a story within his or her own unique vision of the world, a chef to make a meal to delight the human senses, a cabinetmaker to build a unique piece of furniture that satisfies the human touch and serves a human purpose, or an engineer to investigate a human need or problem and create an actual machine to serve it. Human effort must be prized as a special type of work, above what can be made by rote or by a machine.

The Thomassian Jihad is the struggle for the individual human spirit. Come and heed my call.

1. Truth in advertising: she is actually the wife of my wife’s second cousin. But we helped raise him when he was orphaned as a boy and think of him as our son. What’s thicker than blood? Family!

2. See The Dune Ethos from October 30, 2011.

3. We may all be deeply involved with our iPhones, iPads, and computers and entranced by the connections they enable. But we are simply connecting with people located in another place and time, rather than standing before us. None of us has lost the ability to eat, talk, or move. Human beings used to go into a similar trance when engrossed in a book. Civilization survived.

Sunday, December 1, 2013

Showing vs. Telling

I really want to like Jane Austen. Really. But as charming and amusing as her stories are, I can’t get over her mode of telling them. And that’s the problem—too often she tells them.1 This was all the fashion in the early 19th century, when the novel as a vehicle for storytelling was still in its infancy. But the modern sensibility is shaped by later practices and influences from other art forms, as I will show.

In creative writing classes, they make a big deal about “showing” vs. “telling.” This is sometimes hard for beginning writers to understand. In both cases, the medium is still just words, the verb tense is still in the past, and the pronouns are still he, she, and they—unless the novel is told first person, in which case it’s I and we. None of the bare mechanics of writing actually changes, and yet the feeling is very different.

In short form, the difference between telling and showing is the difference between writing “Roger was fat,” and writing “Roger’s chin rolled over the edge of his collar and the buttons tugged the cloth of his shirt sideways across his belly.” Showing is less economical, requires more words, and takes longer. It slows down the story. But it also paints vivid pictures and pulls the reader deeper into the reality of what is happening.

For the literal-minded, the above passage is still just telling, but with more descriptive detail. We don’t say “fat”; we show what fat looks like. And, if you’re willing to grant the reader a smidgen of sympathy and willingness to play along, writing “fat” can be as good as any number of descriptive jelly rolls to get the point across and keep the story moving forward. So showing must be something deeper, more structured, less obvious—right?

My own personal approach to showing is to embed the narrative in the point of view of a single character, at least for the unit of time, place, or action required to present what I think of as a single “scene.”2 This disciplines my writing. I have to work within a framework of what that character can see and hear, know and remember. I adapt the story to that character’s perceptions, assumptions, likes and dislikes, and level of understanding—that is, I place the reader’s awareness inside the character’s head. I try to use the language and experience of the character as much as possible, so that a lawyer will be thinking in terms of legal relationships and positioning, while an engineer will think in terms of mechanical relationships and possibilities.3

This discipline obviously limits the scope of my storytelling. So, to get a rounded view of what’s actually going on, I use different characters who know different parts of the story. Their views may be in conflict, and their intentions and assumptions usually are. I trust the reader to weigh the evidence and decide who—if anyone—is closest to the truth.4

Because I limit the narrative to a single person’s perceptions, feelings, and memories, my writing tends to be more showing than telling, even though the mechanics of the language are the same. Dry facts and past findings may be presented in a scene’s first paragraph or two, to set up the reader and character for what is to come. But even there, I try to give these bits of background emotional weight and actively perceived implications.

The opposite of this point-of-view writing is the “omniscient narrator,” who sees all, knows all, tells all, hovers over the story, and stands outside time—and the story’s emotional implications—rather like a god. The omniscient narrator has the viewpoint of the audience in an old-fashioned stage play. There everything is shown and known, open to inspection, and perceived by everyone on stage during the scene. Even private intentions and realizations are delivered through the technique of an actor putting a hand against his face and speaking an “aside.”

Writing as a medium for storytelling only started to achieve universal appeal with the advent of Gutenberg and movable type in the 15th century. As soon as printed books became widely available, people acquired the impulse toward universal literacy and private reading for pleasure. But before that, stories were told by bards and troubadours after dinner, or presented by players on a stage. This was the medium for 2,500 years or more, going back to Greek and Roman playwrights, and before them to the mystery plays of shamans and priests. Readers of early novels, like those of Jane Austen, were given the omniscient viewpoint of an audience sitting beyond the stage’s “fourth wall,” outside the proscenium arch.

Writing in the 20th century was strongly influenced by the competing medium of film, just as earlier reading for pleasure was influenced by the theater. I believe the modern approaches to narration—and to showing vs. telling—have followed the growth in sophistication of the movies.

Early movies used a stationary camera that approximated the field of view and viewpoint of an audience sitting outside the arch. The action took place as on a stage, at a distance, with an implied omniscience. Soon, however, directors were engaging in occasional closeups on one character or another to show particular nuances of intense emotion and dramatic reaction. In the old silent movies, sudden bursts of realization or reflection that might have been carried in a theatrical aside—as well as dialogue content too complicated to be communicated by facial expression and gesture alone—would be handled with a bit of text written on a card and inserted into the action. But the viewpoint was still outside the action, with the audience.

That sort of moviemaking feels dated to modern audiences. Now the camera moves around, pointing its focus like the attention of a character in the action. The microphone is directional, picking up and emphasizing some sounds, muting others. The point of view does not necessarily track just one character, and it certainly does not remain fixed inside his or her eye sockets, as does my fictional narration, because actors are paid to be seen, not become invisible spectators. But the function of the roving camera is to direct the audience’s attention, and if one character notices the bulge of a gun in another’s jacket, the camera will zoom in, isolate the perception, and set up that part of the story.

It has become a commonplace of moviemaking—good writers take note—that if the camera shows any setting as more than just background scenery or any object as more than just an everyday prop, then that scenery or prop will have acquired meaning, gained importance to the story, and will play a part later on. The roving camera has become a method of showing rather than telling. It replaces the old-fashioned way of communicating, where one character says to another, “Say, that’s a nice gun you’ve got there,” or “I didn’t know the old mill was on this road.”

Modern storytelling adopts this cinematic focus, and point-of-view writing is highly adaptable to such a focus. The omniscient narrator does not write “Cynthia had a gun and she knew how to use it.” Instead, Roger sees the gun in Cynthia’s hand and—by the firmness or wobble of her grip, if not from his personal knowledge of her skill level as a gun enthusiast and hunter—gauges how likely she is to fire it. That’s showing rather than telling.

This is not at all to say that writing a modern novel is like writing a screenplay. Studying screenplays as an art form can be useful for any writer as one more tool in the kit—and in case his or her novel gets picked up in Hollywood. But the form itself is very different from writing for a reader’s pleasure. Screenplays are technical documents, instructions to the director, actors, and backstage departments. Screenplays have a strict format, with indents that distinguish dialogue from action and that scale the duration of both, so that one page of script roughly equates to one minute of screen time. Screenplays are barren of description. Dialogue is unadorned, with just enough words to convey useful information with a hint of character. Exact phrasing and delivery are left to the actor and director. Action is terse, with just enough detail to guide the actor’s movements and changes of focus—as well as providing the location manager, property master, and wardrobe mistress with any necessary details. “Roger was fat” is not for the benefit of an imaginative reader but a cue to the casting department.5

Still, the cinematic approach to storytelling informs and advances the art of writing stories and novels. Good writing, I believe, has become more visual, aural, and sensual. Explanations of relationships have become demonstrations of the character’s embedded love or hate, trust or wariness. Action has become graphic and sequential, focusing on step-by-step development rather than after-the-fact summations. The reader experiences the story in real-time action rather than just hearing it described in generalities such as a bard might use.6

The art of writing is much more than just telling a story. Instead, you must direct the reader’s awareness, attention, expectations, and emotions. You must reveal the nuances of character and the specifics of the world they inhabit. You must make the reader live in that world and care about what happens to those people. And to do that, you must show what’s going on rather than just talk about it.

1. If you don’t believe me, go back and read the first chapter of Sense and Sensibility. It’s a cross between a genealogy and a legal document to show the entanglement of an estate in order to leave a mother and three daughters dependent and destitute. A playwright would at least have communicated this history, in more compact form, through a breakfast-room conversation somewhere in the first act. A modern novelist would have shown the moment of discovery, when the will is read—which I believe is somewhat like the opening of the Emma Thompson screenplay in the 1995 movie version. The disembodied narration of the actual novel is unfashionably antique.

2. See Writing for Point of View from April 22, 2012.

3. In a recent conversation, one of my friends who had just finished reading The Professor’s Mistress chided me for using “propeller” to refer to the motive power of the steamboat Galatea when everyone knows the nautical term is “screw.” Yes, of course, except that scene was written from the viewpoint of William Henry. He is a classics scholar and a landsman who falls in love with the boat for her cutter-like bow, her mahogany woodwork, and her link to a more gracious past. Before this, the closest he ever came to yachting was a scholarly article in the debate about ancient warships—trireme vs. quinquereme. So he would use the more formal term “propeller.”

4. To see this played out, take a look at my novel First Citizen, where two first-person narrators tell two slightly different versions of the same story.

5. For more on the art of writing screenplays, see Syd Field’s excellent manuals Screenplay: The Foundations of Screenwriting and The Screenwriter’s Workbook.

6. Think of how using opening screenfuls of background narration and “voiceovers” has gone out of fashion in modern movies. The most recent productions don’t even use place names superimposed on a wide-angle shot to identify the location, but instead the director chooses that shot so that even the least informed viewer can identify Washington, D.C., Paris, or London. And if the place is Anytown, U.S.A., it may not even be identified but rather shown for what it is by familiar features and actions.
       Similarly, the film opens and proceeds without a theatrical dramatis personae. Movies in the 1930s and ’40s would give the credits up front, and this served the dual purpose of identifying the major characters for the audience as well as acknowledging the actors. Nowadays, unless a character is important enough to be mentioned subsequently in dialogue, he or she may never be identified by name at all but only by face, character type, and a role which the audience must intuitively grasp.

Sunday, November 24, 2013

Quality in a Writer

Recently a Facebook friend1 posed the question of whether quality still mattered in the publishing market. Considering the number of unremarkable, badly imagined, and badly edited books that have made it into prominence—through both traditional paper publishing and independent epublishing—it’s a good question. My response was and is: “Quality of thought, imagination, and expression is all a writer has to sell.”

The quality of a writer’s thinking is, for me, the most important aspect of what he or she puts on the page. Thinking starts with having access to robust and interesting facts, which shape the essence of the writer’s views and gauge how close he or she comes to describing some kind of truth.

I say “some kind” of truth, because facts and truth both tend to be slippery things. Some facts are concrete and checkable, like who won the World Series in 1938. Get that wrong in an article or story, and it becomes difficult to climb out of the hole your error has dug for you. But many facts represent views of the world and the nature of reality that have not yet been proven or may have no undisputed referent. Every one of us is constantly building mental models of reality covering aspects of life such as human relationships, politics and economics, science and religion, and the various art forms.2 We refer to these models when we make decisions about what might be the right or wrong thing to do, what is fair and just, whom we will trust or distrust, whom we love or hate, and what the future will bring.

When a writer uses a fact with a concrete referent—like the Yankees having won the World Series in ’38—he or she had better check and confirm it before publication, because any reader can easily go to sources and check for himself. But when a fact has unclear status—like how much carbon dioxide actually contributes to global warming3—then the writer must choose carefully and buffer his or her statements with qualifying words that acknowledge uncertainty, like “might,” “could,” “probably,” and “possibly.”4 The structure of a piece of writing changes with the quality of the facts supporting it.

Having robust, influential facts on hand implies that the writer has read widely, considered deeply, weighed one view of reality against possible others, and made honest, logical, defensible choices. The depth of a writer’s information determines the versatility and viability of his or her writing.

But facts are only the starting point of a writer’s thought; more important is how he or she uses them. Every piece of writing must have an argument—in the older sense of the word, as “a coherent series of reasons, statements, or facts intended to support or establish a point of view.”5 The argument depends on linking those reasons or facts in some logical order. One can argue from the particular to the general, or from the general to the particular, or chronologically, or by order of importance or weight, ascending or descending. The choice of logical structure depends on the nature of the argument being made and how best to overcome the reader’s natural resistance toward, or hesitancy about, the writer’s point of view or the article’s thesis. Without such logical structuring, the writing will be incoherent and the reader will not be persuaded.

These considerations obviously apply to nonfiction, where the whole purpose of the work is to engage the reader in learning something new and interesting, or agreeing with an established viewpoint or proposition, or taking some kind of action. But facts and their logical structure apply equally to fiction. Here the structure is most likely chronological—this happens, then that happens—but the writer must still be aware of how the action comes together. The writer and the characters he or she creates must function in all the tenses: what is happening now, what has happened in the past and is either known or unknown to the character, and what will happen in the future through the character’s hopes, guesses, and expectations. Where cause and effect are not clear, the writer must think through the passage from the reader’s point of view and supply any links or supporting information necessary to making the leap from what the character did or said to what might be the appropriate or expected outcome.

In the best fiction writing—whose practice I strenuously advocate—the writer must not only think like the reader, as he or she follows and absorbs the story, but also think like the character from whose point of view the story is told, as he or she experiences the action, makes decisions about it, and suffers the outcome. In my fiction, I try to stay entirely within one point of view throughout any one scene, even when other characters who have functioned as the viewpoint in other scenes may be present.6 This limits the amount and type of information I can reveal through one character’s eyes, ears, and understanding. I have no “omniscient narrator” standing by to explain the action and fill in the blanks concerning what other characters might know and remember. So, for me, fiction writing is a bit like organized schizophrenia: pushing the story forward while being simultaneously mindful of what the reader can absorb and what the viewpoint character can know and think.

The depth of the writer’s understanding and the subtlety with which he or she handles all these variables contribute to the quality of thought. The writer sits in the middle of the writing process and its choices like an airplane pilot sitting in a cockpit full of levers, switches, dials, and gauges. The skill with which he or she manipulates these tools and devices determines the success or failure of the flight … or the writing.

Quality of imagination is similar to quality of thought in that it reflects the amount of effort the writer has put into obtaining knowledge, reading in an informed manner, and sensing and reflecting on the world around us. Imagination goes beyond the structure of thought and argument to the quality of perception and the structure of dreams.

The writer of nonfiction must live not only in the closed world of his or her reasoned argument but also in the larger universe of opposing viewpoints, negating facts and contrary data, competing choices, and random occurrences. The writer of fiction must live not only within the story he or she wants to tell but also in the larger world where actions and choices by the characters have other possible outcomes, good and bad intentions and consequences, and probable or improbable occurrences. The writer must be aware of and take into consideration these larger worlds, just as every one of us must be aware of and take into consideration the actions of others and the accidents of nature as we go about our daily business. Perception of and allowance for other possibilities give a piece of writing depth and texture.

The fiction writer has special burdens in terms of imagination. With fiction, the writer is creating a “pocket universe”—a complete and functioning world that can fit between the covers of a book, between the two palms of the reader’s hands, and within the finite number of words being used.7 And yet the world of the story must suggest so much more than is simply stated on the page. This invented world may or may not conform in shape, color, and size to the world the reader inhabits outside the book. It may be either exactly the historical world we know or an alien planet inhabited by an alien race with subtle and strange customs, thoughts, and assumptions. In either case, the writer must provide enough detail to make this new world come alive in the reader’s mind but without using so much detail that the story bogs down in mere perception and visual, aural, and tactile experience. Oh, and any concept or physical detail which the writer describes too lovingly—like the gun in the main character’s pocket—must at some point in the story be made real and used in the action.

The writer following the story’s thread is like a pilot flying a single route. Yet it’s a poor pilot who never takes his eyes off the instruments and the alignment of navigational beacons to see what might lie to left or right outside the canopy. Mountains and their updrafts, thunderheads and their turbulence all will shape the choice of route. The skill with which the pilot perceives these dangers, and with which the writer creates this just-glimpsed world of possibilities, will determine the success or failure of the flight or the writing.

Quality of expression is similar to quality of imagination in that it draws on the writer’s sense of other possibilities from the one he or she has in mind. A writer must use language flexibly, like a fencing master who knows all the possible attacks he or she might use against an opponent, as well as all the possible parries to use against a specific attack, and chooses from among them the appropriate way to start or end the exchange. Varying attacks and parries will keep the fencing match fresh and surprising and keep the opponent just slightly off balance. Similarly, a reader who can see where each dull sentence is coming from and the point it’s plodding toward making will quickly grow bored. The writer must vary the pace of the article or story, the structure and length of its sentences, the speed with which new ideas arise, so as to keep the reader refreshed, engaged, and just slightly off balance—so that the reader keeps guessing at the writer’s ultimate intent and wants to move forward.

The writer weighs each word and phrase for its ability to convey an exact meaning without bogging the reader down with repeated questions or sending him or her on trips to the dictionary. Quality requires the writer to use verbs that paint pictures of action and nouns that make each object glow with its own nimbus of recognition. Economy requires that the writer pare away excess words until those he or she does use vibrate with strength and tension. But mindfulness of a reader’s groping toward understanding sometimes requires the writer to use structures that slow the pace of revelation, so that key thoughts stand out in high relief.

Like the pilot of an airplane, the writer takes off fast, with flaps fully extended and landing gear retracting as soon as the wheels leave the pavement. And the point of the article or story comes home with precision, like the tiny but definite jolt of impact as the wheels touch down and spin up to speed. A good start and a good ending are worth half the points of the actual trip in both flying and writing.

But none of these qualities—of thought, imagination, or expression—is subject to anything so obvious as a set of rules.8 The skilled writer, like the skilled pilot, operates as much by instinct and inner sensibility as by experience of what has worked, or didn’t, in the past. That’s why every writer and every piece of writing is different. That’s why writing is an art, not a science, and never subject to formula.

1. Duncan Long, who writes science fiction and is a prolific artist whose work includes book covers. His stuff has quality.

2. Constant readers will note that the latter three are also my favorite blog topics.

3. Everyone would agree CO2 works as a greenhouse gas, passing incoming full-spectrum solar radiation through the atmosphere but blocking a certain fraction of the outgoing infrared which is radiated from Earth’s surface. However, other gases—chiefly water vapor and methane—also work that way and usually with stronger effect. Given everything else that’s going on with the planet—sunspots and the flux in solar radiation, absorption of CO2 by green plants, natural cycles and variations based on ocean currents and the mechanics of Earth’s orbit—the exact contribution of human-made CO2 in relation to global temperatures can only be studied with computer models. Computer models, like the imaginary models we build in our heads, depend on the depth and complexity of the relationships traced and the data that are used as a starting point. Computer models may represent good scientific understanding, but they are by their nature incomplete and arbitrary—otherwise they would have to track every molecule of air and water in real time, which is impossible. A computer model, like my understanding of my relationships with other people, is an educated guess and not a proven or even provable fact.

4. Of course, an intermediate classification of facts exists. It includes approximations, conjectures, and fictitious examples that cannot be checked but “feel right” to the average reader. An example would be the claim “There are eight million stories in the naked city.” Well, maybe 7,990,000 … or perhaps 8,215,000. Populations are fluid and change all the time. The point does not suffer for presenting a rounding error.

5. From my Merriam-Webster Unabridged Dictionary. Curiously, the word “argument” functions similarly in mathematics: “one of the independent variables upon whose value that of a function depends.” Garbage arguments in, garbage value out.

6. By playing off one character’s understanding and perceptions against those of other characters, I can let the reader triangulate on the truth and eventually come to realize what’s actually happening in the story. In my personal view, life is like that: a synthesis of what we know, what we don’t, and what others may know or perceive. It’s a mystery … and a tangle!

7. See A World Between Two Palms from September 8, 2013.

8. See Rules for Writers from October 6, 2013.

Sunday, November 17, 2013

Art and Irony

I once heard Alexander Calder’s large, rounded, liquid mobiles criticized as “lacking in irony.” Presumably, the critic felt that Calder wasn’t making enough of a point about large, rounded, liquid shapes and their relationship to society, social justice, political awareness, green conservation, or something else de rigueur. Simply being graceful, serene, and soothing to the soul apparently wasn’t sufficient to qualify as art.

Occasionally, I find what I call “cleavage issues” or “cleavage questions”: notions that, like a diamond cutter finding the fracture plane in a stone, shed important light on facets of the human condition.1 I believe the question of irony as a basis for art may be one of these.

Art used to be representational. When our earliest ancestors painted bison and horses on the cave walls at Lascaux, they were depicting what they saw. Presumably, they liked and admired these animals and wanted the people looking at the paintings to feel the same way. They may also have been making a propitiatory gesture with the paintings: sending a message in red ochre to some deity somewhere, “These are the things your people need. Please make more of them. Please fill the plains with these animals.”

The representational and propitiatory natures of art came down to us through Classical times to the Renaissance. When the sculptor Phidias made a gigantic statue of Athena for the Parthenon, the message was, “Our city is named for a great and wise goddess who watches over us,” as well as, “Goddess, please remember and protect your people!” When he carved the metopes depicting the battle between the Lapiths and the Centaurs, he was portraying a myth known to the viewers. Similarly, when Michelangelo and the rest of the Renaissance sculptors and painters depicted a madonna and child, they were celebrating motherhood and telling a story of the Christ’s humanity that was familiar to the viewers. It was both representational—“Isn’t this a lovely mother?”—as well as propitiatory—“Madonna, watch over us as you watched over your child.”

Even before the Renaissance, western art in the form of illuminated manuscripts and stained glass windows told the story of religion. That was about the only story people had time to hear, and the work was funded by a church that had the excess cash to pay artists and artisans to tell it. Yes, there were occasional grim warnings—those images of demons tormenting the sinners in the aftermath of Judgment Day or the agonies of saints which proved their blessed nature—but these were meant to be instructive.

Sometime after the Protestant Reformation, a sense of edginess and wise-guy humor came into religious painting. Pieter Brueghel the Elder painted village scenes and landscapes, but he also painted and drew comic allegories such as The Fight Between Carnival and Lent and Fight of the Money Bags and Strong Boxes. He was using satire to comment on social values. After that, you might still get artists whose work was sincerely representational—many of the French Impressionists come to mind here—but the art world would start to call them “naïve.”

Satire is an old literary device, going back at least to the Greeks and Romans. The playwright or poet will focus on some vice or supposed virtue or other human tendency and expose it to ridicule, often with a sense of suggesting how to avoid or correct it. Satire uses laughter to bring home the point. Irony is a closely related device, again drawn from literature. In Plato’s Dialogues, Socrates would feign ignorance of some question or proposition—using “Socratic irony”—in order to get his conversational foils to commit to a position and then examine it through questioning. In literature, irony means “the thing not being what it seems.” When Oliver Hardy tells Stan Laurel, “Isn’t this a nice kettle of fish,” he means the exact opposite. The juxtaposition of words saying one thing and meaning another invokes the suddenly perceived incongruity, the dissonance, the unexpected lurch which is the basis of all jokes and most humor.

As I said above, this is a literary device, played out in words. As applied to the visual art forms, irony becomes trickier. Think of Rodin’s She Who Was the Helmet Maker’s Once-Beautiful Wife. The sculpture shows an old woman borne down with age, tired, and presumably near death. Rodin is making a comment on the transitoriness of beauty by showing the viewer something ugly. Through her we confront the passage of time and our own mortality. He does something similar with The Fallen Caryatid, showing a strong young woman crushed by her burdens. These are superficially ugly images that, underneath and with reflection, show something noble and enduring in the human spirit. That’s the work of a master.

Somewhere along the road from Pieter Brueghel to modern art, the gentle, sober, and astounding irony of Auguste Rodin subsided into rude satire and ugly mocking. I remember seeing an exhibition of student artwork when I was an undergraduate at Penn State. To me, one of the most memorable pictures—and not for its beauty—showed a life-size, seated nude wearing a surgical mask and a look of pure agony in her eyes because she was stuck all over with pins. What stayed with me about the painting was that I had no idea what the artist was trying to represent or even comment on. “Does that hurt?” comes to mind as possible meaning. “Isn’t this just awful?” is a close second. But the difference between this image and, say, the martyrdom of an early saint is that the agony being portrayed had no justification, related to no historical precedent, was not the result of any kind of social injustice, war, or overt malevolence—other than the artist’s own hatred of the subject—and was not apparently meant to be instructive. What did the artist want me to do? I turned away in confusion.

Much of modern art—not all, but a depressingly large amount—has this same vague or intentional bitterness. It’s not meant ironically but is simply mean-spirited. “Doesn’t life suck?” It doesn’t seem to mean anything except, “I dare you to look away. You’re a coward, a bourgeois, a naïf, a child if you look away.” If there’s an underlying message, it’s not ironic but simply rude. Think of the Andres Serrano photograph Piss Christ, showing a crucifix dipped in urine. The artist may be making a statement about Christianity—in which case he’s resorting to infantile gestures. The deepest meaning of the work seems to be “I can offend you—and you’re a fool who’s not in on my joke if you get offended.”

Too much of the kind of art that’s meant to be ironic seems to me just belittling, snide, intentionally offensive, and cruel, with no deeper meaning than “I can laugh at you, and you out there can’t do a thing about it.”

I’d like to be charitable and see this kind of artwork as a psychological cry for help on the artist’s part. I’m afraid, however, that the artist who could conceive of these works would find my pity offensive. And who am I to impose my values on another who is so sure of him- or herself as to commit such atrocities to film, canvas, or clay?

I’m content to see artistic “irony” as simply a fracture plane. On the one side is the artist who photographs, paints, or sculpts because he or she has found something beautiful, interesting, strikingly instructive, transcendent, or even merely whimsical and, like the voice in The Book of Revelation, calls to us, “Come and see!” On the other side are people with an artistic bent but not much vision, who don’t much like their life or the world they were born into, and who can only comment, “Doesn’t life suck?”

I’ll spend my money on the former kind of artwork. More importantly, I’ll spend my time trying to understand the underlying revelation. But for the latter, not one penny!

Sunday, November 10, 2013

The Labor Theory of Value

For the past year or two, I’ve been trying to define why I’m so bothered by much of the economic thinking on the leftward side of the aisle. In particular, I revolt at the concept of the economy as simply a great pie—that is, if my slice is bigger, then yours must necessarily be smaller. For a while now, I have championed the concept that an economy is more like an ecology. Activity begets activity, creates more niches and opportunities, and expands the base of wealth, in the same way that the canopy of a rain forest is more productive—creates a bigger pie—than the sandy floor of a desert.1

The driver of the rain forest—the foundation for all biological activity, there and in the desert as well—is sunlight, which is captured through photosynthesis in green plants as carbohydrates and then spread throughout the ecology as food energy. In this analogy, the driver of the economy is human energy, which is captured both as the direct effort of making products and offering services and through subtler channels such as delayed gratification and future mindfulness. By delaying the immediate spending of his or her earned energy, a person may invest for greater productivity in the future, or make loans through savings accounts, bond purchases, and other instruments to share in the productivity of others. As carbohydrates are the solid form of captured sunlight, so money is the solidified form of human energy.

However, this treads awfully close to the “labor theory of value,” most often attributed to Marx. The theory says that the value of a commodity can only be objectively measured by the number of hours of human labor needed to produce it. If shoes take twice as long to make as gloves, then shoes are more valuable than gloves. This aspect of the analogy to a rain forest troubles me.

Shoes and gloves aside, the labor theory of value is easy enough to refute on the face of it. Let’s say our village has two cobblers. One is skilled and can make a pair of shoes in about three hours, with precise cuts and strong stitches. One is less skilled and needs six hours to make a pair of shoes, with sloppy cuts and weak stitches. The first cobbler's shoes will last you several years. The second's will fall apart in six months. But according to the labor theory of value, the second cobbler's shoes should be worth twice as much as the first's. Uh-huh!

A Marxist might reply that I’m confusing the particular with the general, that on the whole, or on average, or by some other leveling device, we should look at the output of cobblers as a class and not the skill, effort, and manual inputs of individual cobblers. This is certainly the claim of a unionized labor force, that only time in the job classification—which is supposed to equate to greater experience and skill—has any meaning, and that in all other respects one worker’s output is equivalent to any other’s of the same classification.

But I adhere to market principles. The value of a commodity or service is related, not to what the provider put into it, but to what the consumer appreciates about it. In a free market, if people like what they’re buying—for any reason, frivolous or not—they will pay more for it. The reason can be based on real perception, like shoes with good cuts and tight stitching, or on imagined qualities, like the cachet that comes with a stylish brand or label. If they don’t like what they’re buying—for any reason, such as wrong fit, wrong function, wrong color … wrong smell—they will pay less, or only pay the asking price under duress, or refuse to buy at all.2

In dealing with commodities that cannot be as easily differentiated by individual elements of style and quality as shoes or gloves—here we’re talking about graded and fungible commodities like oil or wheat3—or commodities that people must have to survive, issues of supply and demand dictate value. If a large supply of oil or wheat comes on the market, no matter its quality as a particular product, people will value it less and the price will go down, even if they desperately need it. If oil or wheat is scarce in the market, then people will value it more and the price will go up, even if they complain that they’re paying too much.

This is not just a nice theory about economics, created by a wise man sitting in a cozy library, but simply the way human beings in large masses with no reason to love or trust one another actually work. What you can get for your effort in making a pair of shoes—or gloves, or finding oil, or growing wheat … or writing a novel—depends on the needs and wants of the consumer. In the transaction, it’s irrelevant how hard you worked on getting or making the product, or in offering the service, except in your willingness to let it go and do the work at the market price. If your needs as a producer cannot be met at that price, then there is no sale; you take your wares and go home. You cannot force the consumer to pay more for your labor.4

So, does the rain forest analogy still stand up? I believe it does.

In the forest canopy, not all carbohydrates are created equal. Some plants put intense effort into creating fruits with interesting flavors and high sugar content, which then attract birds and animals that will eat the fruit, consume the seeds, and spread them far and wide. On the other hand, some plants don’t taste very good and will only be eaten in extreme need or by voracious and unpicky eaters.5 But the effort the plant puts into manufacturing the flavorful sugar is not the driver of the transaction; the bird or animal must be genuinely attracted. The value placed upon the carbohydrate comes from the consumer, not the producer. And in fashion similar to a free market, if it’s been a good year, with much sun and rain bringing forth lots of berries, the birds will be picky about which ones they eat, taking only the fattest and juiciest. If it’s been a poor year, they will eat even the smallest, most shriveled fruits. Value is related to scarcity as much as to taste, but still not to the production cost.

Value, utility, and necessity—like beauty—are in the eye of the beholder and consumer. And this is so in terms of both the ecology and the economy.

1. See The Economy as an Ecology from November 14, 2011.

2. I once met a man who had spent twenty years writing an epic poem about the Spanish conquest of Latin America. It was 700 pages of iambic tetrameter rhyming couplets. “For they had come upon this shore/To see what fate had put in store.” All those years of labor had not made his great poem either readable or publishable.

3. “Fungible” is a fancy word for “all the same and nothing to choose from.” When a commodity or service is fungible, you don’t care which quart of oil out of a barrel, peck of wheat out of a bushel, kilowatt-hour out of a generator—or fry cook at McDonald’s out of a low-skill labor force—you get in the transaction. One’s as good as another.

4. Well, not at the point of sale anyway. Of course, producers can work on people’s needs, wants, and desires through their belief systems. This is branding, and it can make a Louis Vuitton handbag worth not just a few dollars more, but hundreds or thousands of dollars more, than a bag of similar capacity made with similar materials. The same goes for cars and other luxury goods, and even for products as ephemeral as food prepared by a famous chef or at a popular restaurant. But the underlying quality must still exist in some measure. If a Louis Vuitton handbag had sloppy seams and smelled like rotting fish, it still wouldn’t sell.

5. Like the deer in your garden.

Sunday, November 3, 2013

Fun with (Negative) Numbers

Disclaimer: In high school I took math up through algebra and geometry, thereby missing trigonometry, analytical geometry, and calculus. I’m an English major by training and predilection, not a mathematician. I was, however, trained in logic and remain fiercely loyal to clear and open reasoning. Over the years I have worked as a technical writer alongside engineers and scientists, as well as written novels of “hard” science fiction—all of which required me to go back and learn mathematical principles and gain some facility with manipulating numbers. I just don’t always like them very much.1

Mathematics is an intellectual exercise, carried out in the human brain. It is a species of logical reasoning. While we all subscribe to the belief—at least since the Greeks of ancient times, and more recently with the Enlightenment of the 17th century and beyond—that the universe and the physical world around us are defined and best described by mathematics, this remains a matter of conviction and belief. It is based on the fact that mathematics is the dominant way in which we have come to perceive and study this part of reality. Because math has worked in describing many of the phenomena that we can see, we now believe it’s the best way. Still, that does not preclude some other means of achieving a deeper understanding.

Just as it’s possible to write nonsense in perfectly grammatical English sentences, it is possible to write equations with perfectly valid mathematical operations that do not represent actual relationships. For example, I can represent the speed of an automobile in miles traveled per hour, and the rate of my metabolism in calories burned per hour. I can then equate the hours and compare calories burned to miles traveled. That doesn’t mean the one has anything to do with the other.

That’s a ludicrous example, of course. Its logical inconsistency would be obvious to a child. But many scientific calculations—especially those in modern physics—extend to dozens if not hundreds of computations involving abstractions that are supposed to reference real situations. The logic is not always easy to follow and check. Great minds may work over these calculations and agree about what’s going on. But logical fallacies and incautious comparisons are sometimes difficult to trace and discover at this level of theorizing. Many great minds may take the same wrong turn together and follow each other out the window.2

One obvious rip in the fabric of mathematical logic that makes me uneasy has to do with the extension of the number system into both positive and negative territory. There it becomes difficult to keep track of the difference between what’s mathematically possible and what’s going on in the real world.

By the laws of mathematics, you can add or subtract positive and negative numbers to get a positive or negative result. So, if I start with five apples and subtract two apples (i.e., add -2 apples), I have three apples. Or, if I start with five apples and subtract seven apples (i.e., add -7 apples), I end up with two apples less than none. Weird but logical. In this case, I owe somebody—probably the person who took all seven apples—two more apples than I started with.

Of course, without positing some such obligation as “I owe you two apples,” the negatives of a real object are difficult to identify. One cannot point to the space where two apples might exist and say “there are -2 apples.” You can’t point to holes in the air to find missing quantities. Of course, you can imagine an apple crate or some other framework with two empty slots and say “those empty holes are where two more apples would probably fit.” That’s somewhat like working with a negative number. But without such a framework or crate, the whole remains abstract. “See the two apples that I just don’t have!”

But when it comes to multiplication and division, the result is even less intuitive. The rule says that when multiplying or dividing numbers of the same sign (i.e., +2 times +2, or -2 times -2), the result is always positive. But when multiplying or dividing numbers of different signs (i.e., +2 times -2), the result is always negative. This may make a tidy rule, easy for a mathematician to remember, but what does it have to do with representations in the real world? Multiplying or dividing with negatives does not seem to have any counterpart in what you can do with physical objects.3
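
If it helps to see the rule as a piece of bookkeeping rather than as a picture of objects, here is a quick sketch in Python—my own illustration, with arbitrary numbers—showing that the sign rule is whatever it has to be to keep ordinary arithmetic consistent with itself:

    # The "negative times negative equals positive" rule is forced by the
    # distributive law. If (-2) * (-2) were anything but +4, this line
    # would fail, because (-2) * (2 + (-2)) is (-2) * 0, which must be 0,
    # while the right-hand side is -4 plus whatever (-2) * (-2) equals.
    assert (-2) * (2 + (-2)) == (-2) * 2 + (-2) * (-2) == 0

    print((-2) * (-2))   # 4    -- same signs give a positive result
    print((+2) * (-2))   # -4   -- different signs give a negative result
    print((-6) / (-3))   # 2.0  -- the same rule holds for division

The rule, in other words, is imposed by the arithmetic itself, whatever we decide it means for apples.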

This leads to problems within the intellectual structure of mathematics itself.

Take, for example, “imaginary numbers.” These are numbers whose square (i.e., the number multiplied by itself) is a negative number. So, by the above rules, the square of -5 is easy to understand: -5 times -5 equals +25, just as +5 times +5 equals +25. But while we can imagine that a negative number like -25 might have a square root (i.e., a number which, multiplied by itself, equals -25), we can’t validly calculate one, because the only way to get -25 as a product is to multiply +5 times -5—which is not multiplying the same number by itself. The square root of +25 may be either +5 or -5, but the square root of -25 is, according to the ordinary laws of arithmetic, gibberish.
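
A computer reproduces the situation exactly. In Python—again, just my own illustration—the ordinary square-root function refuses a negative argument, and you only get an answer by switching to the complex-number system built around the invented unit i (which Python spells j):

    import math
    import cmath

    print(math.sqrt(25))      # 5.0 -- fine in ordinary real arithmetic
    print((-5) * (-5))        # 25  -- so -5 is the other square root of +25

    try:
        math.sqrt(-25)        # no real number times itself gives -25
    except ValueError as err:
        print("sqrt(-25):", err)    # "math domain error"

    # Only by adopting the invented unit i, defined so that i times i
    # equals -1, does the question get an answer. The cmath module
    # works in that extended system:
    print(cmath.sqrt(-25))    # 5j
    print(5j * 5j)            # (-25+0j) -- and squaring it does give -25

Whether that answer refers to anything outside the mathematician’s head is, of course, exactly the question.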

But let’s close our eyes to the mystery of imaginary numbers. Back to just dealing with positives and negatives …

What becomes nonsense with apples makes perfect sense in the relationship of apples or money to the activities that we call commerce and finance. If you have contracted to sell me five apples and yet have only produced three apples, I will take your three apples and agree that you owe me, sometime in the future, a further two apples. You have acquired an obligation to provide me with two more apples. These are not apple-holes-in-the-air but apples-yet-to-be-produced.

The notion that two apples exist to be provided is an intellectual construct. You may obtain the apples which you owe me through purchase from another apple supplier, or we may agree to discharge your debt through payment of money. But the transaction is carried in our heads and communicated through our speech and perhaps in written form. Negative apples do not exist in the real world. If you obtain them from a supplier, he still has to grow and make them real before they can be traded.

In similar fashion, negative numbers fit awkwardly with our notions of time. Suppose that at one o’clock you promise to deliver something to me by a deadline of two o’clock—or one hour’s time into the future. I wait. You do not arrive on time. And then you show up at 2:15—or 15 minutes after the deadline. I can say, “You should have been here 15 minutes ago.” That’s a perfectly valid grammatical construct. It’s also valid mathematically: 2:15 plus -15 minutes equals 2:00. But the concept of “15 minutes ago” is physically impossible. You cannot take back those minutes and arrive on time.
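
The arithmetic itself is trivial for a machine. In Python, with made-up clock times standing in for the appointment:

    from datetime import datetime, timedelta

    deadline = datetime(2013, 11, 3, 14, 0)    # two o'clock, on an invented date
    arrival  = datetime(2013, 11, 3, 14, 15)   # you actually show up at 2:15

    # "You should have been here 15 minutes ago" is just the addition of a
    # negative quarter hour, and the numbers come out exactly right:
    print(arrival + timedelta(minutes=-15) == deadline)   # True

    # But nothing outside the arithmetic runs backward. The computer can
    # subtract the minutes; you cannot take them back.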

Our experience of time, like our experience of physical apples, only works with positive numbers. We can play with negative time as an intellectual exercise and as a form of reproach,4 just as we can contract for a debt of apples-to-be-delivered. But we cannot directly experience either.

This is why I am leery about much of today’s physics, which is so heavily dependent upon mathematics. In particular, string theory seems to be based entirely on mathematical reasoning absent concrete observations.

Quantum mechanics may find a mathematical relationship between a proposed particle like the Higgs boson—which is too heavy to exist in our spacetime and may only ever have existed at some micro-instant following the Big Bang—and the proposed field conditions which this boson would generate, were it to exist. Without the Higgs, all the other particles in the Standard Model of quarks, leptons, gluons, and the rest would not have mass, and so the measurable quantity—although indefinable quality—of gravity would not exist in the universe. I’m still trying to wrap my head around a field that exists in nature without an actual particle being present to create it. It sounds like experiencing electric or magnetic fields without the actuality of a moving photon. Would just the possibility of the photon create a measurable field? I think not.5

String theorists have an elegant proposition all worked out, where every particle we can see and measure in the Standard Model is actually a tiny bit of energy, drawn out in a tiny loop like a bit of … string vibrating at a particular wavelength. This doesn’t work in the three dimensions which we can actually see and experience: x, y, and z, or “side-to-side,” “up-and-down,” and “in-and-out.” But if you propose a universe infested with a kind of spacetime that adds eight more dimensions at microscopic levels which we can’t see or detect, it all works out fine. Uh-huh.

These theories are all valid mathematically. The rest of us—those to whom the proposed situation makes no sense, or who suspect a possibly missing step that might hide a logical trap—are told that’s just because we don’t understand the mathematics involved. But, for my money, it’s also possible that the tightly knit community of particle physicists is spinning a shared fantasy of elegant, complex equations that just might be equating calories burned to miles traveled.

I’m not anti-science. I love science. I believe that the scientific method and scientific reasoning have given us a quality of life and an outlook on reality that have never before been shared by human beings. But that doesn’t mean I automatically accept whatever a scientist says. And when what a group of scientists says seems to defy logic and common sense, or violate observed reality somewhere else, I become cautious.

Science and mathematics are still human endeavors. Even when they are pursued with the most fine-grained instruments and powerful computers, I remember that these machines are still products of the human mind. While two heads may be better than one, people in pairs and groups are still susceptible to hopes, dreams, illusions, misinterpretations, and stubborn folies à deux—and sometimes folies à plus.

I want to see test results that can be shown concretely, without invoking negative apples and holes in the air.

1. For two similar entries on this theme, see: Fun With Numbers (I) and Fun With Numbers (II) from 2010.

2. In this context, I’m drawn back to Stephen Hawking’s explanation about why the universe does not abound in micro black holes. See If You Can Believe … from February 17, 2013.

3. For that matter, multiplication and division appear to be simple shorthand techniques applied to addition and subtraction. If I want to give each of three people five apples apiece, I can add five apples three times, or I can simply multiply five times three. Similarly, if I want to determine how many apples each of three people gets when I have fifteen apples and want to distribute them fairly, I can divide fifteen by three.
       Signs and negative numbers don’t seem to come into it, except in the abstract sense that 15 actual apples divided among three people who aren’t there would be -5 apples, while giving five apples that don’t exist to each of three living people is likewise -15 apples. Somewhere apples are owed that do not exist, and so the debt is canceled.
       That’s a case of multiplying or dividing a negative by a positive. But the rule doesn’t make any more sense when multiplying or dividing a negative by a negative. If two people less than nobody each take three apples less than none, the mathematical result becomes six real, positive apples. Huh? Is that like the double negative in English, where “I won’t never go to the store again” grammatically comes out meaning “I will, at some point, go to the store again”? But, of course, the ungrammatical person using the double negative still means “no, not ever.”

4. Of course, we can’t even deal with positive expressions of time except in our imagination. I might imagine that I will be somewhere else in the next two hours, but until those hours pass and that instant of time becomes my reality, it’s all just make-believe. This is why I classify science fiction novels involving time travel, including my own recent The Children of Possibility, as a kind of science fantasy. Fascinating to think about, impossible to do.

5. Oh, yes. The people at CERN’s Large Hadron Collider just “discovered” the Higgs boson. From what I understand, they did no such thing. They smashed together two protons moving near the speed of light and got a flash of energy and detected a shower of particles. They traced back the decay lines of at least two of those particles to a common origin—which means they started out as something much bigger that briefly appeared and immediately disintegrated—and came up with a mass of ~125 GeV (billion electron volts), which correlates to the theoretically proposed mass of the Higgs. Clearly, they did this more than once and came up with answers each time that satisfied everyone all around. But that’s not like they put the thing in a bottle.