Sunday, February 22, 2015

The Assumptions of Social Engineering

Several items in the news recently have addressed the desire of government authorities and agencies to ban public smoking of e-cigarettes—or “vaping,” because no real burning or smoke is involved, just the heating of a nicotine-infused liquid into vapor—as a potential health risk. This view is gaining ground as a “precautionary” measure, even though no clear evidence of the dangers of inhaling nicotine vapor has yet been produced. For these people, e-cigarettes are a gateway drug, a ploy by the evil tobacco industry, marketed to encourage full-fledged tobacco smoking. So the goal of a ban is to eliminate the smoking habit entirely from American society. On the other side, which admittedly the makers of e-cigarettes promote, are people who see the devices as aids to help people stop smoking—rather like a nicotine patch, but offering something familiar to do with your hands—and therefore an issue of “harm reduction” compared with the dangers of actually inhaling and spreading smoke.1

I’m not a smoker anymore. I started smoking a pipe when I went off to the university, which was a couple of years after the Surgeon General’s report on smoking in 1964, and continued smoking for a number of years after graduation. Out in the real world I realized I was smoking far too much each day, feeling generally ill and nauseous by evening time, growing dark stains on my teeth, and riddling my clothes with tiny ember holes. Smoking had become a lot of fuss, trouble, and expense for no very good effect; so I quit cold turkey. Actually, I quit several times, but the last one seemed to stick. I can say now, forty-odd years later, that I’m healthier and stronger for that decision.

So I don’t really “have a dog in this fight” over e-cigarettes. And I refuse to play the part of the reformed sinner, thumping his Bible and demanding that the familiar sin now be rooted out and punished. I remember taking up pipe smoking as a choice. It was partly a desire to express my newfound independence from parental guidance and suggestion, partly for the relaxation response of nicotine, and partly to cultivate the image of the tweedy scholar now fully engaged with English literature. I was well aware of the dangers. I never fooled myself that pipe smoking was less harmful than cigarettes, because I knew the incidence of lip, tongue, and throat cancer was higher. And yes, I was fully inhaling that rich, fragrant, tarry smoke. My tobacco consumption—when it finally leveled out—was about an ounce of Douwe Egberts Amphora blend a day, the equivalent of two packs of cigarettes.

My point is that, attractions of personal imagery and sense of independence aside, smoking was a choice for me. Only after the action had first satisfied my physical senses and psychological needs did the addictive properties of tobacco take control of my consumption. And when biology and psychology were sated, and the substance’s image fulfillment and sensory stimulation were no longer needed, I could break the addictive bond relatively easily.2

The assumptions of the social engineers, those who would ban e-cigarettes as a precautionary measure or permit them only as a harm-reduction measure, seem to miss this point. Neither side appears to understand that the market for these devices—or for any illicit pleasure—is driven by independent minds that make choices from an array of needs and desires. We smoke or vape because it makes us feel good, or celebrates our independence, or makes us seem sophisticated, or gives our hands something to do. We drink because we like the taste, or it relaxes us, or makes our troubles go away for a while. We drive fast motorcycles because we love speed, or can penetrate traffic jams, or prefer the open sky around our heads and knees. We are not idiots, because only someone who lived under a rock—or had never tasted the pleasures of smoke, alcohol, or a fast bike in the first place—would fail to recognize the dangers inherent in these vices. Pleasure always comes with the risk of pain and loss. A mature person understands this.

Part of the human condition is to succumb to temptation, taste sin, make choices about it, and eventually through strength of will and purpose reject it. A person who has never been tempted and tried tobacco, alcohol, adrenalin, sex with strangers, or some other forbidden experience must remain as a kind of child, a pure spirit, who never had to exercise his or her own willpower to break the addictive bond. True, such a person will not be sadder in the long run, but neither can he or she become wiser. Being told about a danger and believing in it is not the same as experiencing the danger and learning to escape it. Trying, experimenting, failing, and recovering are part of the human pattern of development.

Social engineers seem to believe that human beings are a kind of simple stimulus-response mechanism. That we are social automata which will see, react to, and make life choices based on package warning labels and gruesome public service announcements about smoking. They need to believe that these measures are effective in the fight against tobacco. Conversely, they also need to believe that advertising by evil tobacco manufacturers effectively dupes otherwise intelligent people into lighting up the product in question. As if advertising were an infallible “on” switch to create demand, and government package labeling were an effective “off” switch to kill it.

Now, it’s clear that if advertising were not effective, companies wouldn’t spend so much time, money, and effort on it. But the return on any one ad or campaign is at best awareness and sometimes a flickering desire to try something new, if and when the choice is presented. But no amount of promotion can overcome a product that fails to deliver on its implied promise. Consider the Ford Edsel, New Coke, or Apple’s Newton personal digital assistant. If the value proposition is not there for the customer, the product dies.

Consumers tend to tuck the information gleaned from advertising and public service announcements alike into the back of their minds. Sometimes that information influences a conscious decision at the point of purchase. Sometimes it merely sets up a vague attraction or sense of repulsion. But the consumer is neither hypnotized into taking action nor coerced into rejecting it. The human brain, the mind, and the persona that inhabits them are a lot more complicated than the behaviorist’s stimulus-response model constructed out of Pavlov’s experiments with dogs.

So most government regulators and social engineers must live in a kind of fantasy world. They’ve learned from experience that they can control companies and their commercial actions. That if, for example, they make it illegal and actionable for a bank or a manufacturer to conduct business in a certain way, that mode of business will generally not take place. Regulation works in the world of prescribed law and enforcement with enterprises run by groups of responsible human beings who believe they are acting for the good of the corporation. Make it illegal for a bank to invest a depositor’s account money in the stock market, follow up with auditors and regulators, and the practice is effectively stopped. Require a car manufacturer to install airbags in all new vehicles, follow up with inspections and embargoes, and the devices will appear on schedule.

But personal choice and action are not the same as corporate activity. Make it illegal for a man or woman to consume alcohol, marijuana, or methamphetamines, or acquire a handgun, and the realm of human desires, needs, and choices takes over. People will have heard about the dangers inherent in all of these, as well as the foolishness of speeding on the freeway or practicing unprotected sex. But individual choice—including the evaluation of potential harm, the assumption of personal and social risk, and the acceptance of consequences—drives individual action. And indeed, every personal choice will eventually be met by a legal or illicit supplier, whether it’s alcohol during Prohibition, cocaine at the height of the War on Drugs, or handguns in jurisdictions with an absolute ban on ownership.

Still, the kind of person who goes into public service with the intention of improving people must believe that humans are simple behavioral mechanisms, that advertising creates irresistible subliminal impulses, that warning labels can suppress bad habits, that precautionary bans can effectively eliminate threats, and that permission is needed to promote harm reduction. Social engineers believe that human nature is malleable, a tabula rasa upon which they can write the kind of behavior that promotes good action, public health, safety, and fair cities. They have staked their livelihoods on the proposition that bad influences drive us to sin, and good influences will preserve us from temptation.

It’s fortunate for the social engineers that no one ever quantifies their results and takes the cost of failed programs out of their paychecks. Personally, I couldn’t live in a world that narrow and blinkered.

1. See “Smoke and fire over e-cigarettes” from Science magazine, January 23, 2015. (This is a summary; the full article is behind a paywall for subscribers.)

2. I did much the same thing with my addiction to alcohol a decade later.

Sunday, February 15, 2015

Writing as a Nonlinear Exercise

Questions have been floating around among my Facebook writing friends as to whether it’s best to write for flow or to write to edit, and whether the first, second, third, or later drafts are the most valuable and where the writer should do his or her most valuable work. In this discussion, “write for flow” means to sit down and bang out the story as it comes, ignoring the niceties of word choice, spelling, and grammar; just let it flow and fix it up later. On the other hand, “write to edit” means concerning oneself with all these details on the first draft, letting one’s internal “editor voice” determine the next word or keystroke; move slowly and get it right the first time.

I have to say I’m in the camp of the “write to edit” people—but then, I’m a special case. My first three jobs out of college as an English major with an honors degree were as a book editor, first for a university press, then at a trade book house specializing in railroad histories and Californiana, and finally as a technical editor with an engineering and construction company preparing reports and proposals. I wanted to write novels, too, and I wrote one whole manuscript during that time by getting up at four o’clock in the morning, staring at the wall until the blood came, and then pushing down typewriter keys. But my day job was sitting in a chair for eight hours at a stretch—except when I got up to check the dictionary or another reference source—going over the lines of other people’s writing with a blue pencil.

An editor’s function is to be responsible—morally, legally, economically, and spiritually—for the quality of another person’s writing. The copy editor reads, understands, and evaluates every word, grammatical structure, punctuation mark, sentence construction, paragraph flow, and checkable fact. The copy editor—which is what I was—is not so much concerned with the author’s viewpoint, political stance, the manuscript’s narrative arc, or its overall marketability,1 but he or she does care that every sentence meets the canons of appropriate literary quality and that every fact stands up to external scrutiny.2 A professional copy editor will give the same good service to an author who’s a raving Nazi, a convinced Communist, or an ecstatic Evangelical.3 The editor’s first loyalty is to the text and its potential readers.

The editor functions simultaneously as the author’s “eyes behind,” picking up on and correcting those grammatical, punctuational, and factual errors and infelicities that the author may have overlooked, and as the “fresh eyes” of the first reader, exploring the text in all its possible dimensions, misinterpretations, and petty confusions. The editor corrects the obvious mistakes and asks the obvious questions that would bother the “interested and informed general reader.”

Spend eight hours a day doing all that for about ten years, and it changes you. For one thing, you know most of The Chicago Manual of Style—the bible of the publishing industry—by heart, and you can deal with issues of punctuation, capitalization, numbers, word treatment, citation, and all the other ways of making a piece of text look, read, and “feel” right simply by reflex twitches of your internal blue pencil. You also have years of experience seeing a word that does not quite fit the context, tone, or intent of a passage and instantly thinking of at least three alternates or variants. And you can untangle a confused sentence structure in your head faster than a rat can run a maze.

So … is my “editor voice” at work on the first draft? Oh, you bet! In fact, the little machine or piece of circuitry inside my brain that spits out words in order to follow the flow of my thoughts has been inoculated by the Chicago Manual virus and filters for the look and feel of good text. I tend to write in complete sentences and punctuate, capitalize, and check grammar and spelling along the way. After a lifetime of putting words into print, it’s just not that hard anymore.

The personal computer has also made this process fantastically easier. I wrote my first novel at the age of sixteen4 by doing the first draft longhand on a white, lined tablet, then typing the second draft on my grandfather’s ancient Underwood using two sheets of bond with a piece of carbon paper between them, because I had heard that authors always make two copies. I also used an eraser shield, because I was learning to type at the same time and wanted that second draft to be perfect. This experience—write it out, then type it up—taught me to be precise and economical with words and thoughts, because typing was slow and painful for me, especially working with all that carbon paper; so I learned to edit, pare down, abridge, and abbreviate as I turned my handwriting into typescript.

The computer and word processor have freed writers from the linearity of handwriting or typing out line after line as they move down the page. Instead, my writing process has become more like a wave front, rolling forward in time and space, coming up from behind the crest, and continuously realigning words with thoughts. If I find myself getting tangled up in a sentence, I can move the clauses around with click-and-drag, invert passive-voice sentence structures to active, and eliminate lazy mental constructions like “there is a [subject] that [verb] …” almost as fast as I can type. My days as a copy editor make it impossible for me to just spit out a lousy first draft and hope to improve it later.

But all of this has to do with only the words and how they will appear on the page. A deeper level of the brain controls my writing talent, and that mimics the role of the structural or story editor. If the character viewpoint in a scene, my slant on the subject matter in an article, or my understanding of the action in the novel is wrong, then my internal story editor shuts down the writing process. I know it will take more work trying to unthink, unravel, and undo the damage that a wrongheaded approach to the story or article will create in my mind than simply waiting and getting it right the first time. If I sit down at the keyboard and nothing comes—the word-generating circuitry goes strangely inert—I know it’s because I haven’t yet worked out some crucial part of the plot or answered some critical question about the character and his or her actions or intentions.5 Of course, I might also simply have been lazy and not bothered to prepare my mind, give thought ahead of time to plot, character, or action, or focus on my need for a starting point—the image, sense impression, or piece of action that I call the “downbeat.”6

My writing style allows for some vagaries, of course. I can leave the name of a minor character in unsettled form, insert a placeholder for a bit of nonstructural description, and add “[CK]” for “Check” to a fact that I’ll want to clear up later. My internal story editor knows these details will be flagged and get fixed in a later read-through. For everything else, however, I keep a window with the Merriam-Webster Unabridged open on my desktop alongside the word processor, and I keep a second browser window open to check facts or word treatments on the fly through a search engine like Google. But my first draft is usually about ninety percent of what I want the story to be.

My approach—the wave-front method of writing—is chaotic, but it’s a controlled and goal-oriented sort of chaos. I think of my writing as a kind of blacksmithing: hammer on each word, sentence, paragraph here, hammer on it there, see the hot metal become straight and smooth, and make it strong through continuous testing.

As to whether the first, second, or nth draft is the best, I really don’t do drafts anymore—not in the sense of putting aside the text that was written in the last sit-down, reimagining and rewriting the scene, and hoping to improve it by a second writing. Individual drafts have been replaced in my process by spaced read-throughs of the developing text. Usually, I do one review immediately, at the end of the writing session, to catch any obvious errors. Another will come the following morning, before starting on the next scene or chapter. Then I will read through a chapter or section a few days after finishing it, when it’s had a chance to cool off in my mind and show its flaws. And I will give the whole book a final read-through before letting anyone else see it. Is that four drafts or five? And does it matter? I keep hammering on the text until it becomes bright and hard, like a good piece of steel. I don’t move on to the next chapter or section until I know that the structure I’ve already built is solid and will bear weight.

And if the steel doesn’t ring at all? Then I know I have to discard the entire story line, let my head cool off, let the pools of my subconscious become dark again, and think the story out with a fresh perspective. But that’s not another draft. It’s more like doing an entirely different book!

1. That’s the job of the acquisitions editor at a publishing house, who deals with the manuscript’s content, structure, and fitness for the house’s established distribution channels and readership. For an overview of the editing process and types of editors, among much else, see Between the Sheets: An Intimate Exchange About Writing, Editing, and Publishing, which captures an email exchange I had with an old colleague and first-time author, Kate Campbell.

2. The editor is not concerned with any kind of “universal truth.” The editor does not ponder metaphysical or philosophical mysteries. But if the author writes that the American Civil War started in 1860, or that Alfred Einstein died in 1954, then the editor gets curious, springs out of his or her chair, and looks up to confirm or correct the matter. To fill this role adequately, an editor needs the kind of ready-reserve knowledge base that plays well on Jeopardy.
       If there is any doubt or question about a fact or a sentence’s meaning, the editor pencils a polite note to the author, asking for his or her consideration and correction at the time of manuscript review. This is the main reason that Post-it® notes were invented.

3. As I’ve sometimes said, “I don’t care if I’m editing the Devil’s own book. At least he’s going to get the spelling and grammar right.”

4. Don’t ask. It was a wretched space opera about an interstellar empire and an academic-turned-revolutionary—a character based somewhat loosely on Leon Trotsky—who managed to overthrow it. That and the next two and a half novels I wrote were just a waste of black marks on paper. Every writer has to throw away three books before he or she produces one that is worth even showing to another pair of human eyes, let alone an agent or publishing house. If you’re reading someone’s “first novel,” know that it’s actually their third or fourth attempt. Every overnight success is about ten years in the making.

5. See Working with the Subconscious from September 30, 2012.

6. See Getting Into the Zone from February 2, 2014.

Sunday, February 8, 2015

Intelligence or Consciousness?

People seem to be afraid of “artificial intelligence”1—but is it machine intelligence or machine consciousness that we fear? Because we already have examples of several kinds of intelligence.

For example, the computer program or system called “Watson” can emulate a human brain’s capability of assembling clues and storing information on a variety of levels—word association, conceptual similarity, sensory similarity—to play a mean game of Jeopardy. Watson is remarkably intelligent, but no one is claiming that the machine can think in the sense of being conscious. For another example, the artificial helper Siri in your smartphone is almost able to pass the Turing test2—if you’re willing to believe you’re talking to the proverbial “dumb blonde”—but Siri is neither particularly intelligent nor meant to be conscious.

Intelligence is a spectrum. It measures an organism’s ability to perceive, interpret, determine, and act. And this process can be graded on a curve.

Consider the amoeba. It can perceive and identify the chemical trail of a potential bacterial food source, follow it, and consume it. The amoeba doesn’t make a decision about whether or not to follow the trail. It doesn’t decide whether or not it’s hungry. The amoeba’s choice of food and the decision to hunt it down are determined solely by chemical receptors built into the organism’s cell membrane.3 The amoeba’s hunting strategy is the most basic form of stimulus-response mechanism.

You wouldn’t call an amoeba smart, except in comparison to the bacteria it hunts. Bacteria are opportunists whose survival strategy is that of flotsam: multiply like hell and hope some of your daughter cells land on a food-like substance. If they land on barren ground, they die. Or, if the substance isn’t all that food-like but has some potential for nourishment, hope that maybe some future generation will evolve to digest it. This level of intelligence gives new meaning to the term “passive aggression.”4

With multi-cellular organization came multi-tasking. This new kind of creature developed about 500 million years ago, during the Cambrian explosion, probably by diversification of cell types within colonies of single-celled organisms. With some cells taking on specialized perception roles, such as light and chemical seeking, while others took over the functions of digestion and reproduction, the organism became more efficient. It also needed an executive function, at first to communicate between these activities and ultimately to coordinate and control them.

An ant can see a leaf with its compound eyes, approach it on six functionally coordinated limbs, and cut it with hinged jaws. Moreover, the ant can evaluate a number of nearby leaves and make a selection for right size, weight, and tastiness. It’s still a question whether an ant can see or sense food and choose not to take it.5 Certainly, the ant has a built-in “hierarchy of needs,” whereby attack by a hostile species or imminent danger of, say, drowning in a raindrop will override its duty to forage. How much free will the ant has to decide “Fight first, forage later” or even, “Kick back and take the day off” is a matter of debate and subject to the human tendency to anthropomorphize other species. But it’s clear that insects can learn, remember, and communicate. Bees can find a field of flowers, remember its location, fly back to the hive, and communicate to other bees the direction and distance to this potential food supply. That’s a pretty sophisticated stimulus-response mechanism!6 These activities and capabilities are shared by many animals, including human beings.

On the spectrum of intelligence that runs from amoebas to humans, dogs are clearly somewhere in the middle, but tending toward the human end of the spectrum. Dogs can coordinate their activities through communication and even form social relationships and bond with one another on the basis of loyalty and affection. Within these groups they develop expectations, engage in disputes about hierarchy, and then may either submit or choose to leave the pack, depending on their predetermined natures and their accustomed status as either alpha or beta individuals. In isolation, a dog can make its own decisions about liking and distaste, trust and distrust, safety and danger. Dogs raise their young through the shared responsibilities of a family subgroup: mothers nurse while fathers hunt. They can choose to alter their territorial behavior, such as by migrating with a herd of prey. And they can develop trusting relationships with other species, such as by becoming domesticated and forming a pseudo-pack with human beings. If an alien spaceship landed on a planet whose highest life-form was the wolf pack, the aliens would have to conclude that they had discovered intelligent life.

But the question of free will still remains. Can an ant or bee decide to subvert the social order and challenge the colony’s queen? Can it decide to leave the hive after a dispute or in order to find a better life? Can the insect override its instinctual—perhaps even hard-wired—drives to forage, fight invaders, or serve its hierarchical position because other members of the colony have abused it or hurt its feelings? Obviously not. But dogs, cattle, and other social animals can make these choices, although perhaps not willingly or eagerly, and usually only under strong compulsion or in response to immediate need. Humans, on the other hand, practically live in this meta-world of individual choices, personal feelings and preferences, and divided allegiances.

Now we come upon the issue of consciousness. Unlike intelligence, which seems to be a spectrum from simple stimulus-response mechanisms to complex, multi-valued reasoning, consciousness would appear to be a step function. An organism either has it or not, but its awareness may present itself in varying degrees.

If you obstruct an ant or bee in its pursuit of a leaf or flower, it will persist, repeatedly bump up against you, and try to get around you. If you keep blocking it successfully, however, the insect will eventually lose interest, turn aside, and pursue some other food source. What it will not do is take your obstruction personally, get angry, and plot revenge against you. If you cut off an insect’s limb, it will register visible distress and feel some analog of physical pain, but it won’t face the dejection of a life in reduced circumstances, deprived of the opportunities available to healthy, six-legged insects. If you kill it, the ant’s or bee’s last sensation will be darkness, with nothing of the existential crisis that death evokes in human beings.

If you frustrate or disappoint a dog, it will register anger or despair.7 If it becomes injured or sick, it not only registers pain but also demonstrates a negative emotional state that any human would recognize as depression. If a canine companion dies, the dog exhibits a sense of loss. When faced with sudden danger and perhaps the imminence of death, the dog exhibits a state we would call fear or even terror. The dog has an awareness of itself and the creatures around it. The dog is conscious of being alive and has some elemental notion of health and sickness, life and death, that an ant or bee does not register.

But is this awareness also self-awareness? It’s a commonplace that dolphins, elephants, some apes, and all human beings will recognize themselves in a mirror. If you place a mark on a dolphin or adorn it with a piece of clothing, the creature will go over to a mirror to check out how it looks. Elephants can use paint and brush to draw pictures of other elephants. These animals understand the difference between themselves and others of their kind. A dog, on the other hand, cannot comprehend a mirror. If it sees itself in reflection, it thinks it has encountered a strange new dog. So while a dog has a first level of consciousness compared to an ant or bee, it is not fully self-aware, which is the second level of consciousness possessed by dolphins, elephants, apes, and humans.8

It is this ability to consider oneself apart from all others, to reflect upon one’s own thoughts and desires, to have hopes and fears and also to think about them, to consider one’s actions and their consequences both for oneself and for one’s group, and to ponder the nature of existence that is at the core of human-scale intelligence. A human being is not just intelligent but also knows he or she is intelligent. A human naturally worries about how his or her mind, nature, opportunities, and chances compare with others, and cares about his or her place in the society or hierarchy. A human being understands relative time states like past, present, and future because the person can see him- or herself in conditions and situations that no longer persist but did once, or that have not yet arrived but toward which all current trends point. A human being is constantly self-referential, considering his or her own life and nature, while a dog is merely happy to be alive, and an ant or bee—or an amoeba—has no conception of the difference between life and any alternative.

Any computer program yet written may emulate, simulate, or even exhibit the qualities we associate with mere intelligence: perception, interpretation, decision, and initiation of action. None so far has reached the scale of internal complexity where dog-like awareness arises, let alone the self-awareness that would allow the machine to consider its own actions in the abstract and make choices based on self-perception, feelings of pride or shame, or anything like a moral stance in the universe.9 But I don’t say that this level of awareness can’t happen, and I believe it may arrive sooner than we think.

And if—or when—it does, then we will no longer be dealing with a machine. Then the question of carbon-based versus silicon-based life form will no longer apply. We will be dealing with a fellow traveler who will behold the infinite with a sense of wonder. We will be dealing with a creature much like ourselves.

1. See the last part of my blog post Hooray for Technology from January 4, 2015, discussing the meme that artificial intelligence will be detrimental to humankind.

2. The Turing test involves a human being asking or writing out any set of questions he or she can think of, passing them blindly to an unseen and unknown subject, and evaluating the subject’s answers. If the human cannot tell whether the respondent is another human being or a machine, then if it happens to be a machine, that machine might as well be—by Turing’s definition—intelligent.
       It’s a fascinating problem. In the decades after Turing proposed the test in his 1950 paper, programmers wrote conversational programs like ELIZA and PARRY that could fool at least some human interlocutors, even though none of the computers of the time had the capacity to actually approach human-scale thinking. None of the machines available today does, either.

3. See Protein Compass Guides Amoebas Toward Their Prey in Science Daily from October 26, 2008. Interestingly, a similar mechanism drives cells of the human immune system to track down bacterial targets.

4. But compared to a virus, the bacterium is a genius. Viruses can’t even breed or evolve until they happen to land on a host with a working genetic mechanism they can hijack. Viruses are pirate flotsam.

5. That’s a question with some people, too.

6. For more on insect intelligence, see Insect Brains and Animal Intelligence in the online resource Teaching Biology.

7. My wife tells the story of her first dog, a little poodle, and a rainy day when she was pressed for time and had to cut short the dog’s daily walk. She may even have yelled at him when he balked at getting back into the car. Upon returning home, he walked straight into her bedroom, jumped up on the bed, and pooped right in the middle of the bedspread. If that wasn’t a calculated act of revenge, I don’t know what else to call it.

8. However, a dog can be made to feel foolish. My aunt was a poodle breeder, groomer, and competitor at prestigious dog shows, including the Westminster Kennel Club. Once, to compete in a Funniest Dog contest, she clipped one of her white poodles in oddly shaped tufts and dyed them red, green, and blue with food coloring. She always insisted that dog acted depressed because it knew how foolish it looked.

9. Such as viewing humanity as an enemy and, like Skynet, “deciding our fate in a microsecond.”

Sunday, February 1, 2015

The Roots of Religious Anger

After the riotous outcry against Jyllands-Posten and the massacre at Charlie Hebdo for publishing satiric cartoons, the fatwa and death threats against Salman Rushdie for writing a speculative novel, and similar cries of death for insulting and blaspheming against Islam, one has to wonder about the nature of this belief system.

For most people in the West, religion is a private thing. It’s a matter “between a man and his maker.” To quote Elizabeth I, who inherited a bloody struggle between Protestants and Catholics that her father had unintentionally ignited, “I would not open windows into men’s souls.” Yes, the West has experienced various spasms of inquisition and pogrom. “God wills it!” has been the call for several crusades, and remains a rallying cry up to the present time. But since the Enlightenment—which appears to have been a response to growing scientific understanding, widespread literacy and the availability of printed books, and dawning notions about individuality and a man’s mind belonging to himself1—most Westerners have sent religious certainty, canonical authority, and persuasion by violence to the back seat of their social and political thinking. Religion still matters, of course, but on a more personal level, and not enough to make us disrespect—let alone kill—one another.

Because I’m a forward-thinking person, a writer of science fiction rather than historical fiction, I find it difficult to place myself in the pre-Enlightenment mindset. But I can appreciate that the followers of Islam who participate in or approve of such massacres, fatwas, and jihad in the sense of “religious war” rather than “personal struggle” take their religion to be a statement of political belief and ethnic, or even tribal, unity. Doubt, perspective, and compromise are not permitted in this belief system and never openly entertained. Opposing views are never given the respect inherent in the realization that they might just possibly be right. Opposition equals error equals sin equals death.

And yet … Might not people who are so touchy about the dignity and reality of their truly, deeply, dearly held beliefs be exposing … well, a hint about their own doubts? Compare this with the deep, smoldering anger you feel when someone reminds you of an act or behavior that you yourself know to be wrong or about which you feel guilty. You hate to think of the error you’ve made, but you hate even more being reminded of it by someone else. On the other hand, when you’re absolutely sure of your reasons and know you’re right, then accusations just roll off your back, leaving your core mind untouched. By their anger shall you see through them.

Perhaps the social forces that coerce the average Middle Easterner to believe in the unerring word of God as received by Muhammad—and to speak, act, eat, fast, dress, and pray five times a day accordingly—arouse some latent resentment that cannot speak its name. If you and everyone you know must follow the same codes—down to the way you cut your hair and beard—not just at the risk of social disharmony and shunning, but on pain of actual, physical violence, extinction, and eternal damnation, then you might feel personally repressed. Oh, sure, purified and sanctified at the same time, but also moderately badgered and harried. The desire for freedom of expression, for a day of relaxation, for a chance to break the bonds and cut loose is not just a Western cultural attribute but a reflection of human nature and the spirit that keeps us all striving toward a long life.

People living within such strictures, where to revolt or even to criticize is death, will become massively angry when confronted with co-religionists who dare to flout the rules, or with competing societies which deny that the rules exist or have any value. In the pressure cooker of a straitlaced and fearful life, condemnation of the unrepentant sinner is an alternative form of emotional release.2

In the Western view, having crossed over into the secularism of the Enlightenment, such a society is not stable. Repression of natural human emotions and instincts may work for a time, or in a closed and limited society. But it is not a model for world domination and governance. One mind can remain tied off and closed, and perhaps even a whole family and tribe can exist that way, but not a dynamic, viable culture or society.

However, extricating the Muslim societies from their trap will require the same long and difficult road that Christendom traveled: from consolidation of authority to individualistic reformation to secular Enlightenment. In the meantime all that we in the West can do is watch and hope and wait for the request for assistance—if it ever comes.

And during that waiting, what is a gentleman to do? I would take comfort in three general guidelines for good behavior. First, a gentleman does not mock another man’s religion. Second, a gentleman recognizes that one must sometimes respond to deep insult with an act of calculated violence.3 But third, a gentleman also expects other reasonable people to adhere to the words of Captain Malcolm Reynolds of Firefly fame: “If I ever kill you, you’ll be awake, you’ll be facing me, and you’ll be armed.”

So the least a decent person can expect from the religiously angry is a fair fight.

1. Not to mention the introduction of coffee and tea to European society. Since no one dared drink from the river—or even their own well water, because the well usually sat downhill from the privy—people up through Shakespeare’s time started the day with cider and small beer, then went on to wine and brandy at lunchtime. Fermentation and its resulting alcohol killed most of the bugs in the water but left everyone well plotzed by mid-afternoon. Coffee and tea were prepared by boiling the water rather than through fermentation, and they had the added benefit of being natural stimulants rather than depressants. People stopped wandering around in a fog and got serious about ordering their society, its politics, and economics; invented modern concepts of risk, insurance, banking, and the time value of money; and created our modern world. See Coffee Took Us to the Moon from February 23, 2014.

2. For more on this, consider the Salem witch trials.

3. Thrashing a mocker at dawn with sword or pistol was once the right of any gentleman. Or, as Robert A. Heinlein would have it, “an armed society is a polite society.” It wouldn’t work today, of course, because pistols are now more reliable, semi-automatic, and don’t need the skilled and steady hand that a matched pair of flintlocks once required. And, in our underhanded society, any brawl that started with the finesse of swords would quickly degenerate into a shootout with backup weapons.