Sunday, June 16, 2019

A Preference for Weakness

Food allergy

It seems that no one wants to appear to be—and act as if they were—strong anymore. Has our culture so decayed that we actually admire—and aspire to be—weak, defenseless, vulnerable, a victim?

Item one is the sudden outbreak of food allergies among the middle class. Yes, some people have acquired allergies that may have something to do with plastics or hormones in the water. Yes, some food allergies are so severe that the person can go into anaphylactic shock and die. But that is not the phenomenon under discussion here. It seems that suddenly everyone is allergic to, or sensitive to, or just afraid of, the protein in wheat known as gluten. People with certain identified autoimmune disorders, such as celiac disease, do get sick when they ingest gluten. And people whose traditional diets did not include wheat, such as those who come from areas dependent on rice cultivation in Asia, may experience a reaction to gluten. But that is not what I’m talking about.

Far more people with a Northern European background than just a few years ago seem to be claiming gluten sensitivity, almost as if it were fashionable to throw up your hands at bread and pasta. Couple this with our wandering dietary pronouncements—meat’s bad, butter’s bad, cholesterol’s bad, butter is better than margarine, fish are good, fish are bad, carbohydrates are energy, carbohydrates are bad, avocados are bad—and you get a population that is suddenly exquisitely picky about what it eats, and no one can adequately explain why. That’s a deplorable state for a human digestive system that was tempered and hardened by a hundred thousand years of hunter-gatherer existence and can usually bolt down and profit from almost anything organic except tree bark and dead leaves.

Item two is the sudden outbreak of reactive offense. Yes, some people say and do truly mean, stupid, hurtful things, and they should be either quietly confronted or politely ignored. And yes, sometimes people intend those things to be offensive in order to get a reaction from the public at large. But it seems that suddenly everyone is incapable of taking the polite route and refraining from noticing or reacting to the rudely offensive. Now almost everyone seems to hunger for opportunities to take offense. To quote from one of my favorite movies, The Duellists: “The duelist demands satisfaction. Honor for him is an appetite.” And so, it would seem, for many people today the satisfaction of reacting with horror, scorn, and outrage at remarks and gestures, whether meant offensively or not, has become an appetite.

In my view—and in the view of my parents, who raised me in their beliefs—to give in to such failings, to demonstrate such vulnerability, is a weakness. They held that succumbing to precocious food sensitivities and minor discomforts was to make yourself vulnerable to the world, and that reacting with offense at every slight and injury was to allow yourself and your emotions to be played upon by potentially hostile forces. They believed in strength and wanted their sons to be strong.

As a child, I suffered from remarkably few allergies but had a bad reaction to mosquito bites. Let one bite me on the wrist, and my lower arm would swell until my hand was fixed in position. If one bit me on the cheek, that side of my face would swell. For a boy growing up among the wetlands surrounding suburban Boston, mosquitos in summer were an inevitability. My mother sympathized with my condition, but she didn’t agonize about it. I never was taken to the emergency room, and no one suggested any medications to counter the allergy. My parents believed I would grow out of the affliction, and I did.

My parents tolerated few childish food dislikes. My brother and I had to eat what was put on our plates, like it or not—and we mostly liked it, because my mother was a good cook. I had one real aversion, to cheese. I suppose that, in later life, I could excuse this as my sensing that cheese was “rotted milk,” but as a child I just hated the taste, smell, and texture of the stuff. It wasn’t until pizza became popular that I would eat even the mildest of provolones and mozzarellas, and then never just as a chunk of the stuff on a cracker or melted into a sauce. My father, being a cheese lover, disdained my aversion and tried to beat it out of me as an example of childish attitude. My mother, being of a kinder heart, would make separate portions without cheese for me when preparing cheese-heavy dishes. But she still considered my aversion a sign of personal weakness.

The Protestant ethic was strong in my family. You were supposed to work on yourself, learn as much as you could, eradicate your failings, take up your responsibilities, be dependable and loyal, work hard at whatever you undertook, and be ready to account for your actions. Claiming extenuating circumstances when you failed at something was just not allowed: a properly prepared mind should have foreseen those circumstances and worked to overcome them. Claiming to be a victim of other people’s actions, whether intentional or not, was unacceptable. We were the people who paid our own way and made our own fate. We helped others along as we could, but we did not succumb to their malice and their schemes, if any. We anticipated traps, spotted them in time, and side-stepped them. We were in control of our own lives, and no one else was.

In another of my favorite stories, Dune by Frank Herbert, the female school of political agents and manipulators of bloodlines, the Bene Gesserit, had as an axiom “Support strength.” I do not take this as some kind of class statement, such as favoring the oppressor over the oppressed. Instead, it means finding what is strong in each person and developing it, helping to create people who are capable, self-aware, resilient, brave, and strong. It is an attitude of which my parents would have approved.

The current preference for weakness in our popular culture—expressed in accepting every allergy and food phobia as a sign of personal sensitivity, and accepting every cause for offense as a sign of spiritual purity—is a dead end. It creates people who are incapable, self-serving, brittle, scared, and weak. This is not a people who will take up arms against an invader, or volunteer to land on the Moon or Mars, or do the hundred other daring and dangerous things that previous generations were asked to do and did without a whimper.

But we may not be that nation anymore.

Sunday, June 9, 2019

On Rational Suicide

Line of suicides

The human mind is a mechanism of many moving parts. The human brain has several major components and many subsystems or neural circuits, all built up over millions of years of evolution, based on the rudimentary brains of the early fishes. In the most fortunate of us, the mind can be a fine-tuned instrument, capable of analyzing the mathematics of splitting the atom or taking the vision of a sunset and reimagining it in words or music. For many of us, the brain is a stable platform that enables us to live happily, deal with setbacks and anxieties, and savor moments of pleasure and triumph. But for some of us, the mechanism goes out of whack, the parts don’t cooperate, and the result is a mental life full of misperceptions, missed cues, and chaos.

And then there is the issue of suicide. Most people would like to consider suicide as a form of mental illness, a breakdown in the brain’s systems. For them, it is something that might happen to other people, like clinical depression or schizophrenia, but nothing that might be waiting around the next turn of events to grab them, spin them on their head, and kill them. For these people, suicide is never a rational act, never what a sane or well-balanced person does.

This view is reinforced in our society and in Western Civilization with cultural shaming—that the suicidal person was not strong enough or did not consider his or her own value or responsibility to family, friends, and community—and with religious prohibition—that the act of suicide, self-murder, is contrary to God’s law and a usurpation of God’s will, which alone tells us the direction that our lives will take and the point at which they will end. But these cultural arguments and prohibitions against suicide are not universal. For example, in feudal Japan—at least among the samurai order—ritual suicide was linked to notions of obedience and personal honor. A samurai who failed his lord in any significant way could only atone by taking his own life, and this was considered proper and good.

In my view, which is based on notions of evolutionary biology, suicide is the unfortunate triumph of the prefrontal cortex—the center of thinking, organizing, projecting, and deciding, as well as expecting, hoping, and dreaming—and its supporting circuitry in the hippocampus and amygdala, which coordinate both memory and emotions, over the hindbrain—where the medulla oblongata governs autonomic functions like breathing, heartbeat and circulation, swallowing, and digestion—and the reticular activating system in the brainstem that coordinates consciousness. Suicide is the temporary—although ultimately final—triumph of human thinking over the brute, animal functions of the body.

Suicide is the collision of the brain mechanism that creates expectations, performs planning, perceives hope, responds to dreams, and does whatever else drives the mind and the soul forward into the future, with what the mind perceives as reality, the end of hopes, plans, expectations, or whatever was driving it forward. And when that forward-thinking part of the mind gives up and dies, when it contemplates all the damage and despair that continued living might cause, what going forward without hope or dreams might mean, then the rational mind can overcome the brain mechanisms that keep the body breathing, eating, and living. The decision to act in harming the body, whether by self-starvation, asphyxiation, overmedication, cutting of blood vessels, or other ways of knowingly causing irreparable and permanent damage, is the temporary but permanent defeat of those systems that keep the organic parts functioning and moving forward blindly in time.

And again, there are cultural traditions and physical mechanisms that work to delay or even nullify this collision between the mind and the body.

On the cultural level, good parenting will teach a child that, first, we don’t always get what we want or deserve in life, and that disappointment and denial are a part of the human condition. Common sense will teach a person eventually that, while dreams and expectations are nice, they are not the basis of reality, that a person must work for what he or she wants, sometimes sacrificing dreams for mere survival, and that plans sometimes go awry. Wise teachers and friends will tell a person that hope does not live in one shape or future but is a free-floating quality, and that a person can find strength in the smallest moments of peace, visions of beauty, or gestures of good will. And many religions, especially the Zen parables, teach that expectations and assumptions are phantasms of the mind and not a reflection of reality.

On the physical level, most of the methods for ending a life—like starvation, asphyxiation, and cutting—are painful and create their own immediate need for the forward-thinking mind to stop, reconsider its decision, and actively reverse course. Other methods—like a drug overdose—take time to create their effect, and then the busy mind focused on the blankness of the future may unmake its decision to choose death and so choose medical help. Only the methods which put in train an immediate and irreversible course of events—like jumping out of a high window or pulling the trigger on a gun aimed at the head—offer no immediate pain and allow for no useful second thoughts.

Why am I going on about suicide like this? First, as an act of individual courage—and in defiance of most social taboos—I long ago, even as a child, decided that no thought would be unthinkable for me. If it can be imagined, it can be explored rationally and soberly, as a fit subject for the human mind. Second, there have been times in my own life—not often and not recently—when I have felt the pressure of lost hope, of a great, gray blankness lying ahead with nothing to counter it, no expectation to fill it, and no reason to avoid it.

And then my dear wife, my partner in forty-one years of marriage, died a year and nine months ago. When you lose someone with whom you have shared the majority of your life, aside from the immediate grief, you also have moments of simple emptiness. Everything the two of you did together—the events, memories, pleasures, and inside jokes you shared—is irretrievably gone. The daily routines you built up in living together are now meaningless. The habits you yourself curtailed and the actions you avoided because you knew she did not like them are now empty gestures. And when so much of your life has been taken away, you can sense that great, gray void in the years ahead. In many respects, you become a ghost yourself.1

There have been times since her death when I thought I would simply “go under.” That it would not matter if my hindbrain stopped the automatic functions of pulling on my lungs, beating my heart, and cuing my desire to eat, and that I would just fade away. Now, this is not a warning to my family and friends. I am not going to follow any of this up with positive or passive action, because I share those same cultural traditions and physical mechanisms designed to prevent suicide. Or not, that is, until my own brain is so far gone with dementia that I become disabled, unable to care for myself, and so mentally isolated that I cannot move forward. And that’s a vow I made long ago.

But what I am trying to say is that I can understand the impulse toward suicide. It is part of the human condition. It comes about when an animal brain grows so advanced, so complicated, and so beholden to its own vision and hope for the future that the denial of that vision and hope leads to irremediable despair with no alternative.

Suicide is not an irrational act. Its very possibility is part of what makes us human and not animal.

1. Some days, as I move around our old apartment, I have the sense that perhaps I have entered an alternate reality: that I was the one who died and now drift through my daily routines as a ghost trapped in this empty place, while she lives on, somewhere else, in a future filled with hopes, plans, and expectations.

Sunday, June 2, 2019

The Downside of Gutenberg Economics

18th century pipe organ at the Monastery of Santa Cruz, Coimbra, Portugal

A friend of mine had a long professional career as an organist and singer in jazz clubs and cocktail lounges. Upon retiring about a dozen years ago, he spent the healthy sum of $18,000 to purchase a top-of-the-line Hammond electronic organ for his home so that he could continue practicing and playing. This organ has now developed problems involving its computer chips and, despite the efforts of various expert mechanics, including the company now owned by Suzuki Musical Instrument Corporation, is considered unrepairable. An instrument that, if it were a human being, would hardly have passed puberty, is now a nice wooden cabinet enclosing a couple of dead circuit boards. And therein lies a tale of the late 20th and early 21st century.

The original organ, familiar from your neighborhood church or a 1920s-vintage movie theater, is an arrangement of pipes connected to valves and an air box. Some of the pipes have notched throats, like a whistle, and some have embedded reeds, like an oboe. All draw breath from the blower feeding the air box or plenum. When the organist presses a key on the console, it activates the valve, letting air into the corresponding pipe and making it sing. This is a design going back 2,300 years to the Greeks, and the oldest extant organs go back about 600 years. If one of these organs breaks, it can be repaired. The job takes an expert who generally has apprenticed himself—because this is rarely a skill undertaken by women—to an older generation of experts and devoted his life to the job. Maintaining a church or theater organ is expensive, probably costing about as much as building one from scratch, but it can be done.

The original Hammond rotary organ was not based on pipes and blowing air but on something called a tone wheel. Each key on the organ corresponds to an iron disk that has a series of notches cut into its edge. The number and spacing of the notches are mathematically configured so that, with all the wheels rotating at a constant speed on a common shaft, a sensor facing the edge of each wheel picks up a signal in impulses per second that matches a musical tone—for example, 440 impulses per second generates the sound of the A above middle C. To recreate the harmonics that a pipe organ makes by pulling stops to bring in the voices of pipes in various relationships above and below the note being keyed, the Hammond organ uses drawbars to pull in more or less of the electric signals from each tone wheel above and below the wheel being sampled. If one of these original Hammonds breaks, it can be repaired. Again, the job takes specialists who may have to fabricate some of the parts, but the system is mostly mechanical and can, for a price, be restored to working order.
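
For readers who like to see the arithmetic, here is a minimal sketch in Python of the tone-wheel principle just described. The shaft speed and notch counts below are my own illustrative assumptions, not Hammond’s actual engineering figures; the point is simply that pitch equals notches per revolution times revolutions per second.

```python
# Back-of-the-envelope illustration of tone-wheel pitch arithmetic.
# The notch counts and shaft speed are illustrative assumptions,
# not the actual Hammond engineering figures.

def tone_wheel_frequency(notches: int, revolutions_per_second: float) -> float:
    """Impulses per second seen by the sensor = notches passing it each second."""
    return notches * revolutions_per_second

shaft_speed = 20.0  # hypothetical: a common shaft turning 20 times per second

# A wheel with 22 notches would then produce 440 impulses per second --
# the A above middle C mentioned above.
print(tone_wheel_frequency(22, shaft_speed))   # 440.0

# Doubling the notch count doubles the pitch: one octave up.
print(tone_wheel_frequency(44, shaft_speed))   # 880.0
```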

But the organ my friend has, and all the Hammond organs you can buy new today, are electronic. There is no pipe, no tone wheel, nothing mechanical and touchable. The sound of a pipe organ or the cherished buzz of the tone wheel organ has been electronically sampled as a wave that is then encoded in a computer chip as a piece of digital information, the same as if it were a number in a spreadsheet, a word in a document, a line in a graphic, or a pixel in a photograph. When you press the key on such an organ, it calls up the corresponding note from digital memory. When you adjust the drawbars, they pull in the harmonics of other sampled notes. But it’s all just bits and bytes. If your electronic organ has a burned-out chip, and that chip is no longer made or available somewhere in stock, your organ is dead.

So, the irony is that with the right people who know how to straighten and fabricate the right parts you can fix a 200-year-old pipe organ or a 50-year-old tone-wheel organ, but nothing can resurrect a 20-year-old electronic organ, piano, keyboard, computer, cell phone, or other digital device if the defunct chips are not available.

And the chips are not available because computing technology—under the force of Moore’s law1—moves forward so rapidly. Designing, plotting, masking, and photoengraving computer chips is a function of what I call “Gutenberg economics,” where the creator makes a single template for a book page, a chip design, or any other intellectual property and then prints off as many as are needed at an ever-diminishing cost per unit.2

The downside of all this, of course, is that once you have performed all these preparatory steps and finished your print run, you are ready to move on to the next project. If you made computer chips with capacity X and capability Y a dozen years ago, and today’s chips have a thousand times that capacity and a million times that capability but operate slightly differently—say, with different inputs and outputs on the contact points in modern circuit boards—you are not going to go back and do a limited run of antique chip designs just because someone somewhere has a board in an organ, desktop computer, or cell phone with a burned-out chip. Like the cost of makeready on a printing press, the costs of setting up for the fabrication of a semiconductor design are the largest part of production. No one pays to start the process over again for just a few copies of the printed book or newspaper—or for a couple of hundred computer chips to repair outmoded products.
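
To put rough numbers on this “Gutenberg economics,” here is a minimal sketch in Python of how a one-time setup cost amortizes over a print run. The dollar figures are invented purely for illustration; the shape of the curve, not the exact values, is the point.

```python
# Illustrative sketch of Gutenberg economics: a large one-time setup cost
# (the template, the mask set, the makeready) spread across the whole run.
# All dollar figures are invented for illustration only.

def cost_per_unit(setup_cost: float, marginal_cost: float, units: int) -> float:
    """Average cost of each copy once setup is amortized over the run."""
    return setup_cost / units + marginal_cost

setup = 5_000_000.0   # hypothetical cost of designing and masking a chip
marginal = 0.50       # hypothetical cost of each additional die

for run in (200, 10_000, 1_000_000, 100_000_000):
    print(f"{run:>11,} units -> ${cost_per_unit(setup, marginal, run):,.2f} each")

# A run of 100 million chips costs pennies apiece; a rerun of 200 chips to
# repair old organs would come to roughly $25,000 per chip -- which is why
# no one restarts the presses for a handful of copies.
```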

So, while the computer chip itself might be virtually immortal, the device in which it’s installed is susceptible to the least defect in just one of the many chips that power it. Burn out a memory component that, on the scale of a large-run chip fabrication, might originally have cost less than a dollar to make, and your $18,000 Hammond organ is electronic waste in a nice cabinet.

In the same way, you can still buy a 1929 Model A Ford and get it serviced and repaired, because there is a small but loyal following for these cars, and gaskets, bearings, carburetors, and other parts can still be sourced, refurbished, or fabricated. You can even restore a 1929 Duesenberg and have yourself an elegant town car—if you have enough time, patience, money, and access to a good machinist. But let a chip burn out in the engine management or fuel injection system of your 2009 Toyota or the charging system of your 2019 Tesla and the chances, over the years, become vanishingly small of finding a replacement. Once the car company notifies you that it will no longer support your model year, you are pretty much on your own, and your treasured vehicle becomes a ton and a half of useless scrap.

In the same way, a document hand-lettered on parchment or printed on acid-free paper can survive for centuries. It will still be readable, even if you require a linguistic expert to translate the words. But recovering the early draft of your novel from a twenty-year-old 5.25-inch floppy disk or a ten-year-old 3.5-inch floppy can now take expert help and, within a couple of years, may not be possible at all. Despite all the standardization the computer industry has made in disk formats and plug sizes and capacities, it’s a sure bet that one day your two-terabyte hard drive or 128-gigabyte USB thumb drive will be unreadable, too.

In the same way, many of us music lovers have had to recreate our collections as vinyl gave way to tape, gave way to disk, gave way to MP3. And our movie collections have moved from VHS—or, spare me, Betamax!—to DVD, to Blu-ray, to MP4.

This is not planned obsolescence or some evil scheme by the record companies to make you buy The White Album again. This is simply the advance of technology. Last year’s hip replacement is just not going to be as good as the metal or ceramic joints that orthopedic surgeons will be installing in the year 2028. You might remember fondly that 1970 Ford Mustang you once had and want to own again, but its engine mechanics, gas mileage, and service life will not be as good as today’s model. And when your 2019 Mustang bites the dust because of a dead computer chip, the models that will be available then will be even better yet.

As I’ve said before,3 once the Industrial Revolution got underway—then morphed into the Information Age, and is now fast becoming the Automation Revolution—we all got on a fast escalator to the future. Some of it we can already see. Some only science fiction writers can imagine—and sometimes they get it wrong.4 But it’s all going to be amazing.

1. Moore’s law says—or used to say, starting back in 1965—that the processing power of computer chips doubles every two years. I don’t know if that still holds, because the effort to cram ever more transistors onto a silicon wafer is now approaching physical limits. And new concepts in quantum computing may bring along even greater advances in computing power.

2. See Gutenberg and Automation from February 20, 2011.

3. So many times I won’t bother with a reference.

4. See, for example, Robert A. Heinlein’s early fiction, where we travel in space but still have to program electromechanical computers the size of a room.

Sunday, May 26, 2019

Allocating Scarcity


Teasing with apple

In economics, the primary facts of life—and the first set of curves you learn—involve supply and demand. The value of any good or service is fixed by the intersection of the supply curve, from great availability to great scarcity, with the demand curve, from scant interest to intense need or desire. Dirt is of little value, because you can pick it up anywhere and nobody—other than gardeners, and they require a special quality of organic content—has much use for it. Diamonds have great value because they are relatively rare and almost everybody—at least those who love jewelry or seek a store of value—wants one. The physical space where the supply curve (the seller) and the demand curve (the buyer) meet is called a marketplace.
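
As a toy illustration of that intersection, here is a minimal sketch in Python using made-up linear curves; it simply searches for the price at which the quantity sellers will offer equals the quantity buyers will take. The curves are invented for the example and describe no real market.

```python
# Toy illustration of a market equilibrium: the price where a rising supply
# curve crosses a falling demand curve. Both curves are invented linear
# examples, not data about any real good.

def demand(price: float) -> float:
    """Quantity buyers want, falling as price rises."""
    return max(0.0, 100.0 - 2.0 * price)

def supply(price: float) -> float:
    """Quantity sellers offer, rising as price rises."""
    return max(0.0, 4.0 * price - 20.0)

# For these lines, algebra gives 100 - 2p = 4p - 20, so p = 20 and quantity = 60.
# A brute-force search over prices finds the same crossing point.
equilibrium = min((p / 100.0 for p in range(0, 10001)),
                  key=lambda p: abs(demand(p) - supply(p)))
print(f"equilibrium price ~ {equilibrium:.2f}, quantity ~ {demand(equilibrium):.1f}")
```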

One of the enduring problems of human society is how to allocate scarce and highly desirable goods and services. A system of pure barter doesn’t cut it. How many bushels of wheat must a farmer grow to trade for a diamond to put on his fiancée’s finger? And what is the local jeweler going to do with the suddenly acquired hundreds or thousands of bushels after the exchange is made? To facilitate the trade, human societies almost immediately invented the monetary system, establishing physical markers to represent relative value. And those values, again, were set by supply and demand. Diamonds trade for a lot of markers; bushels of wheat trade for comparatively less—unless there’s a farming crisis and famine, in which case wheat might trade like diamonds because everyone has to eat.

Interestingly, it doesn’t matter what the markers are made of. Gold and silver coins have worked for some societies, because their metal content is relatively rare and desirable and so represents a store of innate value. The tribes of eastern North America used beads made from white and purple shells, called wampum, that traded like money—having innate value only because the purest colors were relatively hard to find and the beads were pretty. The Chinese emperors from the 4th century BC to the 20th AD struck copper and bronze coins, called cash, that had value only because the government would redeem them. And the Chinese were the first to use paper printed with special markings as money. The point is, money has value because other people will accept it in exchange for goods and services.

In the Star Trek universe, there is no scarcity. All the energy a space-faring society could need is supplied by matter-antimatter reactions. All the goods they want are provided by pattern replication, from food and clothing to formed metals, and presumably gold and industrial diamonds—everything, curiously, except for the dilithium crystals needed for the energy conversion. Those crystals have to be mined and traded. And in Star Trek they don’t use money, either. All the matter-antimatter converters are presumably owned collectively and the replicators are found everywhere. I suppose people might make some kind of living by inventing and selling unique replicator patterns: a new flavor of ice cream or a new kind of fabric for clothing. But no one aboard the Enterprise seems to be dabbling in this, and they would probably give the patterns away for free as a community good.

The existence of scarcity and the rise of market values and money systems driven by need or desire bother many people. Inevitably, throughout history, those who have been able to garner and stockpile scarce or desirable commodities, or the value markers needed to buy them, have gained power over others in their society. And, always, this garnering and stockpiling have carried the whiff of unfair practice. The person who saves up canned goods and water in a time of disaster, when their neighbors are starving and thirsty, and will only trade them for more money than they can command in good times, is generally despised as a hoarder. The person who gets a government contract to operate iron and bauxite mines in wartime and to build tanks, planes, and ships, while everyone else is scrimping and saving under government rationing, is regarded as a profiteer. Even the movie producer who creates a colorful fantasy full of imaginary lust and action and then, through advertising and the manipulation of public tastes, convinces millions of teenagers to part with their $10 for two hours of vacant, mind-numbing entertainment, is regarded with suspicion.

At various times in our history, small societies have tried through enforced sharing to overcome the advantages some people may gain from scarcity and the opportunities they can reap by manipulation. The feudal manor and the Israeli kibbutz are such places. The cobbler in the manor does not trade the shoes he makes for a certain amount of the farmer’s wheat; he just commits to make shoes for anyone in the community who needs them and then takes his share of the community’s supply of food, housing, and clothing. The same for the miller and baker, the mason, and the weaver. If the community is self-sustaining in the raw materials required to support this communal exchange, everyone is happy. If the community lacks some things—for example, not enough cowhide from the current consumption of beef to provide leather for all the shoes that are needed—then people may do without, or wear last year’s shoes, or go barefoot when the season allows. Fat times and lean are amicably shared.1

All of this works on the small scale, where everyone in the manor, village, or kibbutz knows everybody else. People in these close-knit communities don’t have to look at their neighbors’ dinner tables to see what they’re eating, at their feet to see what they’re wearing, and in their cellars to see if they are hoarding either food or shoes.

But once your society grows larger than, say, the county level, things start to change. Then you don’t know what other people far away have or need, or what their advantages and disadvantages are, and whether or not they may be hoarding the goods you require to survive. The human impulse to share dwindles remarkably the farther you move beyond family, friends, and neighbors. The willingness to sacrifice and do without for the presumed benefit of strangers is not part of the human psychological makeup.

Marx thought that in the perfect society, once the lesson of “From each according to his ability, to each according to his needs” had been learned and absorbed into the human psyche, the need for a state to direct allocation of resources would wither away. People would just go on indefinitely producing whatever they could, sharing the abundance in good times, and enduring their hunger pangs in lean times.2 But human nature does not change with a few economic slogans—or with an armed revolution, state enforcement of quotas, and the liquidation of recalcitrant classes.

Every system of distribution that tries to ignore market values and monetary systems, and that grows bigger than a small town, requires a stern hand to measure regional and national need, set production quotas, and align distribution. And there are two problems with this.

First, it makes no allowance for human variability, for differences in taste and perception. “You like pistachio ice cream, comrade? Too bad, because the popular flavors—and all we have the resources to make—are chocolate and vanilla.” People in the aggregate are numbers, ciphers, zeroes, not real human beings with actual likings and loathings. In a marketplace with available capital in the form of unallocated money, someone would be inspired to figure out the percentage of people who actually like and would be willing to pay a bit extra for pistachio ice cream, then he would build a plant to make enough to serve their desires. That inspiration would create a market niche, provide jobs, and make people … happy. But in a socialist state with production experts and yearly quotas, everyone is overworked just trying to figure out how much chocolate ice cream to make and ship to California—if we even have enough dairy production in Minnesota to support such a frivolous commodity, let alone the refrigeration capacity to keep it cold in the summer—to bother with any marginal taste groups.

Second, those production experts and planners are human beings, too. Even when their work is supported by spreadsheets and computers, they are susceptible to the same frailties and failings as anyone else. The inspiration to make a market niche in pistachio ice cream wouldn’t occur to them, because the state and the other production planners would not let them benefit from the impulse to serve that market segment. Worse, they could too easily see that what would benefit them was offering a small favor to, and accepting a small gratuity from, a particular industry or region or segment of society. It is human nature to look out for oneself, one’s family, and one’s friends before taking care of some hypothetical public good. Unless the production and distribution are run by wholly artificial means—computer intelligences, dispassionate angels, or clockwork mechanisms—the possibilities and opportunities for graft and favoritism will undermine the purest instincts toward fairness and equality.

Scarcity will always exist. Even in a world of unlimited energy generation and pattern replication, there will still be material goods like artworks and handicrafts, services like medical specialties and innovations, and experiences like ziplining through the rainforest or trekking to the peak of Mount Everest that someone will want more than anything else and more than his neighbors might want. There will always be objects of desire that people will crave and fight for, no matter how stoic and rational they are supposed to be.

The only way to accommodate human needs and desires fairly, in a way that will make people happy, is to let the marketplace work. Let people make decisions for themselves about what they will value in life, how hard they want to work for it, and how they will spend their money. Do we occasionally need a regulating hand to see that some people without resources are not left to eat out of dumpsters or starve? Of course, that is the purpose of the safety net, and no one disagrees with it. But there is a huge economic leap from letting public servants watch the marketplace and occasionally make necessary corrections—to giving them total control of the whole mechanism.

Scarcity exists not only in the belly or in physical resources. It also exists in your mind and your desires. And that is a corner of the world and the human spirit that no dispassionate, socialist governmental structure can reach or touch—or satisfy. Nor should it try.

1. And no one in these communal societies needs luxuries like gold and diamonds—except the feudal lord, who can always go raiding for them. If the community does come upon such valuables, the people generally consent to use them to decorate and enrich the local church rather than bedeck only a select few members with jewelry.

2. And note that Marx predicated this method of distribution on the abundance that would be created by “unfettered” production in a fully developed socialist society—presumably without the restraints on labor placed by limited capital, human laziness, and human greed, “after labor has become not only a means of life but life’s prime want.” If this isn’t magical thinking, I don’t know what is.

Sunday, May 19, 2019

Classic Comedy

Tragic mask from the Pio Clementino Museum

When I had finished the draft of my second novel, which became First Citizen, my agent at the time showed it to a friend of hers, who worked in development at MGM studios. He loved the novel—actually taking a copy of the manuscript with him on a trip to China so he could keep reading it—and thought it could be made into a movie. But first, he said, I would have to find a central story line through my long and convoluted plot. The book follows the main character from birth, through his days as a strong, idealistic, and innocent young man, to his final victory as a corrupt warlord in a broken country, a man who ravishes his own daughter at one point and is, by the end, a bit crazy.

I never thought of this book in terms of Greek drama as either a tragedy or a comedy. The plot was intended to tell the story of Julius Caesar in modern dress—and take the United States along for the ride in the persona of Republican Rome at its fall. Depending on what you think of Caesar, a man who achieved everything he wanted and still ended up dead, that story has tragic elements—except that, unlike the fall of a Greek protagonist, Caesar never achieved a clear, personal understanding of his faults and their consequences. But it was a dark book.

To satisfy this young producer, I would have to condense the essence or core of my story into a two-hour screenplay. I would also have to write the first draft myself. He was generous enough to spend time on the phone with me and gave me a pitch as to what the modern movie screenplay must contain.

All movies, successful or not, he said, in order to be made must conform to the three-act structure. In this form, the first act shows the main character living his or her normal life, going about business, and not aware of or prepared for any serious trouble. At the end of the first act, the protagonist’s world comes apart: a loved one is taken, a fortune is lost, the character is hoodwinked and betrayed by a trusted friend, or endures a similar catastrophe. He or she then spends the whole of the second act trying to recover what has been stolen or lost, but strong enemies, personal failures, and circumstances beyond control prevent any return to equilibrium—“the very earth rises up against him,” in the words of the young producer. At the end of the second act, the character discovers the key: an ally, a plan, a weapon, or a secret that will lead to a successful resolution. And in the third act, that ally, plan, weapon, or secret creates the opportunity for the final confrontation. In that confrontation, the protagonist must physically contend with his or her nemesis, the author of the initial loss, and “go mano a mano with him, like Holmes and Moriarty at Reichenbach Falls”—again in the producer’s words. And, needless to say, the protagonist wins.

The pattern is simple: loss, struggle, and recovery in three acts.

To prepare for this assignment, I read two books that the producer recommended, classics on the subject: Syd Field’s Screenplay: The Foundations of Screenwriting and his The Screenwriter’s Workbook: Exercises and Step-by-Step Instructions for Creating a Successful Screenplay. Field’s instructions confirmed everything the producer had said. The books also described the format in which the pages must be typed so that one page of screenplay, whether covering dialogue or action, equals one minute of screen time. Field also gave explicit instructions, down to the page number, for how long each act must be.1 Yes, you are telling a story, but you are also fitting it into a formula that movie producers and directors can recognize and accept.

I wrote the screenplay, taking the story line from the middle of the book, and truth to tell, it wasn’t successful.

For one thing, as a writer I am a novelist; my instinct is to make a scene come alive in the reader’s mind, including necessary features and descriptions to make the world become real. But a screenplay is sparse, just suggesting the setting with a word or two (e.g., “EXT. STREET SCENE”) and portraying the action with a sketch (“They fight”). To offer a description at the depth to which I’m accustomed is doing the work of the set dresser and the director or choreographer. To give such a detailed description implies that the camera must linger over the scenic elements or action for longer than the story requires. For another thing, I tend to write dialogue in detail, hearing and reproducing the spoken words in the character’s unique voice and with careful inflection. But the screenplay is supposed to be subdued, with the dialogue simply revealing the facts necessary to advance the plot, giving only hints of the emotions involved. The actors and the director must have the freedom to interpret the lines and add appropriate voicing and mannerisms. Writing mere sketches of scene, action, and dialogue is hard for me.

But beyond that, I had a hard time reconciling the three-act structure with what I know about the origins of drama from my high-school and university courses in English literature.

In Greek drama, which is the earliest form in the Western tradition, a story that resolves itself by restoring the characters to their original, happy condition is structurally a comedy. The characters may have learned a thing or two from the action of the plot, and along the way they might have developed skills they did not have before. But their personalities and fates are not forced to change at any significant depth. Although they may have passed through a crisis, it was external to their nature. And they have not learned anything substantial about who they really are. In Lysistrata, for example, the women on both sides of the Peloponnesian War oppose aggression and withhold their sexual favors in order to force the men in their lives to the negotiating table, and in this they are successful. Not a surprise.

In contrast, the essence of tragedy is that the protagonist suffers a downfall but that, in doing so, he or she learns the truth about his or her true nature or situation. What the person thought and believed before has been shattered, and the character changes in reaction to this new understanding. In Oedipus Rex, the king discovers the sins of patricide and incest that he has unwittingly committed and for which Thebes has suffered a plague, but the discovery destroys him as king. In Antigone, the tyrant Creon learns that his harsh punishment of the girl for disobeying his orders about burying her rebellious brother has only ennobled the girl and at the same time destroyed his family. In Hamlet, the young prince pursues vengeance against his uncle for murdering his father and in doing so destroys himself and everyone around him.

Tragedy is not mere bloodletting, or bathos—subsiding into pointless action for its own sake. Nor is it unrestrained pity and sadness, or pathos—calling for our sympathy or empathy but not a deeper understanding of the character’s nature. Tragedy has a specific purpose, to show us the quality of a human being struggling with adversity and losing, and thereby becoming a better equipped, more enlightened, better prepared—although ultimately destroyed—person. Tragedy has something to teach us about nobility, persistence, the human spirit, and ultimately about wisdom. Comedy just takes us around in a circle, sometimes to a better place, but not to being a better person.2

Comedies are fun. Comedies make you happy for the main character and his or her present situation. They can even have you leaving the theater feeling better about the world—“uplifted” I believe is the term. Most important, comedies can give you, the viewer, the frisson, the temporary thrill, of destruction and loss without making you pay the emotional price—like a good ghost story when you know that the ghosts are all in your imagination. Comedies sell popcorn.

Tragedies are a downer. People die and they don’t come back in the third act. Heroes suffer and the only thing they can do is learn from it, because there is no sudden turning of tables, switching of the poisoned drinks, or shifting of the knife to the untrapped hand that makes everything come out all right. The world is a hard place. People make mistakes, sometimes knowingly but often not. And sometimes shit happens. The hero has to deal with all that and face the consequences of his or her actions bravely, without collapsing into an emotional puddle. Tragedies have you leaving the theater thinking about what you’ve seen but maybe not so excited about coming back right away. Tragedy puts you off popcorn for a while.

I sometimes wonder if our culture hasn’t lost something essential to the human experience by having our dominant entertainment form molded out of the comedic structure. The hero wins and everyone is happy but no one has learned anything real.3

Because I don’t subscribe to conspiracy theories—or not much—I hesitate to say that Hollywood is foisting these happy-making stories upon the public to keep us all emotional children. Instead, I think Hollywood is responding to the national mood, which is that things are going really well for most people in this country—or at least for the people willing to spend ten bucks to sit in a theater for two hours and then spend ten more at the concession stand—and so why treat them to a downer of a movie that shows them how hard life can be and what a person can learn from it?

But if your life is a steady diet of comedies, where the hero always wins, where pluck and perseverance overcome all obstacles, where your opponents are always villains or gloating fools, and everything comes out all right in the end, then you remain an emotional child. Because sometimes the world collapses around you, and all you can do is put your head down and try to survive.

1. As I recall, the screenplay must never be longer than 120 pages. The first act must end by page 28 and the second by page 87.

2. The closest recent movies have come to the concept of tragedy, I think, is Jessica Chastain’s portrayal in Miss Sloane, where she knowingly destroys her own career to make a point.

3. In this way, I suppose video games can actually be more true to life and susceptible to tragedy than a movie. You can lose the game, virtually die, and be forced to examine your assumptions, your skills, your strategies, or your purpose in playing.

Sunday, May 12, 2019

The Great American Tragedy

Confederate reenactors

In the sense of Greek drama, where the hero is brought low by a singular failing usually complicated by hubris, or overweening pride, in my view the great American tragedy is the moral and philosophical position of the average Confederate during the Civil War. Here were good, honorable, even noble people fighting valiantly in defense of a cause that they called “state rights” but was actually the right to preserve an immoral economic system, slavery. And they were brought low.

While most large landowners in the Old South owned and worked slaves, the majority of its white citizens did not.1 Small farmers, merchants, government officials, journalists—all the categories of modern civilized life—and most of the Confederate soldiers had no use for holding a human being as a chattel good. They may not have disapproved of the practice, because the economy of their region, the labor-intensive process of growing cotton as a cash crop, depended on slavery. But the average person had no real need to own, feed, discipline, and occasionally buy or sell, another human being. And when the harsh reality of that activity does not impress itself upon a person daily, it becomes easy to dismiss it into another part of the mind.

But it was there, at the root, the main difference between North and South, and the reason for the secessionist disjunction that led to four years of bitter war.

The essence of Greek tragedy is that good and admirable people can sometimes do bad things. It is not that they act through sheer ignorance—as Oedipus did when he murdered his father and married his mother—because that would be a fool’s fate.2 What sets the Greek hero’s tragedy in motion is that sense of pride, hubris, the belief that he or she is stronger and more important than the gods themselves. From pride comes misplaced values, which lead to unclear vision, stubbornness, anger, and often revenge. The resolution of the tragic story is for the individual to see and understand these errors—to understand oneself and one’s place in the order of things—and prepare to do better next time, even if there is no “next time.” The essence of the story for the tragic hero is to “know thyself,” gnōthi seauton, which was the injunction written above the door to the oracle at Delphi.

One might ask why, if I am going to choose a tragic story for America, I do not address the dispossession, isolation, and ultimate destruction of the native populations and their culture, which lasted longer and took more lives than the Civil War.

My basic reason is that this act, harrowing and sorrowful as it may have been, was not in any sense a Greek tragedy. While we might feel pathos—an outflowing of pity and compassion—for the native people, their suffering and downfall did not result from an internal character flaw. They were simply caught on the wrong side of history. The Native Americans were, for the most part, Stone Age people living a hunter-gatherer lifestyle, although some of the cultures in the Southwest pursued agricultural and herding technologies, planting corn and raising sheep, which only brought them up to the rest of the world circa 3,500 BC. They were facing a European culture that employed a scientific, mechanized, weaponized infrastructure five thousand years in advance. As the old saying goes, the tribes never fired a gun or a round of ammunition that they had made for themselves. The only sin of pride they exhibited was thinking they could stem the flood of white soldiers and settlers into their lands with bows and arrows or captured weapons. And this was not pride so much as a lack of knowledge. They went to war—understandably, because fighting was in their nature—before they really understood how advanced, determined, and numerous the European population was and what level of technology and scientific infrastructure it had at its disposal.

And on the other side—those soldiers and settlers pushing westward from Plymouth Rock under the flag of “Manifest Destiny”—there was no tragedy because, in a word, they won. They might at times have felt bad about it, and many still do today. But the outcome was always inevitable, as described above. And most of the Europeans directly involved did not feel so bad, either, because the natives were hardy and cunning warriors, lacking the gentility and sense of fair play that often comes with technical and cultural advancement. The natives would wipe out entire settler parties and villages when they could, sometimes engaging in the barbarous torture and mutilation of survivors.3 That tends to make the winning side less compassionate, even if from some points of view they were at moral fault.4 Still, the result was not a tragedy for the Europeans. For those settlers and villagers who were wiped out, their fate resulted from either a lack of military preparation in a wartime situation or an abundance of optimism and faith in their own personal strength, but not a tragic character flaw. For the European culture as a whole, there was no tragedy because they were never, in the aggregate, brought low.

For either the native population or the European invaders, the injunction to “know thyself” would merely have reinforced their basic cause and inspired them to fight harder.

But in the Civil War, there was tragedy in abundance. Like any Greek protagonist, the average Confederate citizen or soldier might have looked around, considered the true nature of slavery from the viewpoint of the slave, as well as the Abolitionists’ moral viewpoint informed by two thousand years of civilized Christian thinking and writing, and decided that secession and war were not the appropriate course of action. And for this lack of introspection, they suffered a tragic downfall.

Which brings us to the current day …

In the coming hot civil war—when the culture clash that divides our country boils over—the same moral question, the same potential for tragedy, will face one side or the other. Will the conservative part of America go down in defense of a tradition, the precepts of Western Civilization, that the progressives consider to be outmoded, oppressive, illiberal, imperialist, racist, misogynist, and exclusionary, being on the wrong side—the pernicious side—of history? Or will the progressive part of America go down in defense of a principle, the tenets of Democratic Socialism—or pure socialism, or Marxism, or whatever flavor of utopian collectivism inspires them—that the conservatives consider to be repressive, totalitarian, illusory, a fantasy, and a failed system wherever it’s been tried before, no matter what side of history you believe you inhabit?

One side or the other will have abundant moral and philosophical teachings to consider in opposition to their choice of action. And one side or the other will lose, tragically.

1. According to an analysis of recent claims about the number of slaveholders in the U.S., only about a fifth of individuals and families, and just a quarter of households, in the slaveholding states in 1860 owned slaves. So the vast majority of southerners may have supported the “peculiar institution” but did not directly participate in it.

2. In the Oedipus story, the root of the trouble was his parents abandoning him on a hillside as a baby—with his ankles pinned together, and therefore his name, “swollen foot”—because of a prophecy that he would one day destroy them. The fault was that of his father and mother, who tried to evade the will of the gods. Oedipus himself was at fault only for his innate anger, which led him to kill an old man at a crossroads, and his pride, which led him to marry the newly widowed queen. But in the root cause, he was merely an instrument of the gods.

3. As to the practice of taking scalps, my understanding is that the Native Americans did not initiate this rite but were taught it by the British during the French and Indian Wars. Scalping was usually fatal to the victim and was a convenient way to count heads taken some distance away in order to collect a royal bounty. In the Native American cultures, especially on the Great Plains, the bravest warriors were respected for their ability to “count coup”—from the French for “strike”—or ride up to an opponent and touch him without killing him. That was an act of honor.

4. For more on this conflict, see Retroactive Prime Directive from September 30, 2018.

Sunday, May 5, 2019

Wisdom and Ambiguity

One conception of Purgatory

In a recent conversation a friend of mine, a former Catholic priest and theologian, described a mutual acquaintance who has suffered from severe medical issues with some near-death crises and who had asked him for an exact definition and description of Purgatory. My Catholic friend tried to explain that the concept had various references in church teaching, as well as in popular literature such as Dante Alighieri’s The Divine Comedy, but that an exact description and placement in the physical universe was not possible.1 This answer apparently distressed our acquaintance, who seems to want everything in his life—and presumably in the life to come—to be clear and precise.

I don’t know the complete teaching of this religion, but my sense is that Purgatory is not a place but rather a state of the mind or the soul. It is the condition of a person examining his or her past life and coming to an understanding of him- or herself, in order to become fit for Heaven. It’s a spiritual pause, not a physical jail with locks and bars. It’s a waystation, but still not a place like Heathrow’s old International Transit Lounge—which I once described as “Hell, with carpeting.”

This appears to be one of the problems with the concept of belief. How deep does it go? How real does it have to be? How much of a roadmap of the mind and one’s expectation of the future does it become?2 And this is not a problem just with our mutual acquaintance.

Most people want the things in their life—especially those on which they depend and sometimes stake their lives—to be simple, black and white, offering only either/or. They want their laws to be precisely defined, with no loopholes or conditional phrasing through which a miscreant might wriggle. They want their accounting to be precise to the penny, a snapshot of the money pile, with no slippery temporal concepts like “cash flow” and “net present value.” They want the “good people” to be pure and true in all things, and the evil “others” to be irredeemably damned with no saving graces.

This is a convenient way of thinking. It puts the things a person has to consider on a daily basis into neat boxes with definite labels. Those who think this way distrust ambiguity as an opportunity for concepts, objects, and people to get away, have a life of their own, tell lies, and turn themselves inside out.

But life is never like that. It’s mostly complicated, saturated with grays, confronting us with both/and. After all, the black-and-white version is purely a mental construct, the desire of the human mind to make various aspects of life simple, compact, easy to remember, and easy to deal with. The things that are hard to define and judge are usually—like Purgatory for most people—the things we can safely put off into a distant future or treat as a difficult but seldom encountered exception.

Wanting to live in a simple and exact world, a world of absolute laws, pure positives, unquestioned negatives, without distinctions or conditions, is the beginning of fundamentalism. The fundamentalist wants the words of his or her religious scripture to mean exactly what they say, without interpretation, without dispute, without question. The puritan wants the intentions of the god he or she believes in to be made clear at all times. He or she wants a simple story that anyone can—and must—follow.

This is, of course, impossible. Human language is never simple. It never offers exact meanings. The more precisely the author tries to describe a thing, the more he or she must pick from among different words with various denotations (that is, their meaning as defined in the dictionary) and their even more slippery connotations (the inferences that readers and hearers may draw from the word).3 To speak precisely is to speak about definite things, individual things, that are unique and not like other things. The more precisely one speaks, the less the subject under discussion may apply generally, to concepts, objects, and people who are nearly so but not exactly so. And the purpose of most writing and speech—particularly of a religious nature—is to cast a wide net of meaning, rather than to focus down to a unique and specific instance, eschewing all other possibilities.

As an atheist, I also understand that anything written or spoken about any god is still a product of the human mind. Religious scholars may believe that the words are divinely inspired—either dictated directly into the scribe’s ear by an angel or from the lips of the supreme god him- or herself, or else developed by a writer or speaker in the grip of religious fervor or under the influence of religious faith—but those words are still subject to human understanding and phrased in human language. Since the human mind is not identical to the mind of an angel or a god, a certain amount of interpretation, paraphrase, summarizing, and cultural coloration must take place. So even the “Word of God” is still a form of hearsay.

This, in my mind, is not a bad thing. If there is indeed only one god in the universe, and this supreme being speaks with perfect accuracy on every subject to every hearer, and these hearers all write down the pronouncement with complete fidelity, then human history would be absolutely uniform, locked into a singular vision for all time, and incapable of growing, moderating, or advancing. Intellectual evolution and discovery would be as impossible as physical evolution would be in a world where each animal was created only once, in perfect form, by the hand of that god, and incapable of adapting to shifts in climate, topography, and plate tectonics.

Change and adaptation are a rule of life and of the universe that we can see all around us. Animals adapt to environmental niches, the rocks themselves weather and dissolve, stars explode and collapse. Nothing is fixed and unchanging. And nothing is simple, true, and immutable for all time.

To me, this is the beginning of wisdom, to understand that life and the universe are a gray area. That ambiguity is the natural state of nature. That much of what we see and interpret around us is dependent on the accumulated experiences, memories, and cultural dictates that we bring to our observations. That the rattlesnake is not evil because his fangs are full of venom, any more than the rabbit is good because his teeth and claws are relatively harmless even in defense.

A tolerance for this ambiguity, the ability to put off judgment and delay the demand for clear meaning, to my mind is the essence of wisdom. This is the sign that the person is willing to see multiple meanings, different interpretations, alternate viewpoints, and parallel realities. This is not to say that the human mind can never choose among them or is forever lost in a hall of conceptual mirrors. Ultimately, the mind and the individual must choose what is good, right, and proper in each instance. But that choice should come only after detecting, examining, and testing the possible meanings and available viewpoints.

And sometimes, especially in matters of meaning and physical reality, it is best to accept that ambiguity is the state of nature, and the individual has no compelling reason to make any choice at this time.

1. I remember once hearing, as a child, that the Reverend Billy Graham had given a precise description of Heaven, down to its location in the stratosphere and measured acreage. I also heard later that he wisely denied having that information.

2. In this context, see also Belief vs. Knowledge from April 7, 2019.

3. In a philosophical debate, no one is so tiresome as the fellow who reaches for the dictionary definition of a term, as if that were the authority. Dictionaries are compiled from common and observed usage—how the aggregate of speakers and writers treat a word—and not from the dictates of some scholarly academy. This is why a really good dictionary is full of different meanings, some of them changing over time and some outright contradicting the sense of other definitions of the same word.

Sunday, April 28, 2019

On Quitting

Depression

I have managed to overcome two major addictions in my life: the first to smoking, the second to drinking. Aside from eating, breathing, and writing, the only other addiction I’m aware of is to coffee. And I love the bean so much—chocolate, too, come to think of it—that it will have to be proven deadly on the same scale as cyanide before I will consider quitting.

My smoking started in college, freshman year. My parents had always smoked cigarettes, a pack a day and more each. And they were very smart people: when at about the ages of seven and six my brother and I noticed their habit and inquired when we could begin smoking, my mother agreed that we could have one cigarette apiece each year. And she stuck to her word. She took us into the bathroom and made us smoke an entire unfiltered Camel down to the butt, with us going green and gasping all the way. That should have made devout non-smokers of her sons for life.1

This was all before the Surgeon General’s 1964 report, of course. The upward trend in tobacco use tapered off after that. But I came to my freshman year in 1966, just two years after that landmark warning. And after those sick-making bathroom scenes, I never would have started smoking cigarettes, anyway. But I was now at the university, an English major, attracted to the tweedy, scholastic life, and an admirer of Oxford don C. S. Lewis. So I tried pipe smoking and, after the body’s initial revulsion—those first few puffs are always sour and nauseating—liked the effect.

Tobacco is soothing, a relaxing almost-high without the sensory or mental distortion of other drugs. It was the English major’s perfect accompaniment for hours of sitting in my dorm room reading assigned literature. At first, like most pipe smokers, I didn’t inhale. But, of course, smoke gets into your lungs anyway, because you are sitting in a cloud of it. After a year or two I was inhaling as much as any cigarette smoker—plus getting an occasional sip of utterly disgusting condensate from the pipe stem. But I was hooked and smoked an ounce or more of Douwe Egbert’s Amphora Mild Cavendish tobacco each day, about the equivalent of a pack and a half of cigarettes.

Amphora was an unflavored blend, unlike some tobaccos that were more popular with the college crowd at the time and loaded with perfumes. I occasionally liked a cigar, too. When I went home for Christmas and summer vacation, my parents tolerated the pipe smoke because they were still using cigarettes, but they banned cigars in the house. And, truth to tell, I only smoked them there as a minor form of rebellion, because of this negative reaction.

I smoked for eight years—four in college, four in my early working years, when it was still acceptable to smoke at your desk and most co-workers didn’t object. But I could sense that something was wrong. If it wasn’t the pinhole burns in my shirts from the occasional fall of hot ash, the stains on my fingers from cleaning that awful residue out of the pipe,2 or the nasty brown stains on my teeth and rimming my nostrils, it was an all-over sick feeling I would get after a day of heavy smoking.

I tried quitting and could do so for a week or two at a time. But then something stressful would happen, and I would reach for the pipe and its soothing effects. I always promised myself I would cut down, but within a day or two I was smoking my daily ounce and a half of Amphora.

When you’re in college you tend to neglect things like regular doctor and dentist checkups. After joining the working world, I had to start taking better care of myself and soon found a local dentist. He went after the layer of tar built up on the inner surfaces of my teeth—about as thick as a good asphalt road—with scrapers and drills, grunting and muttering curses all the while. Him: “Did that hurt?” Me: “Well, come to mention it …” Him: “Good.” But after years of buildup, I walked out of his office with suddenly clean teeth. I thought I would have another four to six years of happy smoking before I got another workout like that. But the next visit was the same ordeal, because the tobacco tar is sticky and binds well with enamel.

After that, I decided that I would surprise my dentist by quitting for good and keeping these newly clean teeth bright and sparkly at my next visit. This was in the days before nicotine gum and patches, smoking cessation clinics, and other psychological and medical help. To quit, you had to go cold turkey. Luckily—and I say this advisedly—I had just taken up a new job as technical editor at an engineering and construction company. My task was to edit the writing of project engineers for grammar, spelling, and consistency and then oversee document production through typing, proofing, printing, binding, and delivery. We were slamming out one or two major projects per editor per week, with documents that represented millions of dollars in engineering proposals and technical reports and that had to be delivered on time, to the day and hour, halfway across the world, or else they became so much wastepaper. The job was so stressful that my resolve to quit smoking was tested again and again. And yet I managed to stay off the pipe.3

When I went back to my dentist six months later, my teeth were still clean—well, except for coffee stains, but he didn’t complain about those, because coffee stains are brittle and easier to clean. I’ve managed never to take up the habit again, although for about fifteen years after quitting, I would have what I call “smoking dreams.” In the dream, no matter what else was happening, I had gone back to smoking and regretted it. I would not know I was really awake until I could self-check and remember that, yes, I did quit and, no, I haven’t started again. The nicotine addiction takes that much of a hold on your subconscious mind.

My drinking also started in college, although later, when I turned twenty-one and it was legal for me to go into a bar and order a beer. Unlike many of my classmates, it never occurred to me to get a fake driver’s license and try to drink before the legal age, and I was never in the fraternity crowd, where the keg was open to the whole house. Since my parents were also regular drinkers, taking one or two dry martinis—very dry, with zero-percent vermouth content—each evening, I knew about alcohol. When I was a child, my father would let me eat the gin-soaked lemon peel or pimento-stuffed olive out of his martini. And later, as a teenager, I had made a few forays into their cupboard to try the sweeter liqueurs. But I wasn’t then a regular drinker.

In college as a junior and senior I would go to the bars on Saturday nights with my friends and drink beer. After college, I kept up with the beer but also tried gin and vodka, and after a couple of years I settled on red wine as my favorite tipple. I would drink beer only with the foods that didn’t go well with wine. My consumption stabilized at about a bottle of red wine a night, and part of this was habit and part economy. An opened bottle really wasn’t viable and wouldn’t keep for more than about twenty-four hours in the refrigerator. So it just became easier to drink it all. But some weeks at work were harder than others, and by Thursday night I might be drinking half of a jug of “tank car red” and feeling the effects the next morning. Occasionally, as a special treat, I might down most of a bottle of Irish whiskey and then fight the next day to maintain my equilibrium.4

But I never graduated to drinking in the morning to “cure” the hangover. And I generally did not drink at lunchtime. I waited until I got home in the evening and had nothing more planned before I started drinking … unless I had a couple of glasses of wine or beer when we went out for dinner. Still, aside from the mammoth effort it took—pure cussedness and sheer will power—for me to get up and function the next morning, I was starting to get that generally sick feeling. My body knew I was again overindulging in something that was not good for me.

And again, it was medical service that cued my change of heart. I had just signed up with a new doctor, and part of his medical questionnaire for new patients asked how many drinks—counting one shot of liquor, one five-ounce glass of wine, or one twelve-ounce beer as a single drink—I took per week. That was crafty, because it made me count up my consumption in a different way. I came out with twenty-eight drinks a week. His first question during the physical exam was, “How long have you had this drinking problem?” I started to say, “I don’t consider it a problem”—and realized that is what every alcoholic says.

Yes, as before with the smoking, I had made efforts to cut down or quit. But cutting down never really happened, due to that sense of frugality about wasted wine. And although I could quit for a few days at a time, something always happened to make me start again. I once went all of six months without drinking, during which I finished my first publishable novel and sent it to an agent. But on the night she told me she had sold it, I celebrated with a glass or two of red wine. Within a week I was back to drinking a bottle a night.

Once again, I walked out of the new doctor’s office with a vow that I would show him by never drinking again. And luckily, at my corporate communicator’s job, I had just taken on a massive project, to consolidate the company’s weekly employee newsletter with its monthly magazine for both employees and retirees, and negotiate all the changes to scheduling and officer reviews in the process. I managed to finish that project without coming home at night and sliding into a bottle of red wine. And I have been sober for the past thirty-four years.5

What have these two experiences of quitting these two addictions—finally quitting, for good, and not just cutting down—taught me that I can pass along?

First, the mind is a monkey. And second, the body wants its candy.

During the early months when I was consciously not drinking, I found my subconscious mind—the voice that speaks out of the darkness—suggesting things like, “It’s been a hard day, you’ll feel better with a drink.” Or, “You did really good work today, you deserve a drink.” And finally, “You’ve gone a whole week without drinking, why don’t you take a drink to celebrate?” If you pay attention to the ideas that pop into your head and track them back into your subconscious, as I was trained to do as a writer, you can start linking these thoughts together. After a while, the absurdity got to me. Whatever the situation, however I felt, the formulation was the same: you need to drink. Ah ha! The body wants its candy—the drink, the smoke—and the mind is the willing, deceitful manipulator that will try to trick you into taking that first sip or puff.

So I adopted a simple rule, simple but iron clad and unbreakable: “Don’t put it in your mouth. If you find it in your mouth, spit it out.” I took up drinking diet soda instead of beer or wine at night, and now I drink flavored mineral water for health reasons. But some things just go better with beer and wine, so I would allow myself a “non-alcoholic” beer or wine (no more than 0.5% alcohol content) on those special occasions. I drank sparkling apple cider when available—and once at a party, when I picked up a glass of champagne and took a sip before smelling it, I did spit it out, put the glass down, and went back for the cider. I also agonized about foods, especially desserts, that might have uncooked alcohol in them, but I soon reasoned that the amounts were small—about that 0.5%—and this was technically “eating” and not “drinking.”

In each case, I quit on the cold-turkey method: simply stopping and making a rule for myself not to start again. Not everyone can do this. And I have no problem with people who need Alcoholics Anonymous and related organizations, with their Twelve Steps and Twelve Traditions and their wonderful community of peer support. But that regimen—especially the relinquishing of self before God, even when masked as a “Higher Power”—was not for me. My higher power was my mother’s voice, echoing in my head, when I imagined her regarding her drunken, tobacco-besotted son and saying, “You’re better than that!”

But the thing you learn in life is that everything is messy, and sometimes you just have to go with whatever works.

1. This was before my brother developed asthma, and being trapped in the car with two smoking parents was torture for him. So he never took up the practice.

2. I soon favored a pipe made of ceramic and compressed carbon, which did not build up a layer of char in the bowl. It was easily cleaned with rubbing alcohol.

3. It was a good thing I quit smoking, too, because soon afterward I met the woman who would become my wife. She was a dedicated non-smoker who told me, “Kissing a smoker is like licking a dirty ashtray.” Point taken.

4. A college roommate had advised me to always, before going to bed, drink a big glass of water with two aspirin and two high-dose vitamin C tablets. This regimen generally kept me viable the next morning.

5. Interestingly, I never had “drinking dreams” after quitting. That may tell you something about the relative strength of these two addictions.

Sunday, April 21, 2019

A Definition of Decadence

Roman decadence

What is your definition of “decadence”? In the popular imagination, I would guess, it’s something like ancient Rome under the Caesars or the Ancien Régime in France before the Revolution of 1789. That is, rich people lying around, having orgies, eating honey cakes and larks’ tongues, and still drunk at midmorning from binging the night before. That is, everybody who counts in society—especially the nomenklatura—allowing themselves free rein to be lustful, slothful, and generally good for nothing.

Human beings, both as individuals and in societies, have a hard time with satiety, with being fed to the point of mere capacity. We have difficulty with the concept of “enough.” This goes back to a hundred thousand years or more of our hunter-gatherer heritage. When you live by picking up whatever you can find in the bush or kill on foot with a spear and a sharp stone, life is either feast or famine. Berries and tender shoots are not always in season. Game migrates out of the area or goes into hibernation. So when you make a big killing or stumble onto an acre-wide bramble laden with ripe fruits, you chow down. You eat to the point of bursting, rest a while, and start over again. You put on fat because the lean times, the hunger times, are just around the corner.

Self-regulation is a learned art, a form of self-discipline, as anyone who has tried to follow a diet knows. It takes will power to stop when you are full—sometimes to even hear the “I’m full now” message that your stomach and your endocrine system may be sending.

We see this in America today: too many fat people waddling around, suffering the slow death of obesity and diabetes, or dying quickly from heart attacks and intestinal cancers. The explanation you usually hear—too much corn syrup in our processed foods, too much sugar in our soft drinks—may be right, but it’s also simplistic. It smacks, too, of a conspiracy theory in which the food and beverage industries are systematically trying to kill us for profit.

I put a different interpretation on this problem. In the last sixty or seventy years, as the richest nation to come out of World War II, we Americans have undergone an economic and cultural change. When I was growing up, fresh fruits still tended to be seasonal. You bought apples, berries, and oranges at certain times of the year, and for the rest you ate apple sauce, spread jam on your toast, and drank processed orange juice. My grandmother still put up preserves in the summer and fall, and she had basement shelves full of Mason jars filled with green beans, corn, and other products of her garden. Del Monte and other companies also packed the grocery stores with the products of orchard and field that had been cooked, canned, and sealed.

Today, we still eat industrially processed foods but we also have gardens and orchards that extend around the world. We get apples from Washington State in the fall and from Chile in the winter and spring. You can get fresh pineapple, avocados, and other delicacies the year round. Thanks to refrigeration and mechanized delivery, you can eat Maine lobster anywhere in the United States, and you can get good sushi—which depends on absolute freshness—in the center of the continent in Kansas City. These are the benefits of being a capitalist empire in the middle of a world willing to deliver its harvest to your door.

Something else that has changed is our culture during the last sixty years or so. When I was growing up, we mostly ate dinner at home. And while we ate well, the food was mostly sustenance: pork chops, chicken livers, franks and beans, stews and casseroles, with the occasional steak and French fries. We might go out to a restaurant once or twice a month, usually in connection with a birthday or other family celebration. Good restaurants were still few and far between in most towns, or an hour distant in the nearby city, and fast-food franchises like McDonald’s were just getting under way and still a novelty.

We ate sensibly because my mother understood nutrition and had to keep a budget. But with the erosion of the nuclear family and people no longer sitting down together to a home-cooked meal six or seven nights a week, our culture has changed. Every suburb and small town has a dozen competing franchises where you can get delicious, rich and tasty, fatty, salty, party-style food in ten minutes or less. You can have ice cream and pastries with every meal. You can chow down every day of your life.

I don’t mean to say this is necessarily a bad thing. Our system of exchange dictates that if people want to eat party foods exclusively, where once they had to wait for a holiday to indulge, then someone, somewhere will figure out how to make a fortune giving them exactly what they want. I am not such a churl as to blame the caterer and the franchise operator for supplying that bounty, even when people’s endocrine systems should be telling them, “Whoa, stop, you’ll be able to get more of this tomorrow,” and their stomachs are telling them, “Hey, mouth, I’m groaning here!”

We live in a country where everyone is rich, compared to historical standards, and temptation is all around us. And what I wrote above for food applies equally to liquor, drugs, entertainments, recreational opportunities, personal freedom, and the leisure time to enjoy all of them. Our tribe is sitting in the middle of the world’s berry patch, with plump partridges just walking up and begging to be eaten, while the finest minstrels play their lyres and sing pleasing songs in our ears. And I am not such a churl as to suggest that this is a bad thing. But I don’t wonder that we’re all growing fat, a bit lazy, and just a tad careless.

But that’s only one definition of decadence. Another kind is when things are going so well in society that people—especially in the upper and supposedly knowledgeable and sophisticated classes, the nomenklatura—can afford to break the rules, flout social conventions, and disparage the principles on which their society was founded. Along with eating too many rich, party-style foods, too many of our best people seem to assume that the rules don’t apply to them. They act as if their position of privilege in the world, or at least in their part of it, implies an opportunity to scoff at the rules the rest of us follow and the institutions the rest of us respect.

I saw the start of this in the late 1960s at the university, where bright children from good homes, who were admittedly allowed out into the wider world for the first time, became users of banned substances—mostly marijuana—and acquired false identities—usually a driver’s license—so that they could drink below the statutory age. Yes, these were minor infractions to most people, but these young adults were knowingly breaking the law. And when a larger law with more life-threatening consequences, the Selective Service Act, dictated that the government could forcibly recruit them to fight and perhaps die in a war they didn’t agree with, many of them assumed false lifestyles or fled the country to avoid what most other people considered a sacred duty. The sense that they were bound by the conventions and laws—however justified—of the society that gave them special privileges just was not there.

You can see the same sense of lawlessness today in politicians, sports figures, and celebrities who appear to think their place in society allows them to mock what the rest of us may think important and hold dear. They believe the rules the rest of us follow are not made for them.1 Some of our most prominent politicians have recently characterized the part of the population in the “flyover states,” those less sophisticated and progressive in their views, as “deplorables” and “bitter clingers to guns and religion.” This is to disrespect people who otherwise continue in their decent, law-abiding lives.

When your life is easy, when rewards and riches are all around and available for the taking, when the winds of change are blowing in your direction, it becomes easy to imagine that you are special, that the rules don’t apply to you, that the institutions and social conventions that put you in your current position really aren’t so important. And if you were not fortunate enough to have a mother and father who told you, as most of us did, that you aren’t special or more deserving than anyone else, that you have to wait your turn, that you must mind your p’s and q’s—then you might forget that most important aspect of a democratically run republic. And you might be lulled into forgetting that reality, if not karma, has a way of snapping back hard.

This, too, is a kind of decadence, born of being too rich for too long in a normal world. And this kind is worse for the soul—and the nation—than anything having to do with sex, drugs, and rich foods.

1. Without drawing my readers into a political fight, I remember finishing James B. Stewart’s Blood Sport: The President and His Adversaries, about the political harassment that President Clinton and his wife were experiencing in the mid-1990s. What struck me at the time of reading was the catalog of activities that the family was charged with—profiting from delayed orders on cattle futures, attempting to buy a country bank so that they could make themselves favorable real-estate loans, claiming a full tax deduction on losses they shared equally with a partner—and how Stewart dismissed these actions as trivial. But they were indeed infractions of established law. If other, more normal people had done these things, they would have risked prosecution. What made it worse, to me, was that both of the Clintons were trained and admitted to the bar as attorneys, yet nobody in their circle lifted their heads to say, “Gee, you know, this is against the law.”

Sunday, April 14, 2019

AI and Emotion

Robot head

In the extended Star Trek series, Mr. Spock and the other Vulcans portray a rigid adherence to pure logic1 and a rejection, or active repression, of their humanoid emotions. This sort of character presents an attractive gravitas: sober, thoughtful, consistent, dependable, undemanding, and loyal. It would seem that, if human beings could just be cleansed of those fragile, distracting, interfering emotions, they would be made more focused, more intelligent, and … superior.

Certainly, that is one of the attractions of the computer age. If you write a program, test and debug it properly, and release it into the world, it will usually function flawlessly and as designed—apart from misapplication and faulty operation by those same clumsy humans. Known inputs will yield known outputs. Conditional situations (if/then/else) will always be handled consistently. And, in the better programs, if unexpected inputs or conditions are encountered, the software will return an error or default result, rather than venturing off into imagined possibilities. Computers are reliable. Sometimes they are balky and frustrating, because of those unknown inputs and aberrant conditions, but they are always consistent.
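A minimal sketch of that determinism, in Python, with a function name and categories that are my own invention rather than anything from a particular program:

    def classify_reading(value):
        # Known inputs map to known outputs; the same if/then/else branches
        # are taken the same way every time.
        if value < 0:
            # Unexpected input: return an explicit error result rather than
            # venturing off into imagined possibilities.
            return "error: reading out of range"
        elif value < 10:
            return "low"
        elif value < 100:
            return "normal"
        else:
            return "high"

    # The same input always yields the same output.
    assert classify_reading(42) == "normal"
    assert classify_reading(-5).startswith("error")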

Our computer science is now entering the phase of creating and applying “artificial intelligence.” Probably the most recognizable of these attempts—in the real world, rather than in the realm of science fiction—is IBM’s Watson computer. This machine was designed to play the television game show Jeopardy. For this effort, its database was filled with facts about and references to history, popular culture, music, science, current events, geography, foreign languages—all the subjects that might appear on the game board. It was also programmed with language skills like rhymes, alliterations, sentence structure, and the requisite grammatical judo of putting its answer in the form of a question. Although I don’t know the architecture of Watson’s programming myself, I would imagine that it also needed a bit of randomness, the leeway to run a random-number generator now and then—effectively rolling the dice—to make a connection between clue and answer based on something other than solid, straight-line reference: it occasionally had to guess. And it won.
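Not knowing Watson’s actual design, I can only gesture at that dice-rolling with a toy sketch: candidate answers carry confidence scores, and when none of them clears a threshold, the program guesses among the front-runners rather than staying silent. The names, threshold, and scores below are invented purely for illustration.

    import random

    def choose_answer(candidates, threshold=0.8):
        # candidates: list of (answer, confidence) pairs, confidence between 0 and 1
        best_answer, best_score = max(candidates, key=lambda pair: pair[1])
        if best_score >= threshold:
            # Confident, straight-line connection between clue and answer.
            return best_answer
        # Otherwise, roll the dice among answers within striking distance of the best.
        front_runners = [answer for answer, score in candidates
                         if score >= best_score - 0.1]
        return random.choice(front_runners)

    print(choose_answer([("What is Troy?", 0.91), ("What is Sparta?", 0.55)]))
    print(choose_answer([("What is Troy?", 0.48), ("What is Sparta?", 0.45)]))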

IBM is now using a similar computer architecture with Watson Analytics to examine complex accounting and operational data, identify patterns, make observations, and propose solutions to business users. Rather than having a human programmer write a dedicated piece of software that identifies anticipated conditions or anomalies in a specified data field, this is like having a person with a huge memory, fast comprehension, and no personal life at all look at the data and offer insights. Such “expert systems” from other vendors are already analyzing patient x-rays and sieving patient symptoms and biometrics from physical and laboratory testing against a database of diseases to identify a diagnosis and recommend a course of treatment.
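The sieving step in such a system can be pictured, in a vastly simplified and entirely hypothetical form, as scoring each entry in a knowledge base by how many of the observed findings it accounts for. The conditions and symptoms here are placeholders, not medical fact and not any vendor’s actual method.

    # Toy sketch only: invented placeholder conditions and findings.
    KNOWLEDGE_BASE = {
        "condition A": {"fever", "cough", "fatigue"},
        "condition B": {"fatigue", "joint pain"},
        "condition C": {"cough", "shortness of breath"},
    }

    def rank_conditions(findings):
        # Score each condition by the fraction of its known signs that were observed.
        findings = set(findings)
        scores = {name: len(findings & signs) / len(signs)
                  for name, signs in KNOWLEDGE_BASE.items()}
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    print(rank_conditions(["fever", "cough"]))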

And for all these applications, you want an emotionless brain box that sticks to the facts and only rolls the random numbers for an intuitive leap under controlled conditions. When you’re examining a business’s books or a patient’s blood work, you generally want a tireless Mr. Spock rather than a volatile Dr. McCoy.

But the other side of artificial intelligence is the holy grail of science fiction: a computer program or network architecture that approximates the human brain and gives human-seeming responses. This isn’t an analytical tool crammed with historical or medical facts to be applied to a single domain of analysis. It’s the creation of a life form that can resemble, emulate, and perhaps actually be a person.2

IBM’s Watson has no programmed sense of self. This is because it never has to interface directly, intelligently, or empathically with another human being, just objectify and sift data. Emotions—other than the intuitive leaps of that random-number generator—would only get in the way of its assignments. And this is a good thing, because Watson is never going to wake up one day, read a negative headline about the company whose operations it’s analyzing, and decide to skew the data to crash the company’s stock. Watson has no self-awareness—and no self-interest to dabble in the stock market—to think about such things. Similarly, a Department of Defense program based on chess playing skills and designed to analyze strategic scenarios and game out a series of responses—“Skynet,” if you will—is not going to suddenly wake up, understand that human beings themselves are the ultimate threat, and “decide our fate in a microsecond.” All of that retributive judgment would require the program to have a sense of self apart from its analysis. It would need awareness of itself as a separate entity—an “I, Watson” or “I, Skynet”—that has goals, intentions, and interests other than the passive processing of data.

But a human-emulating intelligence designed to perform as a companion, caregiver, interpreter, diplomat, or some other human analog would be required to recognize, interpret, and demonstrate emotions. And this is not a case where a program relying on a database of recorded responses to hypothetical abstractions labeled as “love,” “hate,” or “fear” could then fake a response. Real humans can sniff out that kind of emotional fraud in a minute.3 The program would need to be self-aware in order to place its own interactions, interpretations, and responses in the context of another self-aware mind. To credibly think like a human being, it would need to emulate a complete human being.

In this condition, emotions are not an adjunct to intelligent self-awareness, nor are they a hindrance to clear functioning. Emotions are essential to human-scale intelligence. They are the result of putting the sense of self into real, perceived, or imagined situations and experiencing a response such as fear, anxiety, confusion, attraction and love, or repulsion and hate. In the human mind, which is always considering itself in relation to its environment, that response is natural and automatic. If the mind is defending or protecting the sense of identity or personal security, a fear or anxiety response is natural to situations that imply risk or danger. If the mind is engaging the social impulse toward companionship, community, or procreation, a love or hate response is natural to situations that offer personal association.

Emotions are not just a human response, either. Even animals have emotions. But, just as their intelligence is not as sophisticated as that of human beings, and their sense of self is more limited, so their emotions are more primitive and labile. My dog—who does not have complete self-awareness, or at least not enough to recognize her own image in a mirror, which she mistakes for another dog—still feels, or at least demonstrates, joy at the word “walk,” contentment and even love when she’s being stroked, confusion when my tone of voice implies some bad action on her part, and shame when she knows and remembers what that action was. She also puts her tail between her legs and runs off, demonstrating if not actually feeling fear, when I put my hand in the drawer where I keep the toenail clippers.4

Emotions, either as immediate responses to perceived threats and opportunities, or enduring responses to known and long-term situations, are a survival mechanism. In the moment, they let the human or animal brain react quickly to situations where a patient course of gathering visual, audible, or scent cues and thoroughly interpreting or analyzing their possible meaning would be too slow for an appropriate response. In the longer term, emotional associations provide a background of residual reinforcement about decisions we once made and reactions we once had and that we would benefit from remembering in the moment: “Yes, I love and am allied with this person.” “No, I hate and distrust this person.” “Oh, this place has always been bad for me.” Emotions bring immediately to the forefront of our awareness the things we need to understand and remember. As such, emotions are part of our genetic evolution applied to the structure and functioning of our animal brains.

Any self-aware artificial intelligence—as opposed to the mute data analyzers—will incorporate a similar kind of analytical short cut and associational recall. Without these responses, it would be crippled in the rapid back and forth of human interaction, no matter how fast its analytical capabilities might be.

And yes, the Vulcans of Star Trek were subject to the deepest of human emotions. Or else how could they have called anyone a friend or been loyal to anything at all—even to themselves?

1. And to science, as if the one demanded the other. While our current approach to science is an expression of logic and reasoning, any scientist will tell you there are also leaps of imagination and intuition. And as Lewis Carroll demonstrated, logic and its exercises can also be adapted to fantasy and whimsy.

2. I wrote stories about this, although in more compact form based on a fantasy version of LISP software, with the novels ME: A Novel of Self-Discovery and ME, Too: Loose in the Network.

3. Consider how we respond to people who lack “emotional intelligence,” such as those with certain types of autism or a sociopathic personality. No matter how clever they are, a normal person after a certain amount of interaction will know something is amiss.

4. And this reaction is also highly situational. When I go to that drawer each morning for her hair brush and toothpaste in our daily grooming ritual, or each evening for my coffee filters after pouring water into the coffee maker (yeah, same drawer, long story), she has no bad reaction. But let me touch that drawer in the early evening, when I generally cut her toenails every two or three months, and accidentally rattle the metal clippers—and she’s gone.