Sunday, April 28, 2019

On Quitting

Depression

I have managed to overcome two major addictions in my life: the first to smoking, the second to drinking. Aside from eating, breathing, and writing, the only other addiction I’m aware of is to coffee. And I love the bean so much—chocolate, too, come to think of it—that it will have to be proven deadly on the same scale as cyanide before I will consider quitting.

My smoking started in college, freshman year. My parents had always smoked cigarettes, a pack a day and more each. And they were very smart people: when, at about the ages of seven and six, my brother and I noticed their habit and inquired when we could begin smoking, my mother agreed that we could have one cigarette apiece each year. And she stuck to her word. She took us into the bathroom and made us smoke an entire unfiltered Camel down to the butt, with us going green and gasping all the way. That should have made devout non-smokers of her sons for life.1

This was all before the Surgeon General’s 1964 report, of course. The upward trend in tobacco use tapered off after that. But I came to my freshman year in 1966, just two years after that landmark warning. And after those sick-making bathroom scenes, I never would have started smoking cigarettes, anyway. But I was now at the university, an English major, attracted to the tweedy, scholastic life, and an admirer of Oxford don C. S. Lewis. So I tried pipe smoking and, after the body’s initial revulsion—those first few puffs are always sour and nauseating—liked the effect.

Tobacco is soothing, a relaxing almost-high without the sensory or mental distortion of other drugs. It was the English major’s perfect accompaniment for hours of sitting in my dorm room reading assigned literature. At first, like most pipe smokers, I didn’t inhale. But, of course, smoke gets into your lungs anyway, because you are sitting in a cloud of it. After a year or two I was inhaling as much as any cigarette smoker—plus getting an occasional sip of utterly disgusting condensate from the pipe stem. But I was hooked and smoked an ounce or more of Douwe Egbert’s Amphora Mild Cavendish tobacco each day, about the equivalent of a pack and a half of cigarettes.

Amphora was an unflavored blend, unlike some tobaccos that were more popular with the college crowd at the time and loaded with perfumes. I occasionally liked a cigar, too. When I went home for Christmas and summer vacation, my parents tolerated the pipe smoke because they were still using cigarettes, but they banned cigars in the house. And, truth to tell, I only smoked them there as a minor form of rebellion, because of this negative reaction.

I smoked for eight years—four in college, four in my early working years, when it was still acceptable to smoke at your desk and most co-workers didn’t object. But I could sense that something was wrong. If it wasn’t the pinhole burns in my shirts from the occasional fall of hot ash, the stains on my fingers from cleaning that awful residue out of the pipe,2 or the nasty brown stains on my teeth and rimming my nostrils, it was an all-over-sick feeling I would get after a day of heavy smoking.

I tried quitting and could do so for a week or two at a time. But then something stressful would happen, and I would reach for the pipe and its soothing effects. I always promised myself I would cut down, but within a day or two I was smoking my daily ounce and a half of Amphora.

When you’re in college you tend to neglect things like regular doctor and dentist checkups. After joining the working world, I had to start taking better care of myself and soon found a local dentist. He went after the layer of tar built up on the inner surfaces of my teeth—about as thick as a good asphalt road—with scrapers and drills, grunting and muttering curses all the while. Him: “Did that hurt?” Me: “Well, come to mention it …” Him: “Good.” But after years of buildup, I walked out of his office with suddenly clean teeth. I thought I would have another four to six years of happy smoking before I got another workout like that. But the next visit was the same ordeal, because the tobacco tar is sticky and binds well with enamel.

After that, I decided that I would surprise my dentist by quitting for good and keeping these newly clean teeth bright and sparkly at my next visit. This was in the days before nicotine gum and patches, smoking cessation clinics, and other psychological and medical help. To quit, you had to go cold turkey. Luckily—and I say this advisedly—I had just taken up a new job as technical editor at an engineering and construction company. My task was to edit the writing of project engineers for grammar, spelling, and consistency and then oversee document production through typing, proofing, printing, binding, and delivery. We were slamming out one or two major projects per editor per week, with documents that represented millions of dollars in engineering proposals and technical reports and that had to be delivered on time, to the day and hour, halfway across the world, or else they became so much wastepaper. The job was so stressful that my resolve to quit smoking was tested again and again. And yet I managed to stay off the pipe.3

When I went back to my dentist six months later, my teeth were still clean—well, except for coffee stains, but he didn’t complain about them, because they were brittle and easier to clean. I’ve managed never to take up the habit again, although for about fifteen years after quitting, I would have what I call “smoking dreams.” In the dream, no matter what else was happening, I had gone back to smoking and regretted it. I would not know I was really awake until I could self-check and remember that, yes, I did quit and, no, I haven’t started again. The nicotine addiction takes that much of a hold on your subconscious mind.

My drinking also started in college, although later, when I turned twenty-one and it was legal for me to go into a bar and order a beer. Unlike many of my classmates, I never thought to get a fake driver’s license and try to drink before the legal age, and I was never in the fraternity crowd, where the keg was open to the whole house. Since my parents were also regular drinkers, taking one or two dry martinis—very dry, with zero-percent vermouth content—each evening, I knew about alcohol. When I was a child, my father would let me eat the gin-soaked lemon peel or pimento-stuffed olive out of his martini. And later, as a teenager, I made a few forays into their cupboard to try the sweeter liqueurs. But I wasn’t then a regular drinker.

In college as a junior and senior I would go to the bars on Saturday nights with my friends and drink beer. After college, I kept up with the beer but also tried gin and vodka, and after a couple of years I settled on red wine as my favorite tipple. I would drink beer only with the foods that didn’t go well with wine. My consumption stabilized at about a bottle of red wine a night, and part of this was habit and part economy. An opened bottle really wouldn’t keep for more than about twenty-four hours in the refrigerator. So it just became easier to drink it all. But some weeks at work were harder than others, and by Thursday night I might be drinking half of a jug of “tank car red” and feeling the effects the next morning. Occasionally, as a special treat, I might down most of a bottle of Irish whiskey and then fight the next day to maintain my equilibrium.4

But I never graduated to drinking in the morning to “cure” the hangover. And I generally did not drink at lunchtime. I waited until I got home in the evening and had nothing more planned before I started drinking … unless I had a couple of glasses of wine or beer when we went out for dinner. Still, aside from the mammoth effort it took—pure cussedness and sheer will power—for me to get up and function the next morning, I was starting to get that generally sick feeling. My body knew I was again overindulging in something that was not good for me.

And again, it was a medical visit that cued my change of heart. I had just signed up with a new doctor, and part of his medical questionnaire for new patients asked how many drinks—with one shot of liquor, one five-ounce glass of wine, or a twelve-ounce beer equal to one drink—I took per week. That was crafty, because it made me count up my consumption in a different way. I came out with twenty-eight drinks a week. His first question during the physical exam was, “How long have you had this drinking problem?” I started to say, “I don’t consider it a problem”—and realized that is what every alcoholic says.

Yes, as before with the smoking, I had made efforts to cut down or quit. But cutting down never really happened, due to that sense of frugality about wasted wine. And although I could quit for a few days at a time, something always happened to make me start again. I once went all of six months without drinking, during which I finished my first publishable novel and sent it to an agent. But on the night she told me she had sold it, I celebrated with a glass or two of red wine. Within a week I was back to drinking a bottle a night.

Once again, I walked out of the new doctor’s office with a vow that I would show him by never drinking again. And luckily, at my corporate communicator’s job, I had just taken on a massive project, to consolidate the company’s weekly employee newsletter with its monthly magazine for both employees and retirees, and negotiate all the changes to scheduling and officer reviews in the process. I managed to finish that project without coming home at night and sliding into a bottle of red wine. And I have been sober for the past thirty-four years.5

What have these two experiences of quitting two addictions—finally, for good, and not just cutting down—taught me that I can pass along?

First, the mind is a monkey. And second, the body wants its candy.

During the early months when I was consciously not drinking, I found my subconscious mind—the voice that speaks out of the darkness—suggesting things like, “It’s been a hard day, you’ll feel better with a drink.” Or, “You did really good work today, you deserve a drink.” And finally, “You’ve gone a whole week without drinking, why don’t you take a drink to celebrate?” If you pay attention to the ideas that pop into your head and track them back into your subconscious, as I was trained to do as a writer, you can start linking these thoughts together. After a while, the absurdity got to me. Whatever the situation, however I felt, the formulation was the same: you need to drink. Ah ha! The body wants its candy—the drink, the smoke—and the mind is the willing, deceitful manipulator that will try to trick you into taking that first sip or puff.

So I adopted a simple rule, simple but ironclad and unbreakable: “Don’t put it in your mouth. If you find it in your mouth, spit it out.” I took up drinking diet soda instead of beer or wine at night, and now I drink flavored mineral water for health reasons. But some things just go better with beer and wine, so I would allow myself a “non-alcoholic” beer or wine (no more than 0.5% alcohol content) on those special occasions. I drank sparkling apple cider when available—and once at a party, when I picked up a glass of champagne and took a sip before smelling it, I did spit it out, put the glass down, and went back for the cider. I also agonized about foods, especially desserts, that might have uncooked alcohol in them, but I soon reasoned that the amounts were small—about that 0.5%—and this was technically “eating” and not “drinking.”

In each case, I quit on the cold-turkey method: simply stopping and making a rule for myself not to start again. Not everyone can do this. And I have no problem with people who need Alcoholics Anonymous and related organizations, with their Twelve Steps and Twelve Traditions and their wonderful community of peer support. But that regimen—especially the relinquishing of self before God, even when masked as a “Higher Power”—was not for me. My higher power was my mother’s voice, echoing in my head, when I imagined her regarding her drunken, tobacco-besotted son and saying, “You’re better than that!”

But the thing you learn in life is that everything is messy, and sometimes you just have to go with whatever works.

1. This was before my brother developed asthma, and being trapped in the car with two smoking parents was torture for him. So he never took up the practice.

2. I soon favored a pipe made of ceramic and compressed carbon, which did not build up a layer of char in the bowl. It was easily cleaned with rubbing alcohol.

3. It was a good thing I quit smoking, too, because soon afterward I met the woman who would become my wife. She was a dedicated non-smoker who told me, “Kissing a smoker is like licking a dirty ashtray.” Point taken.

4. A college roommate had advised me to always, before going to bed, drink a big glass of water with two aspirin and two high-dose vitamin C tablets. This regimen generally kept me viable the next morning.

5. Interestingly, I never had “drinking dreams” after quitting. That may tell you something about the relative strength of these two addictions.

Sunday, April 21, 2019

A Definition of Decadence

Roman decadence

What is your definition of “decadence”? In the popular imagination, I would guess, it’s something like ancient Rome under the Caesars or the Ancien Régime in France before the Revolution of 1789. That is, rich people lying around, having orgies, eating honey cakes and larks’ tongues, and still drunk at midmorning from binging the night before. That is, everybody who counts in society—especially the nomenklatura—allowing themselves free rein to be lustful, slothful, and generally good for nothing.

Human beings, both as individuals and in societies, have a hard time with satiety, with being fed to the point of mere capacity. We have difficulty with the concept of “enough.” This goes back to a hundred thousand years or more of our hunter-gatherer heritage. When you live by picking up whatever you can find in the bush or kill on foot with a spear and a sharp stone, life is either feast or famine. Berries and tender shoots are not always in season. Game migrates out of the area or goes into hibernation. So when you make a big killing or stumble onto an acre-wide bramble laden with ripe fruits, you chow down. You eat to the point of bursting, rest a while, and start over again. You put on fat because the lean times, the hunger times, are just around the corner.

Self-regulation is a learned art, a form of self-discipline, as anyone who has tried to follow a diet knows. It takes will power to stop when you are full—sometimes to even hear the “I’m full now” message that your stomach and your endocrine system may be sending.

We see this in America today: too many fat people waddling around, suffering the slow death of obesity and diabetes, or dying quickly with heart attacks and intestinal cancers. The diagnosis you hear—too much corn syrup in our processed foods, too much sugar in our soft drinks—may be right, but it’s also simplistic. It smacks, too, of a conspiracy theory in which the food and beverage industries are systematically trying to kill us for profit.

I put a different interpretation on this problem. In the last sixty or seventy years, as the richest nation to come out of World War II, we Americans have undergone an economic and cultural change. When I was growing up, fresh fruits still tended to be seasonal. You bought apples, berries, and oranges at certain times of the year, and for the rest you ate apple sauce, spread jam on your toast, and drank processed orange juice. My grandmother still put up preserves in the summer and fall, and she had basement shelves full of Mason jars filled with green beans, corn, and other products of her garden. Del Monte and other companies also packed the grocery stores with the products of orchard and field that had been cooked, canned, and sealed.

Today, we still eat industrially processed foods but we also have gardens and orchards that extend around the world. We get apples from Washington State in the fall and from Chile in the winter and spring. You can get fresh pineapple, avocados, and other delicacies the year round. Thanks to refrigeration and mechanized delivery, you can eat Maine lobster anywhere in the United States, and you can get good sushi—which depends on absolute freshness—in the center of the continent in Kansas City. These are the benefits of being a capitalist empire in the middle of a world willing to deliver its harvest to your door.

Our culture has also changed during the last sixty years or so. When I was growing up, we mostly ate dinner at home. And while we ate well, the food was mostly sustenance: pork chops, chicken livers, franks and beans, stews and casseroles, with the occasional steak and French fries. We might go out to a restaurant once or twice a month, usually in connection with a birthday or other family celebration. Good restaurants were still few and far between in most towns, or an hour distant in the nearby city, and fast-food franchises like McDonald’s were just getting under way and still a novelty.

We ate sensibly because my mother understood nutrition and had to keep a budget. But with the erosion of the nuclear family and people no longer sitting down together to a home-cooked meal six or seven nights a week, our culture has changed. Every suburb and small town has a dozen competing franchises where you can get delicious, rich and tasty, fatty, salty, party-style food in ten minutes or less. You can have ice cream and pastries with every meal. You can chow down every day of your life.

I don’t mean to say this is necessarily a bad thing. Our system of exchange dictates that if people want to eat party foods exclusively, where once they had to wait for a holiday to indulge, then someone, somewhere will figure out how to make a fortune giving them exactly what they want. I am not such a churl as to blame the caterer and the franchise operator for supplying the indulgence, when it is people’s own endocrine systems that should be telling them, “Whoa, stop, you’ll be able to get more of this tomorrow,” and their own stomachs that are telling them, “Hey, mouth, I’m groaning here!”

We live in a country where everyone is rich, compared to historical standards, and temptation is all around us. And what I wrote above for food applies equally to liquor, drugs, entertainments, recreational opportunities, personal freedom, and the leisure time to enjoy all of them. Our tribe is sitting in the middle of the world’s berry patch, with plump partridges just walking up and begging to be eaten, while the finest minstrels play their lyres and sing pleasing songs in our ears. And I am not such a churl as to suggest that this is a bad thing. But I don’t wonder that we’re all growing fat, a bit lazy, and just a tad careless.

But that’s only one definition of decadence. Another kind is when things are going so well in society that people—especially in the upper and supposedly knowledgeable and sophisticated classes, the nomenklatura—can afford to break the rules, flout social conventions, and disparage the principles on which their society was founded. Along with eating too many rich, party-style foods, too many of our best people seem to assume that the rules don’t apply to them. They act as if their position of privilege in the world, or at least in their part of it, implies an opportunity to scoff at the rules the rest of us follow and the institutions the rest of us respect.

I saw the start of this in the late 1960s at the university, where bright children from good homes, who were admittedly allowed out into the wider world for the first time, became users of banned substances—mostly marijuana—and acquired false identities—usually a driver’s license—so that they could drink below the statutory age. Yes, these were minor infractions to most people, but these young adults were knowingly breaking the law. And when a larger law with more life-threatening consequences, the Selective Service Act, dictated that the government could forcibly recruit them to fight and perhaps die in a war they didn’t agree with, many of them assumed false lifestyles or fled the country to avoid what most other people considered a sacred duty. The sense that they were bound by the conventions and laws—however justified—of the society that gave them special privileges just was not there.

You can see the same sense of lawlessness today in politicians, sports figures, and celebrities who appear to think their place in society allows them to mock what the rest of us may think important and hold dear. They believe the rules the rest of us follow are not made for them.1 Some of our most prominent politicians have recently characterized the part of the population in the “flyover states,” those less sophisticated and progressive in their views, as “deplorables” and “bitter clingers to guns and religion.” This disrespects people who are simply going about their decent, law-abiding lives.

When your life is easy, when rewards and riches are all around and available for the taking, when the winds of change are blowing in your direction, it becomes easy to imagine that you are special, that the rules don’t apply to you, that the institutions and social conventions that put you in your current position really aren’t so important. And if you were not fortunate enough to have a mother and father who told you, as most of us did, that you aren’t special or more deserving than anyone else, that you have to wait your turn, that you must mind your p’s and q’s—then you might forget that most important aspect of a democratically run republic. And you might be lulled into forgetting that reality, if not karma, has a way of snapping back hard.

This, too, is a kind of decadence, born of being too rich for too long in a normal world. And this kind is worse for the soul—and the nation—than anything having to do with sex, drugs, and rich foods.

1. Without drawing my readers into a political fight, I remember finishing James B. Stewart’s Blood Sport: The President and His Adversaries, about the political harassment that President Clinton and his wife were experiencing in the mid-1990s. What struck me at the time of reading was the catalog of activities that the family was charged with—profiting from delayed orders on cattle futures, attempting to buy a country bank so that they could make themselves favorable real-estate loans, claiming a full tax deduction on losses they shared equally with a partner—and how Stewart dismissed these actions as trivial. But they were indeed infractions of established law. If other, more normal people had done these things, they would have risked prosecution. What made it worse, to me, was that both of the Clintons were trained and admitted to the bar as attorneys, yet nobody in their circle lifted their heads to say, “Gee, you know, this is against the law.”

Sunday, April 14, 2019

AI and Emotion

Robot head

In the extended Star Trek series, Mr. Spock and the other Vulcans portray a rigid adherence to pure logic1 and the rejection or active repression of their humanoid emotions. This sort of character presents an attractive gravitas: sober, thoughtful, consistent, dependable, undemanding, and loyal. It would seem that, if human beings could just be cleansed of those fragile, distracting, interfering emotions, they would be made more focused, more intelligent, and … superior.

Certainly, that is one of the attractions of the computer age. If you write a program, test and debug it properly, and release it into the world, it will usually function flawlessly and as designed—apart from misapplication and faulty operation by those same clumsy humans. Known inputs will yield known outputs. Conditional situations (if/then/else) will always be handled consistently. And, in the better programs, if unexpected inputs or conditions are encountered, the software will return an error or default result, rather than venturing off into imagined possibilities. Computers are reliable. Sometimes they are balky and frustrating, because of those unknown inputs and aberrant conditions, but they are always consistent.
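
To make that concrete, here is a minimal sketch of the kind of behavior I mean. It is my own toy example, not taken from any real program; the function name and its thresholds are invented for illustration.

```python
# A toy illustration of deterministic program behavior: the same input
# always yields the same output, each condition is handled the same way
# every time, and anything unexpected falls back to an explicit error
# rather than an improvised answer. Purely hypothetical example.

def classify_reading(value):
    """Label a numeric reading, or report an error for unexpected input."""
    if not isinstance(value, (int, float)):
        return "error: non-numeric input"   # unexpected input -> error/default result
    if value < 0:
        return "error: out of range"        # aberrant condition -> error/default result
    elif value < 50:
        return "normal"                     # known input -> known output
    elif value < 80:
        return "elevated"
    else:
        return "critical"

# Run it a thousand times with the same inputs and you get the same answers.
assert classify_reading(42) == "normal"
assert classify_reading(95) == "critical"
assert classify_reading("n/a") == "error: non-numeric input"
```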

Our computer science is now entering the phase of creating and applying “artificial intelligence.” Probably the most recognizable of these attempts—in the real world, rather than in the realm of science fiction—is IBM’s Watson computer. This machine was designed to play the television game show Jeopardy. For this effort, its database was filled with facts about and references to history, popular culture, music, science, current events, geography, foreign languages—all the subjects that might appear on the game board. It was also programmed with language skills like rhymes, alliterations, sentence structure, and the requisite grammatical judo of putting its answer in the form of a question. Although I don’t know the architecture of Watson’s programming myself, I would imagine that it also needed a bit of randomness, the leeway to run a random-number generator now and then—effectively rolling the dice—to make a connection between clue and answer based on something other than solid, straight-line reference: it occasionally had to guess. And it won.
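
If I had to sketch that guessing step in code, it might look something like the toy function below. To be clear, this is pure speculation on my part and not IBM's actual design; the candidate answers, confidence scores, and threshold are all invented for illustration.

```python
# A speculative sketch of "occasionally having to guess": pick the
# best-scored candidate when the program is confident enough, otherwise
# roll the dice among the near-misses. Not Watson's real architecture.
import random

def choose_answer(candidates, threshold=0.7):
    """candidates: list of (answer, confidence) pairs, confidence in 0..1."""
    best_answer, best_score = max(candidates, key=lambda pair: pair[1])
    if best_score >= threshold:
        return best_answer                  # solid, straight-line reference
    # No candidate is a sure thing, so guess among the top contenders.
    contenders = [answer for answer, score in candidates if score >= best_score - 0.1]
    return random.choice(contenders)

# Example with made-up scores: neither answer clears the threshold, so it guesses.
print(choose_answer([("Who is Poe?", 0.55), ("Who is Twain?", 0.52)]))
```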

IBM is now using a similar computer architecture with Watson Analytics to examine complex accounting and operational data, identify patterns, make observations, and propose solutions to business users. Rather than having a human programmer write a dedicated piece of software that identifies anticipated conditions or anomalies in a specified data field, this is like having a person with a huge memory, fast comprehension, and no personal life at all look at the data and draw insights. Such “expert systems” from other vendors are already analyzing patient x-rays and sieving patient symptoms and biometrics from physical and laboratory testing against a database of diseases to identify a diagnosis and recommend a course of treatment.

And for all these applications, you want an emotionless brain box that sticks to the facts and only rolls the random numbers for an intuitive leap under controlled conditions. When you’re examining a business’s books or a patient’s blood work, you generally want a tireless Mr. Spock rather than a volatile Dr. McCoy.

But the other side of artificial intelligence is the holy grail of science fiction: a computer program or network architecture that approximates the human brain and gives human-seeming responses. This isn’t an analytical tool crammed with historical or medical facts to be applied to a single domain of analysis. It’s the creation of a life form that can resemble, emulate, and perhaps actually be a person.2

IBM’s Watson has no programmed sense of self. This is because it never has to interface directly, intelligently, or empathically with another human being, just objectify and sift data. Emotions—other than the intuitive leaps of that random-number generator—would only get in the way of its assignments. And this is a good thing, because Watson is never going to wake up one day, read a negative headline about the company whose operations it’s analyzing, and decide to skew the data to crash the company’s stock. Watson has no self-awareness—and no self-interest to dabble in the stock market—to think about such things. Similarly, a Department of Defense program based on chess playing skills and designed to analyze strategic scenarios and game out a series of responses—“Skynet,” if you will—is not going to suddenly wake up, understand that human beings themselves are the ultimate threat, and “decide our fate in a microsecond.” All of that retributive judgment would require the program to have a sense of self apart from its analysis. It would need awareness of itself as a separate entity—an “I, Watson” or “I, Skynet”—that has goals, intentions, and interests other than the passive processing of data.

But a human-emulating intelligence designed to perform as a companion, caregiver, interpreter, diplomat, or some other human analog would be required to recognize, interpret, and demonstrate emotions. And this is not a case where a program relying on a database of recorded responses to hypothetical abstractions labeled as “love,” “hate,” or “fear” could then fake a response. Real humans can sniff out that kind of emotional fraud in a minute.3 The program would need to be self-aware in order to place its own interactions, interpretations, and responses in the context of another self-aware mind. To credibly think like a human being, it would need to emulate a complete human being.

In this condition, emotions are not an adjunct to intelligent self-awareness, nor are they a hindrance to clear functioning. Emotions are essential to human-scale intelligence. They are the result of putting the sense of self into real, perceived, or imagined situations and experiencing a response such as fear, anxiety, confusion, attraction and love, or repulsion and hate. In the human mind, which is always considering itself in relation to its environment, that response is natural and automatic. If the mind is defending or protecting the sense of identity or personal security, a fear or anxiety response is natural to situations that imply risk or danger. If the mind is engaging the social impulse toward companionship, community, or procreation, a love or hate response is natural to situations that offer personal association.

Emotions are not just a human response, either. Even animals have emotions. But, just as their intelligence is not as sophisticated as that of human beings, and their sense of self is more limited, so their emotions are more primitive and labile. My dog—who does not have complete self-awareness, or not enough to recognize her own image in a mirror, which she mistakes for another dog—still feels, or at least demonstrates, joy at the word “walk,” contentment and even love when she’s being stroked, confusion when my tone of voice implies some bad action on her part, and shame when she knows and remembers what that action was. She also puts her tail between her legs and runs off, demonstrating if not actually feeling fear, when I put my hand in the drawer where I keep the toenail clippers.4

Emotions, either as immediate responses to perceived threats and opportunities, or enduring responses to known and long-term situations, are a survival mechanism. In the moment, they let the human or animal brain react quickly to situations where a patient course of gathering visual, audible, or scent cues and thoroughly interpreting or analyzing their possible meaning would be too slow for an appropriate response. In the longer term, emotional associations provide a background of residual reinforcement about decisions we once made and reactions we once had, which we would benefit from remembering in the moment: “Yes, I love and am allied with this person.” “No, I hate and distrust this person.” “Oh, this place has always been bad for me.” Emotions bring immediately to the forefront of our awareness the things we need to understand and remember. As such, emotions are part of our genetic evolution applied to the structure and functioning of our animal brains.

Any self-aware artificial intelligence—as opposed to the mute data analyzers—will incorporate a similar kind of analytical shortcut and associational recall. Without these responses, it would be crippled in the rapid back-and-forth of human interaction, no matter how fast its analytical capabilities might be.

And yes, the Vulcans of Star Trek were subject to the deepest of human emotions. Or else how could they have called anyone a friend or been loyal to anything at all—even to themselves?

1. And to science, as if the one demanded the other. While our current approach to science is an expression of logic and reasoning, any scientist will tell you there are also leaps of imagination and intuition. And as Lewis Carroll demonstrated, logic and its exercises can also be adapted to fantasy and whimsy.

2. I wrote stories about this, although in more compact form based on a fantasy version of LISP software, with the novels ME: A Novel of Self-Discovery and ME, Too: Loose in the Network.

3. Consider how we respond to people who lack “emotional intelligence,” such as those with certain types of autism or a sociopathic personality. No matter how clever they are, a normal person after a certain amount of interaction will know something is amiss.

4. And this reaction is also highly situational. When I go to that drawer each morning for her hair brush and toothpaste in our daily grooming ritual, or each evening for my coffee filters after pouring water into the coffee maker (yeah, same drawer, long story), she has no bad reaction. But let me touch that drawer in the early evening, when I generally cut her toenails every two or three months, and accidentally rattle the metal clippers—and she’s gone.

Sunday, April 7, 2019

Belief vs. Knowledge

Total honesty

All of us who identify as human have large and complex brains. We are capable—or most of us, anyway1—of holding different and sometimes conflicting thoughts on the same subject. This is because we live on many mental levels.

Our daily experience is structured around a large, capacious, and persistent memory and a system for its recall. We can summon—accurately, we believe2—past events, as well as the experiences and emotions surrounding these events. We can also draw inferences and rules or imagined truths from these pieces of our personal history. Add to this set of “real” memories the “shadow” experiences, different from but concatenated with our real-life experiences, associated with everything that we read, see in movies and plays, or are told by our parents, relatives, friends, and the people we trust. It all goes into the retentive sponge that is our memory.

We also live a good part of our lives in the future. We have an active life in the portion of our brain called the prefrontal cortex. This is the area that controls “executive functions” like decision-making, planning, anticipation, and—because they are usually associated with consequences—personal and social behaviors. Using the prefrontal cortex, we consider current events and map them forward into an imagined future, enabling us to make decisions and plan our future actions. But that executive function also opens up a Pandora’s box of wishes, dreams, and fantasies that can affect our daily lives and intended actions.

What our brain is actually “thinking” at any one time depends on what we bring forth from this stew of past, present, and future beliefs, knowledge, and imagination, either by active, intentional, conscious, thoughtful focus and recall, or by the random firing of related neural circuitry that we associate with the “subconscious mind.”

And so we all live multiple lives that generally can be resolved into what we believe versus what we know to be fact. And most of us are better at adhering to one or the other, depending on the situation. From this point on, I’m going to be treading on some metaphysical toes. If you are easily upset or angered, please stop reading. Anyway, this—like all of my blog postings—is just a thought experiment.

We all—or most of us—tend to believe that we have within ourselves some unending part: a soul, a spark of life, an enduring energy that will continue after our personal death. It will not just continue as a metaphysical force, like a raw radio or light wave, but carry with it our consciousness, our memories, our emotions … everything that makes us a real person except for our physical strength, sensation, and bodily needs. This part will endure for eternity in some place or dimension, and usually there we will meet our parents, family, ancestors, pets, and lost loves. The absurdity of continuing forever in a place that is not-living, not-growing, not evolving—a kind of limbo, however pleasant the circumstances and the company may be—is lost on those of us who so believe. And the idea of meeting not just parents and grandparents but g’g’g’great-to-near-infinity-grandparents whom we never met, going back to the great apes and little fishes of our genetic ancestry, is an aspect we never consider. Still, all of this belief comes to us from the religion we practice, the stories we’ve read, and the insistent looking-forwardness of that prefrontal cortex.

And yet we also know—or most of us—that death has an undeniable finality and stillness to it. Many of us have encountered isolated deaths, either that of a pet or family member as a child or among acquaintances in our extended community as an adult, if not in worse and more memorable circumstances like war and environmental catastrophes. Much as we would like to believe that something eternal is preserved from that ended life, we know on an intellectual level that the dead are not going anywhere and not coming back. Yes, there are stories, plays, and movies about consciousness existing and love enduring beyond the grave. But unless we are so crazed with grief that we try to conjure the dead with the aid of a charlatan, we know that these are just stories. We know that everything comes to an end: plants, animals, people, cities, empires, planets, and stars. The universe is old beyond comprehension and everything in it exists in an impermanent state of flux. So why should our personal selves be any different?

In our current politics, literature, and media environment, we are now bathed in stories of apocalypse, of the end times, of the collapse of civilization, of the destruction of the world. My generation has been living through prophesied doomsdays since we practiced duck-and-cover for nuclear war in grade school. Then it was overpopulation and Malthusian starvation, next Y2K and the collapse of the economy, and finally global warming and rising sea levels. Apocalypse has its attractions: you no longer have to pay rent, get up and go to work, or put up with the daily frustrations of living in a crowded society. It will be every man or woman for him- or herself, and the rules about just killing anyone who annoys you will be automatically rescinded.

Another current political belief is the notion that human nature is somehow defective and that, if we could only change people for the better—make them nicer, kinder, more giving, more reliable, less selfish—then we can achieve utopia here on Earth. It has been tried by several societies, of course, most recently in Venezuela. The utopian ideal is another form of end-times thinking: the end of struggle; the end of nations at war; the end of hunger, poverty, and fear; and the end of history as we know it. Once we achieve this perfect state for humankind, nothing will ever change again.

And yet both of these states—apocalypse and utopia—are fantasies. Yes, catastrophes happen: hurricanes and earthquakes destroy whole towns at a stroke; war and invasion wipe entire civilizations and cultures off the map; and war itself is a long and terrible experience. Yes, healthy and happy societies are occasionally formed and live through a golden age, where almost everyone has something interesting and fulfilling to do in their life, gets enough to eat, and lives in relative peace. But neither state is the end of times or the end of history, and all of them finish up and are replaced by something else. And usually, no one notices or can pinpoint the end of either condition. The Roman Empire took a couple of centuries to fall, and for some people in some places—think of Constantinople—it endured for a thousand years after the sack of Rome in the Western Empire. So while we may indulge fantasies about end times, we all—or most of us—know that history is a process of slow change, that no state or civilization endures without constant revision and reevaluation, both upward and downward, and that most people are now on the upward curve of both spiritual and technological human progress—as we have been since the founding of Sumer in Mesopotamia some seven thousand years ago.

In our current politics and morality, many people—if not most—believe that humanity is divided by race, ethnic affiliation, political or religious views, or some other distinction between ourselves and a presumed “other.” And we can entertain notions that those others, even if they share 99.99% genetic identity with us, are somehow different and less than human. That they don’t have the same human drives, love their children, possess a sense of purpose and dignity, want to earn their living and come home at night, and want their football team to succeed just as much as we do.

At the same time, many people—if not most—believe that all human beings should be equal. This is not just about being treated equally before the law or receiving equal opportunities for education and personal and commercial success. This is the belief that there is not much innate difference among human beings in all groups and conditions, except for those unfortunates with a developmental disability or some form of physical limitation. So therefore any differences in living standards and personal outcomes between individuals and parts of society must be due to that previous belief in racial or ethnic difference and to overt discrimination, or else due to some structural unfairness in society.

And yet anyone with experience in the world knows that no two human beings are the same. Everyone is born with a unique and personal complement of traits, talents, intellectual and emotional strengths and weaknesses, family background and history, genetic inheritance and innate health, and that undefinable element we call “luck.” A fair society can try to compensate for some of the worst and most obvious deficiencies in any of these areas. But nothing can make all of these varied human beings equal in terms of their health, longevity, success, and happiness.

Finally, in our sense of the universe, we all—or most of us—like to believe that our world, our lives, and our fates are ruled by some unseen yet benevolent hand that establishes our current circumstances, foresees all outcomes, and ensures that things will turn out for the better, that right and love will triumph in the end, and that the world and each life in it—or at least my life in it, because I am special—has a definite purpose. Again, this is a residue or distillation of the religion we were taught and the stories we’ve heard. It is also the product of a twitch in the prefrontal cortex that engenders hope.

But we also know from history and from personal experience—unless we deceive ourselves with selective memory—that bad things happen about as often as good, that sometimes innocent people die without reason, and that the finger of evolution is a wandering one that makes ravening wolves as well as gentle deer, and sometimes it also creates a platypus. Life on this planet doesn’t come into being and function because it has a purpose. Life, the union of egg and sperm and all that comes afterward, is the purpose. Species develop in relation to environmental niches for which their genetics have haphazardly adapted them. They exist only for so long as they can, and then they die out. Humans, with their big brains and clever hands, have learned to adapt their technology and culture to many different environments; perhaps one day they will learn to adapt the environment itself, both here and on other planets, to their needs; and eventually they may even adapt their own genetics to environments yet unimagined. Or human beings, too, may die out. And for each of us, if there is a purpose to living, we must find it for ourselves.

We all live on many levels of mental activity; of intellectual curiosity, honesty, and dishonesty; of desire, fear, and hope; and of belief and fantasy. All those levels sometimes override both previous knowledge and common sense. And that complex internal life is also part of the human condition.

1. Here I will allow for different kinds of human mentation, due perhaps to disease, accident, or developmental damage. I cannot know, for example, that a person with severe autism or one whose frontal lobes are destroyed by a stroke engages in the kind of mental activity described here, or whether such a brain experiences reality directly without the filters of belief and knowledge.

2. However, some recent studies suggest we are all susceptible to the phenomenon of “false memory” (see for example “False Memories and How They Form” by Kendra Cherry from 2018). It also seems that a memory is not just recorded once, when the event was experienced, but is re-experienced, shaped, and edited every time it is recalled. This tends to create a “collage” of perception around the memory rather than a fixed and indelible image.