Sunday, April 28, 2019

On Quitting

Depression

I have managed to overcome two major addictions in my life: the first to smoking, the second to drinking. Aside from eating, breathing, and writing, the only other addiction I’m aware of is to coffee. And I love the bean so much—chocolate, too, come to think of it—that it will have to be proven deadly on the same scale as cyanide before I will consider quitting.

My smoking started in college, freshman year. My parents had always smoked cigarettes, a pack a day and more each. And they were very smart people: when at about the ages of seven and six my brother and I noticed their habit and inquired when we could begin smoking, my mother agreed that we could have one cigarette apiece each year. And she stuck to her word. She took us into the bathroom and made us smoke an entire unfiltered Camel down to the butt, with us going green and gasping all the way. That should have made devout non-smokers of her sons for life.1

This was all before the Surgeon General’s 1964 report, of course. The upward trend in tobacco use tapered off after that. But I came to my freshman year in 1966, just two years after that landmark warning. And after those sick-making bathroom scenes, I never would have started smoking cigarettes, anyway. But I was now at the university, an English major, attracted to the tweedy, scholastic life, and an admirer of Oxford don C. S. Lewis. So I tried pipe smoking and, after the body’s initial revulsion—those first few puffs are always sour and nauseating—liked the effect.

Tobacco is soothing, a relaxing almost-high without the sensory or mental distortion of other drugs. It was the English major’s perfect accompaniment for hours of sitting in my dorm room reading assigned literature. At first, like most pipe smokers, I didn’t inhale. But, of course, smoke gets into your lungs anyway, because you are sitting in a cloud of it. After a year or two I was inhaling as much as any cigarette smoker—plus getting an occasional sip of utterly disgusting condensate from the pipe stem. But I was hooked and smoked an ounce or more of Douwe Egbert’s Amphora Mild Cavendish tobacco each day, about the equivalent of a pack and a half of cigarettes.

Amphora was an unflavored blend, unlike some tobaccos that were more popular with the college crowd at the time and loaded with perfumes. I occasionally liked a cigar, too. When I went home for Christmas and summer vacation, my parents tolerated the pipe smoke because they were still using cigarettes, but they banned cigars in the house. And, truth to tell, I only smoked them there as a minor form of rebellion, because of this negative reaction.

I smoked for eight years—four in college, four in my early working years, when it was still acceptable to smoke at your desk and most co-workers didn’t object. But I could sense that something was wrong. If it wasn’t the pinhole burns in my shirts from the occasional fall of hot ash, the stains on my fingers from cleaning that awful residue out of the pipe,2 and the nasty brown stains on my teeth and rimming my nostrils, it was an all-over-sick feeling I would get after a day of heavy smoking.

I tried quitting and could do so for a week or two at a time. But then something stressful would happen, and I would reach for the pipe and its soothing effects. I always promised myself I would cut down, but within a day or two I was smoking my daily ounce and a half of Amphora.

When you’re in college you tend to neglect things like regular doctor and dentist checkups. After joining the working world, I had to start taking better care of myself and soon found a local dentist. He went after the layer of tar built up on the inner surfaces of my teeth—about as thick as a good asphalt road—with scrapers and drills, grunting and muttering curses all the while. Him: “Did that hurt?” Me: “Well, come to mention it …” Him: “Good.” But after years of buildup, I walked out of his office with suddenly clean teeth. I thought I would have another four to six years of happy smoking before I got another workout like that. But the next visit was the same ordeal, because the tobacco tar is sticky and binds well with enamel.

After that, I decided that I would surprise my dentist by quitting for good and keeping these newly clean teeth bright and sparkly at my next visit. This was in the days before nicotine gum and patches, smoking cessation clinics, and other psychological and medical help. To quit, you had to go cold turkey. Luckily—and I say this advisedly—I had just taken up a new job as technical editor at an engineering and construction company. My task was to edit the writing of project engineers for grammar, spelling, and consistency and then oversee document production through typing, proofing, printing, binding, and delivery. We were slamming out one or two major projects per editor per week, with documents that represented millions of dollars in engineering proposals and technical reports and that had to be delivered on time, to the day and hour, halfway across the world, or else they became so much wastepaper. The job was so stressful that my resolve to quit smoking was tested again and again. And yet I managed to stay off the pipe.3

When I went back to my dentist six months later, my teeth were still clean—well, except for coffee stains, but he didn’t complain about them, because they were brittle and easier to clean. I’ve managed never to take up the habit again, although for about fifteen years after quitting, I would have what I call “smoking dreams.” In the dream, no matter what else was happening, I had gone back to smoking and regretted it. I would not know I was really awake until I could self-check and remember that, yes, I did quit and, no, I haven’t started again. The nicotine addiction takes that much of a hold on your subconscious mind.

My drinking also started in college, although later, when I turned twenty-one and it was legal for me to go into a bar and order a beer. Unlike many of my classmates, it never occurred to me to get a fake driver’s license and try to drink before the legal age, and I was never in the fraternity crowd, where the keg was open to the whole house. Since my parents were also regular drinkers, taking one or two dry martinis—very dry, with zero-percent vermouth content—each evening, I knew about alcohol. When I was a child, my father would let me eat the gin-soaked lemon peel or pimento-stuffed olive out of his martini. And as a later teen, I had made a few forays into their cupboard to try the sweeter liqueurs. But I wasn’t then a regular drinker.

In college as a junior and senior I would go to the bars on Saturday nights with my friends and drink beer. After college, I kept up with the beer but also tried gin and vodka, and after a couple of years I settled on red wine as my favorite tipple. I would drink beer only with the foods that didn’t go well with wine. My consumption stabilized at about a bottle of red wine a night, and part of this was habit and part economy. An opened bottle really wasn’t viable and wouldn’t keep for more than about twenty-four hours in the refrigerator. So it just became easier to drink it all. But some weeks at work were harder than others, and by Thursday night I might be drinking half of a jug of “tank car red” and feeling the effects the next morning. Occasionally, as a special treat, I might down most of a bottle of Irish whiskey and then fight the next day to maintain my equilibrium.4

But I never graduated to drinking in the morning to “cure” the hangover. And I generally did not drink at lunchtime. I waited until I got home in the evening and had nothing more planned before I started drinking … unless I had a couple of glasses of wine or beer when we went out for dinner. Still, aside from the mammoth effort it took—pure cussedness and sheer will power—for me to get up and function the next morning, I was starting to get that generally sick feeling. My body knew I was again overindulging in something that was not good for me.

And again, it was medical service that cued my change of heart. I had just signed up with a new doctor, and part of his medical questionnaire for new patients asked how many drinks I took per week—with one shot of liquor, one five-ounce glass of wine, or one twelve-ounce beer counting as a single drink. That was crafty, because it made me count up my consumption in a different way. I came out with twenty-eight drinks a week. His first question during the physical exam was, “How long have you had this drinking problem?” I started to say, “I don’t consider it a problem”—and realized that is what every alcoholic says.

Yes, as before with the smoking, I had made efforts to cut down or quit. But cutting down never really happened, due to that sense of frugality about wasted wine. And although I could quit for a few days at a time, something always happened to make me start again. I once went all of six months without drinking, during which I finished my first publishable novel and sent it to an agent. But on the night she told me she had sold it, I celebrated with a glass or two of red wine. Within a week I was back to drinking a bottle a night.

Once again, I walked out of the new doctor’s office with a vow that I would show him by never drinking again. And luckily, at my corporate communicator’s job, I had just taken on a massive project, to consolidate the company’s weekly employee newsletter with its monthly magazine for both employees and retirees, and negotiate all the changes to scheduling and officer reviews in the process. I managed to finish that project without coming home at night and sliding into a bottle of red wine. And I have been sober for the past thirty-four years.5

What have these two experiences of quitting two addictions—quitting finally and for good, not just cutting down—taught me that I can pass along?

First, the mind is a monkey. And second, the body wants its candy.

During the early months when I was consciously not drinking, I found my subconscious mind—the voice that speaks out of the darkness—suggesting things like, “It’s been a hard day, you’ll feel better with a drink.” Or, “You did really good work today, you deserve a drink.” And finally, “You’ve gone a whole week without drinking, why don’t you take a drink to celebrate?” If you pay attention to the ideas that pop into your head and track them back into your subconscious, as I was trained to do as a writer, you can start linking these thoughts together. After a while, the absurdity got to me. Whatever the situation, however I felt, the formulation was the same: you need to drink. Ah ha! The body wants its candy—the drink, the smoke—and the mind is the willing, deceitful manipulator that will try to trick you into taking that first sip or puff.

So I adopted a simple rule, simple but iron clad and unbreakable: “Don’t put it in your mouth. If you find it in your mouth, spit it out.” I took up drinking diet soda instead of beer or wine at night, and now I drink flavored mineral water for health reasons. But some things just go better with beer and wine, so I would allow myself a “non-alcoholic” beer or wine (no more than 0.5% alcohol content) on those special occasions. I drank sparkling apple cider when available—and once at a party, when I picked up a glass of champagne and took a sip before smelling it, I did spit it out, put the glass down, and went back for the cider. I also agonized about foods, especially desserts, that might have uncooked alcohol in them, but I soon reasoned that the amounts were small—about that 0.5%—and this was technically “eating” and not “drinking.”

In each case, I quit on the cold-turkey method: simply stopping and making a rule for myself not to start again. Not everyone can do this. And I have no problem with people who need Alcoholics Anonymous and related organizations, with their Twelve Steps and Twelve Traditions and their wonderful community of peer support. But that regimen—especially the relinquishing of self before God, even when masked as a “Higher Power”—was not for me. My higher power was my mother’s voice, echoing in my head, when I imagined her regarding her drunken, tobacco-besotted son and saying, “You’re better than that!”

But the thing you learn in life is that everything is messy, and sometimes you just have to go with whatever works.

1. This was before my brother developed asthma, and being trapped in the car with two smoking parents was torture for him. So he never took up the practice.

2. I soon favored a pipe made of ceramic and compressed carbon, which did not build up a layer of char in the bowl. It was easily cleaned with rubbing alcohol.

3. It was a good thing I quit smoking, too, because soon afterward I met the woman who would become my wife. She was a dedicated non-smoker who told me, “Kissing a smoker is like licking a dirty ashtray.” Point taken.

4. A college roommate had advised me to always, before going to bed, drink a big glass of water with two aspirin and two high-dose vitamin C tablets. This regimen generally kept me viable the next morning.

5. Interestingly, I never had “drinking dreams” after quitting. That may tell you something about the relative strength of these two addictions.

Sunday, April 21, 2019

A Definition of Decadence

Roman decadence

What is your definition of “decadence”? In the popular imagination, I would guess, it’s something like ancient Rome under the Caesars or the Ancien Régime in France before the Revolution of 1789. That is, rich people lying around, having orgies, eating honey cakes and larks’ tongues, and still drunk at midmorning from binging the night before. That is, everybody who counts in society—especially the nomenklatura—allowing themselves free rein to be lustful, slothful, and generally good for nothing.

Human beings, both as individuals and in societies, have a hard time with satiety, with being fed to the point of mere capacity. We have difficulty with the concept of “enough.” This goes back to a hundred thousand years or more of our hunter-gatherer heritage. When you live by picking up whatever you can find in the bush or kill on foot with a spear and a sharp stone, life is either feast or famine. Berries and tender shoots are not always in season. Game migrates out of the area or goes into hibernation. So when you make a big killing or stumble onto an acre-wide bramble laden with ripe fruits, you chow down. You eat to the point of bursting, rest a while, and start over again. You put on fat because the lean times, the hunger times, are just around the corner.

Self-regulation is a learned art, a form of self-discipline, as anyone who has tried to follow a diet knows. It takes will power to stop when you are full—sometimes to even hear the “I’m full now” message that your stomach and your endocrine system may be sending.

We see this in America today: too many fat people waddling around, suffering the slow death of obesity and diabetes, or dying quickly with heart attacks and intestinal cancers. The explanation you hear about—too much corn syrup in our processed foods, too much sugar in our soft drinks—may be right, but it’s also simplistic. That explanation smacks, too, of a conspiracy theory in which the food and beverage industries are systematically trying to kill us for profit.

I put a different interpretation on this problem. In the last sixty or seventy years, as the richest nation to come out of World War II, we Americans have undergone an economic and cultural change. When I was growing up, fresh fruits still tended to be seasonal. You bought apples, berries, and oranges at certain times of the year, and for the rest you ate apple sauce, spread jam on your toast, and drank processed orange juice. My grandmother still put up preserves in the summer and fall, and she had basement shelves full of Mason jars filled with green beans, corn, and other products of her garden. Del Monte and other companies also packed the grocery stores with the products of orchard and field that had been cooked, canned, and sealed.

Today, we still eat industrially processed foods but we also have gardens and orchards that extend around the world. We get apples from Washington State in the fall and from Chile in the winter and spring. You can get fresh pineapple, avocados, and other delicacies the year round. Thanks to refrigeration and mechanized delivery, you can eat Maine lobster anywhere in the United States, and you can get good sushi—which depends on absolute freshness—in the center of the continent in Kansas City. These are the benefits of being a capitalist empire in the middle of a world willing to deliver its harvest to your door.

Something else that has changed is our culture during the last sixty years or so. When I was growing up, we mostly ate dinner at home. And while we ate well, the food was mostly sustenance: pork chops, chicken livers, franks and beans, stews and casseroles, with the occasional steak and French fries. We might go out to a restaurant once or twice a month, usually in connection with a birthday or other family celebration. Good restaurants were still few and far between in most towns, or an hour distant in the nearby city, and fast-food franchises like McDonald’s were just getting under way and still a novelty.

We ate sensibly because my mother understood nutrition and had to keep a budget. But with the erosion of the nuclear family and people no longer sitting down together to a home-cooked meal six or seven nights a week, our culture has changed. Every suburb and small town has a dozen competing franchises where you can get delicious, rich and tasty, fatty, salty, party-style food in ten minutes or less. You can have ice cream and pastries with every meal. You can chow down every day of your life.

I don’t mean to say this is necessarily a bad thing. Our system of exchange dictates that if people want to eat party foods exclusively, where once they had to wait for a holiday to indulge, then someone, somewhere will figure out how to make a fortune giving them exactly what they want. I am not such a churl as to blame the caterer and the franchise operator for supplying the feast, even when people’s endocrine systems should be telling them, “Whoa, stop, you’ll be able to get more of this tomorrow,” and their stomachs are telling them, “Hey, mouth, I’m groaning here!”

We live in a country where everyone is rich, compared to historical standards, and temptation is all around us. And what I wrote above for food applies equally to liquor, drugs, entertainments, recreational opportunities, personal freedom, and the leisure time to enjoy all of them. Our tribe is sitting in the middle of the world’s berry patch, with plump partridges just walking up and begging to be eaten, while the finest minstrels play their lyres and sing pleasing songs in our ears. And I am not such a churl as to suggest that this is a bad thing. But I don’t wonder that we’re all growing fat, a bit lazy, and just a tad careless.

But that’s only one definition of decadence. Another kind is when things are going so well in society that people—especially in the upper and supposedly knowledgeable and sophisticated classes, the nomenklatura—can afford to break the rules, flout social conventions, and disparage the principles on which their society was founded. Along with eating too many rich, party-style foods, too many of our best people seem to assume that the rules don’t apply to them. They act as if their position of privilege in the world, or at least in their part of it, implies an opportunity to scoff at the rules the rest of us follow and the institutions the rest of us respect.

I saw the start of this in the late 1960s at the university, where bright children from good homes, who were admittedly allowed out into the wider world for the first time, became users of banned substances—mostly marijuana—and acquired false identities—usually a driver’s license—so that they could drink below the statutory age. Yes, these were minor infractions to most people, but these young adults were knowingly breaking the law. And when a larger law with more life-threatening consequences, the Selective Service Act, dictated that the government could forcibly recruit them to fight and perhaps die in a war they didn’t agree with, many of them assumed false lifestyles or fled the country to avoid what most other people considered a sacred duty. The sense that they were bound by the conventions and laws—however justified—of the society that gave them special privileges just was not there.

You can see the same sense of lawlessness today in politicians, sports figures, and celebrities who appear to think their place in society allows them to mock what the rest of us may think important and hold dear. They believe the rules the rest of us follow are not made for them.1 Some of our most prominent politicians have recently characterized the part of the population in the “flyover states,” those less sophisticated and progressive in their views, as “deplorables” and “bitter clingers to guns and religion.” This is to disrespect people who otherwise continue in their decent, law-abiding lives.

When your life is easy, when rewards and riches are all around and available for the taking, when the winds of change are blowing in your direction, it becomes easy to imagine that you are special, that the rules don’t apply to you, that the institutions and social conventions that put you in your current position really aren’t so important. And if you were not fortunate enough to have a mother and father who told you, as most of us did, that you aren’t special or more deserving than anyone else, that you have to wait your turn, that you must mind your p’s and q’s—then you might forget that most important aspect of a democratically run republic. And you might be lulled into forgetting that reality, if not karma, has a way of snapping back hard.

This, too, is a kind of decadence, born of being too rich for too long in a normal world. And this kind is worse for the soul—and the nation—than anything having to do with sex, drugs, and rich foods.

1. Without drawing my readers into a political fight, I remember finishing James B. Stewart’s Blood Sport: The President and His Adversaries, about the political harassment that President Clinton and his wife were experiencing in the mid-1990s. What struck me at the time of reading was the catalog of activities that the family was charged with—profiting from delayed orders on cattle futures, attempting to buy a country bank so that they could make themselves favorable real-estate loans, claiming a full tax deduction on losses they shared equally with a partner—and how Stewart dismissed these actions as trivial. But they were indeed infractions of established law. If other, more normal people had done these things, they would have risked prosecution. What made it worse, to me, was that both of the Clintons were trained and admitted to the bar as attorneys, yet nobody in their circle lifted their heads to say, “Gee, you know, this is against the law.”

Sunday, April 14, 2019

AI and Emotion

Robot head

In the extended Star Trek series, Mr. Spock and the other Vulcans portray a rigid adherence to pure logic1 and a rejection or active repression of their humanoid emotions. This sort of character presents an attractive gravitas: sober, thoughtful, consistent, dependable, undemanding, and loyal. It would seem that, if human beings could just be cleansed of those fragile, distracting, interfering emotions, they would be made more focused, more intelligent, and … superior.

Certainly, that is one of the attractions of the computer age. If you write a program, test and debug it properly, and release it into the world, it will usually function flawlessly and as designed—apart from misapplication and faulty operation by those same clumsy humans. Known inputs will yield known outputs. Conditional situations (if/then/else) will always be handled consistently. And, in the better programs, if unexpected inputs or conditions are encountered, the software will return an error or default result, rather than venturing off into imagined possibilities. Computers are reliable. Sometimes they are balky and frustrating, because of those unknown inputs and aberrant conditions, but they are always consistent.
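As a minimal sketch of that consistency—my own illustration in Python, not taken from any particular program—here is a routine that maps known inputs to fixed outputs, handles its one conditional the same way every time, and returns a default error value for anything unexpected rather than venturing a guess:

    # Illustrative only: a small deterministic handler in the spirit described above.
    # Known inputs always produce the same outputs; anything unexpected gets an error.

    def classify_reading(value):
        """Return a fixed label for a known input range, or an error marker otherwise."""
        if not isinstance(value, (int, float)):
            return "ERROR: non-numeric input"    # unexpected input -> default result
        if value < 0:
            return "ERROR: out of range"         # unexpected condition -> default result
        elif value < 100:                        # the if/then/else is handled consistently
            return "normal"
        else:
            return "high"

    # The same input yields the same output, every single run.
    assert classify_reading(42) == "normal"
    assert classify_reading(250) == "high"
    assert classify_reading("oops").startswith("ERROR")

Run it a thousand times and it will answer the same way a thousand times; that reliability is exactly the machine virtue being described here.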

Our computer science is now entering the phase of creating and applying “artificial intelligence.” Probably the most recognizable of these attempts—in the real world, rather than in the realm of science fiction—is IBM’s Watson computer. This machine was designed to play the television game show Jeopardy. For this effort, its database was filled with facts about and references to history, popular culture, music, science, current events, geography, foreign languages—all the subjects that might appear on the game board. It was also programmed with language skills like rhymes, alliterations, sentence structure, and the requisite grammatical judo of putting its answer in the form of a question. Although I don’t know the architecture of Watson’s programming myself, I would imagine that it also needed a bit of randomness, the leeway to run a random-number generator now and then—effectively rolling the dice—to make a connection between clue and answer based on something other than solid, straight-line reference: it occasionally had to guess. And it won.
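Since I don’t know Watson’s actual architecture, the following is only a toy sketch in Python of the general idea I am imagining: rank candidate answers by a confidence score, answer outright when one candidate clearly leads, and roll the dice among near-ties when none does—in effect, an occasional guess. The function, the scores, and the threshold are all invented for illustration.

    import random

    # Toy sketch only -- not Watson's real design. Candidates arrive with confidence
    # scores from some upstream search; we take a clear winner outright, or guess
    # at random among near-ties when no candidate is clearly best.

    def choose_answer(candidates, margin=0.05):
        """candidates: list of (answer, confidence) pairs, confidence in [0, 1]."""
        ranked = sorted(candidates, key=lambda pair: pair[1], reverse=True)
        best_answer, best_score = ranked[0]
        near_ties = [ans for ans, score in ranked if best_score - score <= margin]
        if len(near_ties) == 1:
            return best_answer            # confident, straight-line answer
        return random.choice(near_ties)   # rolling the dice -- a guess

    # Two candidates are nearly tied, so the result may differ from run to run.
    print(choose_answer([("What is Paris?", 0.61),
                         ("What is Lyon?", 0.59),
                         ("What is Nice?", 0.20)]))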

IBM is now using a similar computer architecture with Watson Analytics to examine complex accounting and operational data, identify patterns, make observations, and propose solutions to business users. Rather than having a human programmer write a dedicated piece of software that identifies anticipated conditions or anomalies in a specified data field, this is like having a person with a huge memory, fast comprehension, and no personal life at all look at the data and make insights. Such “expert systems” from other vendors are already analyzing patient x-rays and sieving patient symptoms and biometrics from physical and laboratory testing against a database of diseases to identify a diagnosis and recommend a course of treatment.

And for all these applications, you want an emotionless brain box that sticks to the facts and only rolls the random numbers for an intuitive leap under controlled conditions. When you’re examining a business’s books or a patient’s blood work, you generally want a tireless Mr. Spock rather than a volatile Dr. McCoy.

But the other side of artificial intelligence is the holy grail of science fiction: a computer program or network architecture that approximates the human brain and gives human-seeming responses. This isn’t an analytical tool crammed with historical or medical facts to be applied to a single domain of analysis. It’s the creation of a life form that can resemble, emulate, and perhaps actually be a person.2

IBM’s Watson has no programmed sense of self. This is because it never has to interface directly, intelligently, or empathically with another human being, just objectify and sift data. Emotions—other than the intuitive leaps of that random-number generator—would only get in the way of its assignments. And this is a good thing, because Watson is never going to wake up one day, read a negative headline about the company whose operations it’s analyzing, and decide to skew the data to crash the company’s stock. Watson has no self-awareness—and no self-interest to dabble in the stock market—to think about such things. Similarly, a Department of Defense program based on chess playing skills and designed to analyze strategic scenarios and game out a series of responses—“Skynet,” if you will—is not going to suddenly wake up, understand that human beings themselves are the ultimate threat, and “decide our fate in a microsecond.” All of that retributive judgment would require the program to have a sense of self apart from its analysis. It would need awareness of itself as a separate entity—an “I, Watson” or “I, Skynet”—that has goals, intentions, and interests other than the passive processing of data.

But a human-emulating intelligence designed to perform as a companion, caregiver, interpreter, diplomat, or some other human analog would be required to recognize, interpret, and demonstrate emotions. And this is not a case where a program relying on a database of recorded responses to hypothetical abstractions labeled as “love,” “hate,” or “fear” could then fake a response. Real humans can sniff out that kind of emotional fraud in a minute.3 The program would need to be self-aware in order to place its own interactions, interpretations, and responses in the context of another self-aware mind. To credibly think like a human being, it would need to emulate a complete human being.

In this condition, emotions are not an adjunct to intelligent self-awareness, nor are they a hindrance to clear functioning. Emotions are essential to human-scale intelligence. They are the result of putting the sense of self into real, perceived, or imagined situations and experiencing a response such as fear, anxiety, confusion, attraction and love, or repulsion and hate. In the human mind, which is always considering itself in relation to its environment, that response is natural and automatic. If the mind is defending or protecting the sense of identity or personal security, a fear or anxiety response is natural to situations that imply risk or danger. If the mind is engaging the social impulse toward companionship, community, or procreation, a love or hate response is natural to situations that offer personal association.

Emotions are not just a human response, either. Even animals have emotions. But, just as their intelligence is not as sophisticated as that of human beings, and their sense of self is more limited, so their emotions are more primitive and labile. My dog—who does not have complete self-awareness, or at least not enough to recognize her own image in a mirror, which she mistakes for another dog—still feels, or at least demonstrates, joy at the word “walk,” contentment and even love when she’s being stroked, confusion when my tone of voice implies some bad action on her part, and shame when she knows and remembers what that action was. She also puts her tail between her legs and runs off, demonstrating if not actually feeling fear, when I put my hand in the drawer where I keep the toenail clippers.4

Emotions, either as immediate responses to perceived threats and opportunities, or enduring responses to known and long-term situations, are a survival mechanism. In the moment, they let the human or animal brain react quickly to situations where a patient course of gathering visual, audible, or scent cues and thoroughly interpreting or analyzing their possible meaning would be too slow for an appropriate response. In the longer term, emotional associations provide a background of residual reinforcement about decisions we once made and reactions we once had—things we would benefit from remembering in the moment: “Yes, I love and am allied with this person.” “No, I hate and distrust this person.” “Oh, this place has always been bad for me.” Emotions bring immediately to the forefront of our awareness the things we need to understand and remember. As such, emotions are part of our genetic evolution applied to the structure and functioning of our animal brains.

Any self-aware artificial intelligence—as opposed to the mute data analyzers—will incorporate a similar kind of analytical short cut and associational recall. Without these responses, it would be crippled in the rapid back and forth of human interaction, no matter how fast its analytical capabilities might be.

And yes, the Vulcans of Star Trek were subject to the deepest of human emotions. Or else how could they have called anyone a friend or been loyal to anything at all—even to themselves?

1. And to science, as if the one demanded the other. While our current approach to science is an expression of logic and reasoning, any scientist will tell you there are also leaps of imagination and intuition. And as Lewis Carroll demonstrated, logic and its exercises can also be adapted to fantasy and whimsy.

2. I wrote stories about this, although in more compact form based on a fantasy version of LISP software, with the novels ME: A Novel of Self-Discovery and ME, Too: Loose in the Network.

3. Consider how we respond to people who lack “emotional intelligence,” such as those with certain types of autism or a sociopathic personality. No matter how clever they are, a normal person after a certain amount of interaction will know something is amiss.

4. And this reaction is also highly situational. When I go to that drawer each morning for her hair brush and toothpaste in our daily grooming ritual, or each evening for my coffee filters after pouring water into the coffee maker (yeah, same drawer, long story), she has no bad reaction. But let me touch that drawer in the early evening, when I generally cut her toenails every two or three months, and accidentally rattle the metal clippers—and she’s gone.

Sunday, April 7, 2019

Belief vs. Knowledge

Total honesty

All of us who identify as human have large and complex brains. We are capable—or most of us, anyway1—of holding different and sometimes conflicting thoughts on the same subject. This is because we live on many mental levels.

Our daily experience is structured around a large, capacious, and persistent memory and a system for its recall. We can summon—accurately, we believe2—past events, as well as the experiences and emotions surrounding these events. We can also draw inferences and rules or imagined truths from these pieces of our personal history. Add to this set of “real” memories the “shadow” experiences, different from but concatenated with our real-life experiences, associated with everything that we read, see in movies and plays, or are told by our parents, relatives, friends, and the people we trust. It all goes into the retentive sponge that is our memory.

We also live a good part of our lives in the future. We have an active life in the portion of our brain called the prefrontal cortex. This is the area that controls “executive functions” like decision-making, planning, anticipation, and—because they are usually associated with consequences—personal and social behaviors. Using the prefrontal cortex, we consider current events and map them forward into an imagined future, enabling us to make decisions and plan our future actions. But that executive function also opens up a Pandora’s box of wishes, dreams, and fantasies that can affect our daily lives and intended actions.

What our brain is actually “thinking” at any one time depends on what we bring forth from this stew of past, present, and future beliefs, knowledge, and imagination, either by active, intentional, conscious, thoughtful focus and recall, or by the random firing of related neural circuitry that we associate with the “subconscious mind.”

And so we all live multiple lives that generally can be resolved into what we believe versus what we know to be fact. And most of us are better at adhering to one or the other, depending on the situation. From this point on, I’m going to be treading on some metaphysical toes. If you are easily upset or angered, please stop reading. Anyway, this—like all of my blog postings—is just a thought experiment.

We all—or most of us—tend to believe that we have within ourselves some unending part: a soul, a spark of life, an enduring energy that will continue after our personal death. It will not just continue as a metaphysical force, like a raw radio or light wave, but carry with it our consciousness, our memories, our emotions … everything that makes us a real person except for our physical strength, sensation, and bodily needs. This part will endure for eternity in some place or dimension, and usually there we will meet our parents, family, ancestors, pets, and lost loves. The absurdity of continuing forever in a place that is not-living, not-growing, not evolving—a kind of limbo, however pleasant the circumstances and the company may be—is lost on those of us who so believe. And the idea of meeting not just parents and grandparents but g’g’g’great-to-near-infinity-grandparents whom we never met, going back to the great apes and little fishes of our genetic ancestry, is an aspect we never consider. Still, all of this belief comes to us from the religion we practice, the stories we’ve read, and the insistent looking-forwardness of that prefrontal cortex.

And yet we also know—or most of us—that death has an undeniable finality and stillness to it. Many of us have encountered isolated deaths, either that of a pet or family member as a child or among acquaintances in our extended community as an adult, if not in worse and more memorable circumstances like war and environmental catastrophes. Much as we would like to believe that something eternal is preserved from that ended life, we know on an intellectual level that the dead are not going anywhere and not coming back. Yes, there are stories, plays, and movies about consciousness existing and love enduring beyond the grave. But unless we are so crazed with grief that we try to conjure the dead with the aid of a charlatan, we know that these are just stories. We know that everything comes to an end: plants, animals, people, cities, empires, planets, and stars. The universe is old beyond comprehension and everything in it exists in an impermanent state of flux. So why should our personal selves be any different?

In our current politics, literature, and media environment, we are now bathed in stories of apocalypse, of the end times, of the collapse of civilization, of the destruction of the world. My generation has been living through prophesied doomsdays since we practiced duck-and-cover for nuclear war in grade school. Then it was overpopulation and Malthusian starvation, next Y2K and the collapse of the economy, and finally global warming and rising sea levels. Apocalypse has its attractions: you no longer have to pay rent, get up and go to work, or put up with the daily frustrations of living in a crowded society. It will be every man or woman for him- or herself, and the rules about just killing anyone who annoys you will be automatically rescinded.

Another current political belief is the notion that human nature is somehow defective and that, if we could only change people for the better—make them nicer, kinder, more giving, more reliable, less selfish—then we can achieve utopia here on Earth. It has been tried by several societies, of course, most recently in Venezuela. The utopian ideal is another form of end-times thinking: the end of struggle; the end of nations at war; the end of hunger, poverty, and fear; and the end of history as we know it. Once we achieve this perfect state for humankind, nothing will ever change again.

And yet both of these states—apocalypse and utopia—are fantasies. Yes, catastrophes happen: hurricanes and earthquakes destroy whole towns at a stroke; war and invasion wipe entire civilizations and cultures off the map; and war itself is a long and terrible experience. Yes, healthy and happy societies are occasionally formed and live through a golden age, where almost everyone has something interesting and fulfilling to do in their life, gets enough to eat, and lives in relative peace. But neither state is the end of times or the end of history, and all of them finish up and are replaced by something else. And usually, no one notices or can pinpoint the end of either condition. The Roman Empire took a couple of centuries to fall, and for some people in some places—think of Constantinople—it endured for a thousand years after the sack of Rome in the Western Empire. So while we may indulge fantasies about end times, we all—or most of us—know that history is a process of slow change, that no state or civilization endures without constant revision and reevaluation, both upward and downward, and that most people are now on the upward curve of both spiritual and technological human progress—as we have been since the founding of Sumer in Mesopotamia some seven thousand years ago.

In our current politics and morality, many people—if not most—believe that humanity is divided by race, ethnic affiliation, political or religious views, or some other distinction between ourselves and a presumed “other.” And we can entertain notions that those others, even if they share 99.99% genetic identity with us, are somehow different and less than human. That they don’t have the same human drives, love their children, possess a sense of purpose and dignity, want to earn their living and come home at night, and want their football team to succeed just as much as we do.

At the same time, many people—if not most—believe that all human beings should be equal. This is not just about being treated equally before the law or receiving equal opportunities for education and personal and commercial success. This is the belief that there is not much innate difference among human beings in all groups and conditions, except for those unfortunates with a developmental disability or some form of physical limitation. So therefore any differences in living standards and personal outcomes between individuals and parts of society must be due to that previous belief in racial or ethnic difference and to overt discrimination, or else due to some structural unfairness in society.

And yet anyone with experience in the world knows that no two human beings are the same. Everyone is born with a unique and personal complement of traits, talents, intellectual and emotional strengths and weaknesses, family background and history, genetic inheritance and innate health, and that undefinable element we call “luck.” A fair society can try to compensate for some of the worst and most obvious deficiencies in any of these areas. But nothing can make all of these varied human beings equal in terms of their health, longevity, success, and happiness.

Finally, in our sense of the universe, we all—or most of us—like to believe that our world, our lives, and our fates are ruled by some unseen yet benevolent hand that establishes our current circumstances, foresees all outcomes, and ensures that things will turn out for the better, that right and love will triumph in the end, and that the world and each life in it—or at least my life in it, because I am special—has a definite purpose. Again, this is a residue or distillation of the religion we were taught and the stories we’ve heard. It is also the product of a twitch in the prefrontal cortex that engenders hope.

But we also know from history and from personal experience—unless we deceive ourselves with selective memory—that bad things happen about as often as good, that sometimes innocent people die without reason, and that the finger of evolution is a wandering one that makes ravening wolves as well as gentle deer, and sometimes it also creates a platypus. Life on this planet doesn’t come into being and function because it has a purpose. Life, the union of egg and sperm and all that comes afterward, is the purpose. Species develop in relation to environmental niches for which their genetics have haphazardly adapted them. They exist only for so long as they can, and then they die out. Humans, with their big brains and clever hands, have learned to adapt their technology and culture to many different environments; perhaps one day they will learn to adapt the environment itself, both here and on other planets, to their needs; and eventually they may even adapt their own genetics to environments yet unimagined. Or human beings, too, may die out. And for each of us, if there is a purpose to living, we must find it for ourselves.

We all live on many levels of mental activity; of intellectual curiosity, honesty, and dishonesty; of desire, fear, and hope; and of belief and fantasy. All those levels sometimes override both previous knowledge and common sense. And that complex internal life is also part of the human condition.

1. Here I will allow for different kinds of human mentation, due perhaps to disease, accident, or developmental damage. I cannot know, for example, that a person with severe autism or one whose frontal lobes are destroyed by a stroke engages in the kind of mental activity described here, or whether such a brain experiences reality directly without the filters of belief and knowledge.

2. However, some recent studies suggest we are all susceptible to the phenomenon of “false memory” (see for example “False Memories and How They Form” by Kendra Cherry from 2018). It also seems that a memory is not just recorded once, when the event was experienced, but is re-experienced, shaped, and edited every time it is recalled. This tends to create a “collage” of perception around the memory rather than a fixed and indelible image.

Sunday, March 31, 2019

Personal Selfishness

Mangrove path

I have always been a selfish person. It’s not as if I don’t care for others, extend a hand when it is needed, or give to charity. But my first goal in life has been to protect and defend my own person, my integrity, my … destiny, if you will, and my family. I have always resented the suggestion—sometimes spoken, usually implied—that I did not and should not live for myself but for others. The implication was that if I were not selfless in spirit and did not pursue the goals of others or of some generalized sense of society, I was not a worthwhile human being.

But I have not exactly lived for my own good times and catered to my lusts and pleasures. Early on, back in high school, I conceived of my life’s purpose as being a writer—that “destiny,” if you will. I was going to capture something mysterious but important, wring it out of random thoughts and thin air, and put it down on paper for all the world to read and understand. What I intended to capture was not going to be some secret of life or instruction for the illuminati, but instead a particular view of humanity, of human possibility, and of personal fulfillment.

To achieve this, I realized that I would have to give up part of my brain to the development and pursuit of stories. I was going to be the prism through which these thoughts and stories entered the world. And novel writing is a full-time activity that extends well beyond the time a person is actually sitting down and marking a piece of paper or configuring pixels on a computer screen. Plots come out in bits and pieces, and good ideas awaken you in the middle of the night. Characters offer suggestions about what they will and will not do, and they try out bits of dialogue while you are soaping in the shower. As I’ve written elsewhere, the writing life is like renting out your head to a traveling theater company and being constantly nagged by the in-house playwright and the actors about the show in production.1 The only selfish part in all of this is that I have pursued my own visions, my own stories, instead of consciously adopting the purposes and narratives of others or working for some generalized benefit to society.

As I politically matured, sometime in college, I began to see this “selfishness” as a key element in the differences between the Left and the Right. A person’s focus of interest, striving, and goals can be placed along a spectrum: from individual self to family, clan, and tribe—or village, neighborhood, city—or guild, profession, class—and then on to state, region, nation—or party and government. From the closer, more personal, and more tangible, to the wider, more social, and more abstract.

The Right would place a person’s natural emphasis on the closer end of the spectrum: self, family, personal beliefs, individual preferences. This is actually a state of diversity within society. “I like chocolate ice cream.” “I like vanilla.” “I want to work as a plumber.” “I’m going to be a lawyer.” “I choose to drive a Mercedes.” “I like Chevrolets.” “I believe in the Christian god.” “I am an atheist.” And a society, economy, and government fashioned according to the ideals of the Right would allow all of this. So long as the individual does not install faulty plumbing fixtures, cheat clients out of inheritances, rob a bank to buy that Mercedes, or burn crosses in anyone’s yard—go for it. The Right has generally been about individual freedoms, self-determination, and going your own way.

The Left would place a person’s natural emphasis on the wider end of that spectrum: class, state, nation, and lately the globe itself as some simulacrum of all humanity. This is actually a state of uniformity for individuals. “I like whatever foods can be most sustainably produced.” “I will work wherever the government requires my service.” “I will ride the bus and take the train if it relieves pressure on the environment.” “I believe in the principles of my party.” And the society, the economy, and the government would be fashioned to make the best use of this willingness to conform, according to the inspirations of the philosophers, social scientists, and technical experts identified and promoted by the party. (I almost wrote “the state” there, but in the Left’s current paradigm, going back to Mao and Lenin if not to Marx himself, the state is the supreme expression of the party—and not the other way around.) The Left has always been about obedience, subservience, and getting in line.

Is all this a bit too extreme? Perhaps a few decades or even a dozen years ago I would have admitted as much. But in the last couple of years the Left has come out of its populist shell. The Democratic Party has veered into Democratic Socialism, if not into traditional Socialism and even Marxism. I would like to believe that the party had a cooling-off period and a reappraisal of its pro-Soviet heyday in the 1920s and ’30s after the failure of the Ukrainian harvests, the Moscow show trials, and the Molotov–Ribbentrop Pact. But perhaps the party leaders simply devised a mask fashioned out of unionism, suburban populism, and later environmentalism to gain support for the party in the shadow of the Cold War.

Of course, at points along this spectrum and in certain societies, the focus of individual intentions has sometimes become confused.

During my college days in the late 1960s, a lot of young leftists were “tuning in, turning on, and dropping out”—if not dropping out of college directly, they were dropping out of the vision of social order and the nominal good that their parents espoused. Some of this turning away from society had to do with the Vietnam War. There, young leftists refused to comply with the state’s demand for more soldiers—not for their own personal reasons, of course, or because they were scared, but presumably in service to a greater humanity that surpassed the dictates of the state of that time.

There has also been some confusion about just how free and open a rightist, free-market, capitalist economy can be. Sure, everyone is free to find his or her own career and own way of making a living. But some of those choices—novel writing for one example, or almost any art form—don’t necessarily pay a living wage. And while the individual is free to like any flavor of ice cream or drive any brand of car he or she desires, the object of desire is not always within economic reach. Indeed, it sometimes seems that the producers of ice cream, cars, and everything else are promoting choices as limited and directed as if they had been dictated by a government technical expert. The only difference would be that if a socialist government decided to produce only plum-flavored ice cream as the best, most healthful, and cheapest, then individuals would have no choice but to eat it and forget about chocolate or vanilla. The government could not be wrong in its decision and would not suffer economically. But if a competitor in a capitalist economy decided to promote only plum ice cream and found subscribers willing to fund the venture, they would probably lose their money and go out of business.

The question of emphasis along the spectrum, of self or others, comes down to a basic philosophical view. Does the individual belong to him- or herself or to society as a whole? Does the individual have innate value as an autonomous and conscious being? Or is the individual important only as a cog in a bigger machine, a part of the social whole, a zero stringing out a larger number?

Anyone who knows my writing will know how I answer that question.

1. And so I have always carried a pocket notebook and pen with me—as early as high school—and placed pad and pencil on the nightstand, just to capture these random thoughts. From loose paper, these bits and pieces go into a folder on the hard drive dedicated to the current novel or to one in development. Notes are organized into themes, motifs, characters, plot points, and eventually an outline. And the outline is then translated into a full production draft. The difference between a working writer and everyone else is this system of capture. Everyone gets these random ideas; the writer organizes them toward the work in progress.

Sunday, March 24, 2019

Creation Stories

Starfield

Humanity loves stories. I love stories. From the time of Homer in Greece of the seventh century BC, the first storyteller of the literate Western tradition; from the time of Gilgamesh, King of Uruk, from sometime between 2,800 and 2,500 BC—the epic poem about him was the first known work of literature—poets have been putting their thoughts into story form. Not just chants or prayers or tributes to gods and goddesses, but coherent tales of action and dialogue, of sequence and consequence, of cause and effect. This is what sets us apart from the chatter of monkeys, the mindless songs of whales, and the howling of wolves.

The Bible, Old Testament and New, is a series of stories. The creation of the world, where God—the one true god, the only god, the supreme being—divides the light from the darkness, the water from the land, the animals from the new creation, Man. That beginning is a story that starts from nothing but the mind of God and proceeds to build a world.

And that story as much as anything set the human mind—or at least its Western variant, the Judeo-Christian tradition—up to see the world as a created place. It didn’t just exist for all time, as far as anyone knew. It had a start, a point at which something came out of relative nothing. Other traditions have their creation stories. For example, in the American Northwest, the native cultures believe that Great Raven made the world, the mountains, and the tides.

I suppose that, absent the ingrained need to tell stories, any observant and inquisitive mind would ponder how the world came into being. Such a mind would note that the nature of mountains and hills is to break down—through erosion, landslides, rockfalls—but never to build up. Such a mind as Leonardo da Vinci’s, as recounted by paleontologist Stephen Jay Gould,1 noted that mountaintops sometimes contained the fossils of sea creatures, and he wondered about them. In fact, humanity didn’t have an adequate explanation for the rise of mountains until continental drift was proposed in the early 20th century and confirmed as plate tectonics by the evidence of seafloor spreading in the 1950s and ’60s. But without that theory, mountains can be observed to break down from known causes but must be imagined to rise due to … earthquakes, convulsions, or the Hand of God.

Back when the Bible writers and Renaissance polymaths were wondering about such things, the world was just this one planet. The Earth was the central place of the universe, while everything else—Sun, Moon, other planets, and the stars in the night sky—was just a set of ornaments provided to heat this world, keep track of the months (and cause madness), and aid human beings with navigation. Otherwise, Earth was the place that mattered.

It wasn’t until that same early 20th century that humanity knew there were stars beyond the stars we could see. The “galaxy” was the Milky Way, the concentration of stars like a river in the night sky. But some of those “stars” were distinct points of light, while others were fuzzy patches that astronomers called “nebulae,” or clouds. With better telescopes, they could see that some of these cloudy objects appeared to be actual backlit clouds of dust and gas, the remnants of exploded stars. But others remained just fuzzy patches. It wasn’t until 1924 that Edwin Hubble announced that the patch everyone called “Andromeda”—after the mythological daughter of Aethiopian King Cepheus and his wife Cassiopeia—was actually another galaxy, like our own Milky Way, but far off.

Just as Nicolaus Copernicus, with his new model for the motions of the Sun and Earth, turned “the world” into a solar system with other planets no less important than our own, so in one step Hubble expanded “the universe” from an island of local stars into a vastness of galaxies, hundreds of them, perhaps thousands, no less important than the stars we can see around us. It would take much larger telescopes, including the orbiting scope named after Hubble himself, to see that there are actually billions, if not a couple of trillion, galaxies arrayed in clusters and webs that extend beyond the reach of even our most powerful telescopes.

Hubble also noted that the light from these distant objects was “redshifted,” or appeared farther down the spectrum and at lower energy levels than the light from stars in our own galaxy. This suggested to him and to other astronomers that those other galaxies were receding from us, and so the universe itself must be expanding.

By modeling the life cycle of stars based on size and temperature, calculating the age of the oldest stars according to this cycle, and figuring out how stars create the various elements by fusion—from helium out of hydrogen; then the lighter elements and metals through iron; and finally the heaviest extant elements like gold, lead, and uranium from the collapsing pressure of supernovas—astrophysicists could determine the various generations of stars needed to make up the universe we can observe. They came up with a probable age for the universe of about 13 billion years.

If the universe is expanding, it is reasonable to assume that it has always been doing so. And then, if you “roll back” that observed expansion by 13 billion years, you come to a point in primordial space. All the stars we can see, and the dust and gas we can’t directly see, everything in the universe collapses down to a point that’s infinitely small. And because it’s so packed with material, that point or singularity must be infinitely hot and dense and just waiting to explode. That’s what must have happened: this infinitely tiny, infinitely dense, infinitely hot thing exploded and spewed out all the matter in the universe. And this hot stuff then began expanding and cooling and evolving into protons, electrons, neutrons, neutrinos, and all the other subatomic particles, and finally into coherent matter in the form of hydrogen atoms. At the same time, residual energy in the form of fast-moving photons made light waves and all the variable energies we can detect. And after a time of expansion, bits of the local scene began to contract under gravity—as stars still do today—until they could ignite a fusion reaction and begin making the other elements out of those hydrogen atoms.

All of this was called the “Big Bang” theory, somewhat derisively, by astronomers who instead assumed that the universe had always existed in a “Steady State.” The two sides might never have resolved their positions had not two radio astronomers at Bell Labs in New Jersey, Arno Penzias and Robert Wilson, discovered the Big Bang’s echo in 1965. They were trying to fix the radio interference that was plaguing a giant radio antenna that was supposed to pick up satellite communications. Nothing they tried—even sweeping it for physical debris such as twigs and leaves—could clear up the signal. It was a low hum, energy at an almost stone-cold 2.7 degrees Kelvin, or about -455 degrees Fahrenheit. This temperature matched theories that predicted the high-energy photons released in the Big Bang would, over a time of 13 billion years, have cooled to a microwave background radiation at just such a temperature.

That was proof of the Big Bang as the creation story of the universe. There was just one problem: if you rolled the apparent size of the observable universe back to that single point, it would take a lot longer than 13 billion years for it to expand to its present size. In other words, even if the universe expanded outward from that point at the speed of light, it would be a smaller universe than the one we can see today. This puzzled astronomers until the physicist Alan Guth, in 1979, conceived of Inflation Theory. This theory said that in the period between 10⁻³⁶ seconds and perhaps 10⁻³³ or 10⁻³² seconds after the singularity exploded, and for reasons that are not explained, the space containing that outpouring of material expanded exponentially at much greater than light speed. It went from a zero-dimension point to about 0.88 millimeter—about the size of a grain of sand—in virtually no time, and afterwards the universe expanded at a much slower rate. This inflation period accounts not only for the current size of the universe but also for its apparent smoothness.
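
For readers who like to see the numbers, here is a little Python sketch of that size problem. The 46-billion-light-year radius for the observable universe is not a figure from this essay but the estimate cosmologists commonly quote; treat the whole thing as a back-of-the-envelope check rather than a real calculation.

    # Back-of-the-envelope check: how far light alone could have carried matter
    # in 13 billion years, versus the radius astronomers actually infer.
    # The 46-billion-light-year figure is a commonly quoted modern estimate,
    # not something derived here.

    AGE_YEARS = 13e9                      # the essay's round figure for the universe's age
    LIGHT_TRAVEL_RADIUS_LY = AGE_YEARS    # one light-year per year, with no inflation
    OBSERVABLE_RADIUS_LY = 46e9           # commonly quoted radius of the observable universe

    print(f"Radius if nothing outran light: {LIGHT_TRAVEL_RADIUS_LY:.0e} light-years")
    print(f"Radius we seem to observe:      {OBSERVABLE_RADIUS_LY:.0e} light-years")
    print(f"The observed universe comes out {OBSERVABLE_RADIUS_LY / LIGHT_TRAVEL_RADIUS_LY:.1f} times too big")

The exact ratio matters less than the point: straight light-speed expansion cannot get you to the universe we see.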

And all of this story—from the discovery of other galaxies and the expansion of the universe, to the Big Bang, to the inflation that explains the Big Bang—has been conceived from observations and worked out with intense mathematical calculation within the last hundred years. Much of that calculation—if laymen can follow the numbers at all—grapples with issues of general relativity.

According to general relativity, time and space, or spacetime, have no fixed or absolute value. Instead, they are true and fixed only for the local observer and based on his or her speed and the gravity well in which the observer finds him- or herself. People traveling faster or existing under heavier gravity experience the passage of time at a slower rate and the shape of space at a greater curvature than people living in slower, more open domains.
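
The velocity half of that claim, leaving gravity aside, is easy enough to put in numbers. Here is a minimal Python sketch of the Lorentz factor, the special-relativity piece of the story; the sample speeds are just my own picks for illustration.

    # Minimal sketch of special-relativistic time dilation: how much slower a
    # moving clock ticks compared with one "at rest." The gravity side of the
    # story is ignored entirely.
    import math

    C_KM_S = 299_792.458  # speed of light in km/s

    def lorentz_gamma(speed_km_s: float) -> float:
        """Return the factor by which onboard time slows at the given speed."""
        beta = speed_km_s / C_KM_S
        return 1.0 / math.sqrt(1.0 - beta ** 2)

    for fraction in (0.1, 0.5, 0.9, 0.99):
        gamma = lorentz_gamma(fraction * C_KM_S)
        print(f"At {fraction:.0%} of light speed, one onboard year equals {gamma:.2f} outside years")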

But also, according to relativity, the speed limit of the universe is fixed at the speed of light, c, or 186,282 miles (299,792 kilometers) per second. So regardless of how compact the Big Bang mass might have been, its speed of expansion according to any observation was pegged at that amount of distance over time … or not.

Doesn’t this all seem to be just a bit too artificial? Massive singularities, rapid expansion, a fixed age for a universe that is continually expanding, with temporary conditions that violate other theories. As I have stated elsewhere,2 we may not yet understand the nature of space, time, and gravity at all. So we may not be equipped to unravel the nature of an expanding universe or roll its scale back to the infinitesimal spitball of hot matter that the Big Bang requires.

We may instead be living in an age of conjecture comparable—but with more sophisticated theories and advanced mathematics—to the years between Copernicus’s modeling of the sun-centered universe in 1543 and Kepler’s working out of planetary motion as ellipses rather than perfect circles between 1609 and 1619. And now we are waiting for a better theory of space, time, and gravity to account for our developing observations.

But then, on the other hand, why did the universe need to be created at all?

1. See Leonardo’s Mountain of Clams and the Diet of Worms: Essays on Natural History, from 1998.

2. See Fun with Numbers (I) from September 19, 2010, and (II) from September 26, 2010.

Sunday, March 17, 2019

Warp Drive

Enterprise warp bubble

So I get to thinking about things. And a recurrent theme with me is the size of the universe, interstellar distances, and how humanity will one day—and other intelligent beings perhaps sometime sooner—cross them.

According to our current thinking about the nature of space and time—or “spacetime,” if you will—we physical beings cannot travel faster than light.1 Supposedly, the nearer you approach light speed, or c, the more massive you and your ship become and the slower your onboard clocks tick until, finally, at c, you and the ship weigh an infinite amount and time stops for you. That would be a problem, especially since you also have to carry fuel to move that mass—at least with our current propulsion technologies. And if time has stopped, how are you accounting for your speed toward your destination? But I digress …

Popular science fiction tropes to deal with this—so that humanity and other beings can conquer and maintain interstellar empires that don’t quickly become temporally distorted and dissociated—include both wormholes and warp drives. Wormholes presumably punch through the “fabric” of space that appears to be crumpled up like a giant wad of papier-mâché, so that one place and another are not actually separated by vast interstellar distances but lie side-by-side through interdimensional space. This presumes, of course, that the entire universe we see around us is crushed up to a thing about the size of a walnut. But I digress …

Warp drives, popularized by the Star Trek television franchise, allow that those two places are indeed far apart, but that you can get from one to the other without violating the light-speed limit by collapsing the “fabric” of space ahead of the ship while simultaneously expanding it behind. You do this by creating a “warp bubble” around the ship. My best analogy for this—since it’s kind of hard to envision space itself collapsing and expanding2—is someone walking on an elastic sidewalk.

From my home in the Bay Area to the state capitol in Sacramento is a distance of about seventy miles. Walking at a steady pace of four miles per hour—my maximum sustainable speed—I could get there in about twenty hours, allowing for one or two rest stops along the way. I have long legs, so my stride is a bit longer than three feet, heel to toe, but I can’t move my legs any faster than my normal, determined pace of about two strides per second, or, say, six feet per second, so I can’t complete the trip in any less time.

But suppose I could somehow, magically, draw together or compress the sidewalk and the ground beneath it that lies in front of me, so that each of my three-foot strides might cover, say, thirty feet. And as soon as I had placed that forward foot and lifted my rear foot, the sidewalk expanded again to unclench the concrete and soil behind me. My legs wouldn’t be moving any faster; I myself would not be exceeding my walking speed limit of about four miles per hour. But with a ten-to-one advantage in ground coverage, I could make the trip to Sacramento in two hours without getting out of breath. If I could compress the sidewalk by 300 feet per step, and expand it again behind me at the same rate, I could walk to the city in twelve minutes without even breaking a sweat.
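
If you want to check my arithmetic, the whole trick reduces to a few lines of Python. The compression factors are the ones I used above; the raw minutes come out a bit lower than my round figures of two hours and twelve minutes, which allow for catching my breath.

    # The elastic-sidewalk arithmetic: my legs never move faster, but each
    # three-foot stride covers ten or a hundred times as much ground.

    DISTANCE_MILES = 70.0   # Bay Area to Sacramento
    WALK_MPH = 4.0          # my determined walking pace

    for factor in (1, 10, 100):                  # sidewalk compression per stride
        effective_mph = WALK_MPH * factor
        minutes = DISTANCE_MILES / effective_mph * 60
        print(f"{factor:>3}x compression: {effective_mph:>5.0f} mph effective, "
              f"about {minutes:,.1f} minutes to Sacramento")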

I would be warping the sidewalk and the ground under it in the same way the starship Enterprise creates a warp bubble to collapse and expand the space around it.

With my walking gait, we have known points of contact with the ground—my heel coming down, my toes pushing off—at a steady three feet per step, six feet per second. So we can easily determine how much the sidewalk has to collapse and expand to achieve a reasonable travel time. If I were moving much slower—say, hobbling with a cane—we would need to collapse larger amounts of sidewalk with each step, though at a slower rate than once per half-second, to make that twelve-minute trip of seventy miles. Conversely, if I were a seasoned marathoner, running three times as fast as a brisk walking pace, or about twelve miles per hour, we would need to take smaller bites of sidewalk but collapse and expand them at a much higher cycling rate.
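
Here is the same trade-off in a short Python sketch, holding the trip to seventy miles in twelve minutes. The stride and stepping-rate figures for the cane and the marathoner are my own guesses, put in only to show that a slower gait needs bigger bites of sidewalk at a slower cycle, and a faster gait needs smaller bites at a faster one.

    # Trade-off between stepping rate and "bite" of compressed sidewalk,
    # for a fixed effective trip of seventy miles in twelve minutes.
    # The cane and marathoner gait figures are illustrative guesses.

    TARGET_MPH = 70 / (12 / 60)                  # 350 mph effective speed
    TARGET_FPS = TARGET_MPH * 5280 / 3600        # about 513 feet per second

    gaits = {
        "hobbling with a cane": (1.0, 1.0),      # (steps per second, natural stride in feet)
        "determined walking":   (2.0, 3.0),
        "marathon running":     (3.0, 6.0),
    }

    for name, (steps_per_sec, stride_ft) in gaits.items():
        bite_ft = TARGET_FPS / steps_per_sec     # ground each step must cover
        factor = bite_ft / stride_ft             # compression relative to the natural stride
        print(f"{name}: {bite_ft:,.0f} ft per step, a {factor:,.0f}-to-1 compression, "
              f"at {steps_per_sec:g} steps per second")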

What’s left out of the Star Trek story is how fast the Enterprise can travel between the stars without warp effects. The alternative to warp drive in the narrative is “impulse drive,” which is presumably some kind of mass-reaction thrust. But the stories told around the ship’s adventures never exactly correlate the capabilities of either drive with distances traveled. Sometimes a modest speed at warp drive covers light years in a matter of minutes; sometimes much longer. Sometimes a hefty fraction of impulse drive will take them halfway across a solar system in a minute or two; sometimes much shorter distances—say, to close with an enemy vessel a thousand kilometers away—in the same time.

Various official and unofficial “manuals” created either by the showrunners or the fans attempt to quantify these fantasy speeds. One reference says that maximum impulse speed is one-quarter of light speed, or 167,000,000 miles per hour. So, without the benefit of warp drive effects, the ship could travel the 93 million miles from Earth to the Sun in about half an hour. Or the 365 million miles between Earth and Jupiter, at their closest approach, in about two hours and eleven minutes. So, to maneuver within orbital distances around a planet or to close within range of an enemy a hundred kilometers away, the ship would need to operate at the barest fraction of impulse drive. Not tenths but hundredths or thousandths of the available thrust.
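
Those travel times are easy to verify. A minimal sketch, using the quarter-light-speed figure from that reference and the round distances quoted above:

    # Travel times at one-quarter impulse (one-quarter light speed), using the
    # round distances quoted above.

    C_MPH = 670_616_629                  # speed of light in miles per hour
    QUARTER_IMPULSE_MPH = C_MPH / 4      # roughly 167.7 million mph

    legs = {
        "Earth to the Sun":  93_000_000,     # miles
        "Earth to Jupiter": 365_000_000,     # miles, at their closest
    }

    for leg, miles in legs.items():
        hours = miles / QUARTER_IMPULSE_MPH
        whole, minutes = int(hours), round((hours % 1) * 60)
        print(f"{leg}: about {whole} hour(s) {minutes} minute(s)")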

All of which—and given that these fantasy speeds are extremely slippery—makes me wonder how much of the fabric of space the starship’s warp bubble needs to collapse and expand to improve appreciably on its maximum non-warp speed. If the bubble extended for just a hundred feet around the ship, or even a couple of thousand feet, it would have to cycle extremely fast to make any decent headway on an interstellar flight. I mean, it would be collapsing and releasing that much space on microsecond or even nanosecond cycles, over and over again. Otherwise, the ship’s maximum 167-million-mile-per-hour speed would simply overrun the bubble.

Or the ship’s warp bubble would have to take in a lot of space. If the Enterprise is, as reputed, a kilometer long, and moving at even a fraction of its top impulse speed, say, fifty percent, or 83.8 million miles per hour—or 23,285 miles per second (37,474 kilometers per second)—then it would have to collapse a stretch of space about 37,500 times its own length each second just to keep pace with itself. To increase this natural, non-stressful impulse speed by a factor of ten, it would have to collapse 374,740 kilometers of space ahead of its bow each second. That’s just a little less than the distance from Earth to the Moon, which is 384,400 kilometers. And even that wouldn’t be a very high “warp factor,” because at ten times half-impulse speed, the ship would travel to the nearest star, Proxima Centauri—a distance of 4.22 light years, or 40 trillion kilometers—in about 3.4 years.
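
Again, the arithmetic is simple enough to sketch in Python. The kilometer ship length and the half-impulse speed are the assumptions stated above; the light-year and year conversions are standard.

    # Warp-bubble bookkeeping at half impulse for a kilometer-long ship.

    C_KM_S = 299_792.458
    HALF_IMPULSE_KM_S = round(0.125 * C_KM_S)  # half of one-quarter light speed, ~37,474 km/s
    SHIP_LENGTH_KM = 1.0                       # the ship length assumed above

    print(f"Ship lengths of space crossed per second: {HALF_IMPULSE_KM_S / SHIP_LENGTH_KM:,.0f}")

    collapse_km_per_s = 10 * HALF_IMPULSE_KM_S     # ten times half-impulse speed
    print(f"Space to collapse each second at 10x: {collapse_km_per_s:,.0f} km "
          f"(Earth-Moon distance is about 384,400 km)")

    KM_PER_LIGHT_YEAR = 9.461e12
    SECONDS_PER_YEAR = 3.156e7
    proxima_km = 4.22 * KM_PER_LIGHT_YEAR
    years = proxima_km / collapse_km_per_s / SECONDS_PER_YEAR
    print(f"Trip to Proxima Centauri at that rate: about {years:.1f} years")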

To obtain the warp speeds needed to represent travel times in the Star Trek world, the Enterprise would have to be collapsing volumes of space roughly equivalent to our solar system. Either that, or the ship would be collapsing and expanding smaller volumes at much higher cycling rates—so high that the warp field would probably destabilize any “structure” such a volume of space might have and reduce any interstellar dust and gas captured in that volume to blazing quarks.

Early in the Star Trek narrative, the ship had to travel far outside a planet’s gravity well before engaging its warp drive. That story element seems to have since been dropped from the telling in later series—again, the show’s distances and speeds are slippery things. But still, if a ship that was traveling even close to a near-Earth orbit engaged its warp drive at even the lowest factors described above, it would severely damage the fabric of the planet and play havoc with the Moon’s orbit.

But the question of the ship overrunning the warp bubble presents a conceptual puzzle, doesn’t it? The warp drive serves no purpose as a travel enhancer if the starship merely sits in the middle of a bubble while space pulsates around it: contracting and relaxing ahead of the bow, expanding and then retracting behind the stern. Just as I must step across the wrinkled, compressed concrete of the sidewalk to take advantage of that thirty- or three hundred-foot contraction, so the starship would have to cross the region of collapsed space ahead of it in order to put all that collapsed space behind it. If I’m walking in the open air and only the sidewalk is contracting beneath me, then my body is not affected by the compression. But for the starship, all of space is collapsing around it. Presumably this collapse would also affect the fabric of the ship’s hull and the people inside. So how does the ship survive that ultimate disruption? But perhaps I digress …

As things stand in our real, non-fantasy physics, we can’t begin to imagine grappling directly with the “fabric” of space—if such a thing even exists—or how we might make it collapse by so much as a cubic centimeter. My bet is that this kind of roughhousing with even so small a volume would still require immense amounts of energy. To collapse a volume stretching from Earth to the Moon would take more energy than you could get from any conceivable matter-antimatter reaction. And to collapse the volume of even a medium-sized solar system would be playing with energies reserved for the gods.

Space is really, really big. Manipulating it in any significant way to travel between the stars will take unheard-of energies and a physics we can’t yet begin to understand. … Maybe it would be simpler just to punch through a wormhole to the other side.

1. Neither can energy elementals or beings of pure thought, according to the theory of relativity. But for now we’ll concentrate on carting our physical, protoplasmic bodies to the stars.

2. As I’ve said numerous times before, I don’t think our physics or mathematics really understands or accurately describes space, time, and gravity. For which see, once again, Fun with Numbers (I) from September 19, 2010, and (II) from September 26, 2010.