Sunday, August 21, 2016

Utopia and Dystopia

I grew up with the dystopian1 novels 1984 by George Orwell and Brave New World by Aldous Huxley, and in college I studied the utopian works The Republic of Plato2 and Looking Backward by Edward Bellamy, and the dystopian We by Yevgeny Zamyatin, among others. More recently, I’ve read Suzanne Collins’s dystopian Hunger Games trilogy and seen its movie adaptations, and I’ve watched the movies of Veronica Roth’s dystopian Divergent series, although I have not yet read the books.

I understand why an author would write either a utopia or a dystopia. The former tests ideas about how the human condition and human society might be made better, if not perfect. The latter demonstrates how, in the presence of some human failing or absent some leavening force, society might become very much worse. All of this is the proper sphere of science fiction: to envision an alternate future for humanity and the mechanisms that might drive us toward its attainment. I understand the purpose of writing a utopia or dystopia, but I’ve never tried to write one myself.

One might also see in much of our current politics, especially among American Progressives and European Socialists, a real-life attempt to erect utopia here on Earth, marked by universal and equal access to education, health care, job opportunities, plentiful goods and services, and proportionate personal wealth. In similar fashion, the goal of the early Marxists was to create a worker’s paradise of individual labor freed from the strictures of market forces, return on capital, authoritarian bosses, religious coercion, and government interference. This was also the theme of the John Lennon song Imagine: no religion, no countries, no possessions, “… nothing to kill or die for … a brotherhood of man.” And it was the drive of every 19th-century utopian commune in America that was run along socialist lines to create a separate space of equality, friendship, love, and shared physical labor.

I can understand the motive for writing such stories and dreaming such dreams. But, for me, the books and the political schemes they represent simply do not work. I don’t think I could ever write a thoroughgoing utopia or dystopia.

These books and political programs are all based on distorting human nature. In utopias, the distortion is toward a good and positive spirit of sacrifice and selflessness, which is indeed found in pure form in some human beings, but is by no means the dominant characteristic of our species. In dystopias, the distortion is usually toward a spirit of gullibility and passivity on the part of society as a whole, and callousness, manipulation, and greed among its leaders—which again marks some humans to a high degree but is not the nature of all or even most human beings. The result is that most of these stories present societies that revolve around only one political premise, one positive or negative cultural value. And most of the characters in them are mere caricatures or cartoons, reacting to that premise, sustained or squashed by that value, and not real people with complex emotions and motivations. Paintings done in two dimensions with a limited palette are boring. Music played on a simple scale without sharps or flats, or the rudiments of harmonics, is boring.

Probably the most true-to-life—and therefore most frightening—of the dystopias is 1984. Its depiction of an all-powerful state with an ever-watchful leader feels real when compared with Nazi Germany, Soviet Russia, or Communist North Korea. The protagonist Winston Smith and his paramour Julia are small, powerless, feeble in their attempted resistance, and ultimately left naked in their defeat. The mass of society is obedient to a fault, innocently accepting when people and events are written out of the history books and sent down the “memory hole.” They try eagerly to comply with the state-sponsored changes in language which make productive thought harder and harder. There is an apparently active underground, but it turns out to be only a state-sponsored fiction created to cement state control.

You read the book and despair—until you remember that even the most draconian states, like the Nazis and Soviets, had their internal resistance groups which were not state approved. Real people have a conscience and a working memory. Yes, you can imitate public allegiance through the regulation of civic action, like forced attendance at parades and regimented salutes, and the manipulation of popular speech, like substituting “Heil Hitler” for “Good morning.” Yes, you can turn some people into frightened sheep and others into social-climbing wolves. But in the privacy of the home, in the intimacy of the family, the leavening of human nature will react with doubt and scorn. The Soviets, for all their propaganda machine, oiled with the likes of the dreary newspaper Pravda and the even drearier humor magazine Krokodil, reaped a harvest of popular samizdat, or underground publications and Western music laboriously copied out or rerecorded manually and passed along from hand to hand.

Utopias and dystopias both seem to founder on questions of scale and issues of absolutism. In Ray Bradbury’s Fahrenheit 451, the prohibition on books and reading matter was so complete that, at least in the movie adaptation, people were looking at newspapers printed as comic-book imagery with empty word balloons, and they seemed to enjoy television plays with empty drama about who gets to sleep in the Blue Room. Real people don’t put up with such nonsense.3 Presumably, then, statutes and public announcements, and technical documents such as instruction manuals and operating procedures, would all be transmitted through verbal recitations and video demonstrations rather than written text.4

The matter of absolutes comes home in the idea of the perfect society having “no possessions.” I can understand how that would mean one person not owning, as his or her sole fiefdom, a chain of factories with monopoly power over some national good or service, employing hundreds of thousands of people who respond to the owner’s instructions as if they were government edicts, and paying him or her billions of dollars in profits. I can also imagine this deprecation of the human impulse toward acquisition as applying to the farm that a family works to both provide a private food source for themselves and generate a cash crop, or the corner store that a family runs to sell food and sundries to the local neighborhood while generating work for themselves and a personal income. “No possessions” might also apply to the house a person lives in or the car he or she drives, as communal housing and public transportation can be thought more equitable and efficient. But do I get to own the pants that I wear today, wash myself tonight, and then put on again tomorrow? How about my shoes or my jacket? Or the toy my child plays with and loves to distraction? Where does one draw the line when decreeing “no possessions”?

In 1984, the telescreen in every room not only played out endless exhortations and propaganda but also watched as the occupant went about his or her private business and eavesdropped on every conversation. The slightest deviation from party discipline could presumably be detected and punished. This sort of continuous surveillance was science fiction when George Orwell wrote the novel. However, as many people discovered when the Patriot Act was approved in 2001, powerful computers and the transition of our telephone system from analog copper wires to networks switching around digital packets meant that the National Security Agency and other government bodies could sample every conversation flowing through the system, pick out key words and subversive ideas, and build a case against any citizen. This was the absolute control of the 1984 telescreen brought terrifyingly to life.

Except … it’s not. Even if a network of supercomputers could analyze, weigh, and flag subversive speech among the billions of words that 320 million Americans—and about as many more foreign nationals—speak into the telephone system every day, the government prosecutors with their limited—though still impressive—resources would go crazy trying to track down and take action on every lead. Even to become a “person of interest,” a potential assassin or saboteur needs to do more than speak a few predesignated words into a telephone. And, as we’ve seen from the news analysis of recent terrorist acts, even people of interest with proof against them tend to be vetted and dismissed into a sea of suspicious but not indictable characters. Supercomputers may do the flagging, but human beings with their limited imaginations, faulty attention spans, and imperfect understanding of every situation will still do the follow-up interviews.5

The world is neither wholly good nor bad; instead, it blends both characteristics in equal measure. Human beings are neither wholly self-sacrificing and subservient, nor selfish and grasping; instead, they are a mixture of both, in different measures at different times. Emphasizing one aspect of the world, society, or human nature might make a strong point in an interesting study, but it does not make for a good story. Real people make for good stories because they are not caricatures, not entirely predictable, and they follow a story arc that the reader or viewer can only bet on but never know for certain in advance.

1. Thomas More coined the word utopia from Greek roots meaning “no place” for the title of his 1516 book.

2. Although The Republic is commonly described as a utopia, I wouldn’t want to live there. Like Thomas More’s description of an ideal island full of selfless people farming the land and being rationally distributed, and when necessary redistributed, around the countryside in groups to maintain an unnatural state of balance, Plato’s ideal city-state treats its people more like puppets than citizens: deprived of family life and private property, subjected to an educational system strong on physical conditioning and mathematics, forbidden to read poetry and fiction, and drenched in public-spirited martial music. In either situation, I would be planning my escape … and I suppose that’s the point.

3. Even American television programs—or at least those that survive past the cancellation point of their first season—have some content or features to which a conscious, self-aware, adult human being can relate.

4. When I worked as a documentation specialist in the pharmaceutical industry, the question arose about using photos and illustrations in our operating procedures—and presumably this would extend to reliance on training videos. The U.S. Food and Drug Administration regulations require written procedures, which can be cited and specifically enforced, rather than imagery and demonstrations, which are subject to the viewer’s attention span and interpretation. The force of law will still be codified in words.

5. But with all of this, I cannot account for North Korea. There the population lives in primitive darkness—see the nighttime satellite photos of a blacked-out country—under gulag conditions, and on the edge of starvation. The Kim family and their military supporters seem to have achieved total, interpersonal, locked-down control of the country in the sixty-odd years since the Korean War ended. Perhaps you cannot breed humans into self-sacrificing sheep, but in three generations of unrelenting surveillance and punishment you can shape them into mice that hide from the daylight.

Sunday, August 14, 2016

Rational Thoughts on Suicide

Suicide—the taking of one’s own life or allowing oneself to die with or without a fight—is not always or by itself an irrational act. As a novelist, I can think of many situations where a calm and rational person might be willing to face certain death in order that others may live. This is the wounded soldier, found in many stories, who stays behind to hold off the approaching enemy while the rest of the company escapes. Or the Sydney Carton1 who offers himself in the place of another, better man.

But rational suicide might not always involve self-sacrifice. A person faced with an inevitable and painful death, such as burning alive or succumbing to a ravaging disease, might choose to accept a quicker, less painful way out of life. This is not an irrational act, although it might be a desperate and despairing one.

Our species could not be fully self-aware, or even fully human, if we could not rationally contemplate our own personal destruction, the end of a time that we must know is finite. Indeed, I have always favored the definition Robert A. Heinlein gives for an adult: someone who knows he is going to die. Once a person has come to terms with the inevitable, he or she knows what is possible, understands the value of his or her own life, and can decide how best to spend it. That is, how to weigh the potential achievements of the remaining years against the goal that is on offer now. Someone who does not know those years are already numbered, no matter how many they may be, and death is inevitable, whenever it comes—such a person remains a child with the fancies and illusions of a child.

As adults, we want our lives to mean something: to serve some purpose greater than ourselves. Even if that purpose is one we have chosen for ourselves and serves some internal ideal—such as painting a beautiful picture or writing a thoughtful novel, something only we ourselves can judge and appreciate—it is still a greater purpose than satisfying our personal wants or gratifying our senses. In the same way, we want our deaths, the last act of our lives, to mean something as well, to serve a purpose greater than demonstrating our own foolish choices and carelessness.

As human beings, we strive for purpose in a world, and in an ecological niche, that does not automatically provide sense and meaning to our lives. Yes, we have the commandment, written into our genes as in our Bible,2 to go forth, be fruitful, and multiply. But this is not an individual mandate. Being one link in a chain that stretches backward to the first one-celled microbes and forward to whatever comes next in the evolution of life on this planet is simply a biological necessity. And indeed, to fail to reproduce is an act of cellular suicide all in itself. But merely having children—for most people, males especially—does not satisfy the rational part of the brain that celebrates the individual, the ego, the “I” that is not merely a collection of cells but an autonomous, free-willed being.

Nothing in life can supply the ego’s purpose from the outside. Well, except perhaps for a parent or kindly grandparent who bends the imagination of a young child toward a certain pursuit, amenable to the child’s talents, experience, capabilities. Such a lucky child may grow up with an ingrained sense of purpose that he or she might think came out of the air, naturally, as a directive from some higher power.

But the rest of us flounder. We must decide for ourselves what our destiny and our fate will be. And many of us never rise to the awareness that this is a natural choice at all, that we must put thought and energy into deciding what path our lives will take and what kind of person we will become. For those who do not ever recognize the choice and its importance, life is a matter of drifting on the currents, like a not very interesting character in a not very well written novel. For such people, suicide might come easily.

The wish to continue in life and fulfill that purpose is also a matter of projection, expectations, and the weighing of chances. For those of us who make the arts our personal goal and the focus of our extra-biological attentions—that is, aside from the daily routine of eating, sleeping, bathing, dressing, and other self-maintenance activities—the realization that our own talent may not meet expectations, that a future of study and practice won’t improve our odds of success, and that we will end in obscurity can be a crushing blow. “Ego death,” as one of my wargaming friends describes a total, ignominious defeat.

Yes, we are assured that the effort is the goal, that simply doing the work is its own reward, and that fame and fortune come to but a few. If an artist or a writer can be satisfied with his or her own work, no matter what the critics and the buying public think, then these palliatives will satisfy the demands of ego and purpose. But what happens when the creator looks at the work, the total oeuvre, and sees only trash?3 Then he or she has failed not only the expectations of the public, friends, and family, but his or her own as well. And then nothing is left. Ego death for real.

Given this potential for critical self-doubt, perhaps it is better to make the personal goal simply one of offering service to others, in the manner of Mother Teresa. We can make personal meaning out of helping wherever there is a need and we can supply a willing pair of hands or a problem-solving intellect. In these cases, the overall quality of the work and the personal responsibility for the outcome are less important than the will and vigor with which the effort is made. The outcome lies in other hands, the responsibility with the fates or the gods.

And, as to ego death, people go through calamities all the time. Into each life comes the loss of a loved one, alienation from family and friends, disappearance of fortune or reputation, devastation by storm or fire with the loss of a home or property into which the person has put so much of his or her time and effort. The things we value turn to ashes and dust. Our hope for the future, of living out our lives in a time bubble where these perishable things remain forever unchanging, is dashed. And yet into the void created by such losses there sometimes seems to creep—at least for those of us who are lucky in attitude, or have learned from early training, or persist by some cellular vitality—the restless turning to other loves, other goals, other vessels for our sense of self, security, attachment to life, and hope for the future.

The lucky people can bend with misfortune, shift gears, find new roads, and move forward. In fact, they may never look far enough down the road on which they are traveling to ask what happens when it ends. They know that all roads eventually end, but that most roads also branch out, that goals are malleable, and that people—every person, regardless of past history—are capable of remaking themselves into something new. There is always something new that a human being can try. All it takes is bravery and patience.

Life is persistence. And it can be a long time until the candle finally burns out. That is also something every adult knows.4

1. From Charles Dickens’s A Tale of Two Cities.

2. Genesis 1:28 in the King James Version.

3. Public radio personality Ira Glass has an especially apt thought in this regard.

4. After reading all this, Odin asked, “Does he have any idea what’s coming?” And the Three Norns replied in unison, “Nope.”

Sunday, August 7, 2016

Sparkly Shoes

Recently in our condominium garage we came across our neighbors from down the hall, who have a little girl about three years old. She was stomping across the pavement, and with every step her tennis shoes gave off red, blue, and green sparkles. Clearly, she was delighted with the effect, and so were her parents. And that made me think …

When I was growing up, batteries were bulky things—mostly C and D cells—that tended toward fragility and leaked various corrosive liquids. Tiny, powerful, long-lived batteries based on rare minerals like lithium were decades away from commercial use. Back then, too, strain gauges were exotic devices in the hands of NASA and possibly the military. And light-emitting diodes (LEDs) were either unknown or still in deep development in the laboratory. If someone told me that in my lifetime an entrepreneur would put them all together to make sparkly shoes for toddlers … No, that someone would think of putting these exotic and expensive devices into shoes for which there is no naturally perceived need, and that parents would buy them just to get a smile from a child’s face—well, I would have marveled at the thought.1

In another amazingly silly use of high technology, we now have millions of people all over this country using their smartphones—which have embedded applications such as timekeeping, photo imaging, global satellite positioning, and software programming—to track down and “capture” mythical Japanese pocket monsters, or “pokémons,” so they can win non-monetary credits or kudos or some kind of recognition, even if it’s only their own self-satisfaction.

Please understand that I’m not against sparkly shoes and pokémons. In fact, as a convinced free-market capitalist, I find this frivolous use of advanced technology absolutely wonderful. We live in a world where whimsy and fun still matter. And smart entrepreneurs can still make a buck inventing clever ways to amuse other people. You might call that buck-making a cynical manipulation of people’s emotions. I call it, in the words of Henry J. Kaiser, “finding an [as yet unspoken] need and filling it.”

A socialist or communist society would never come up with these things. In such societies, the Ministry of Shoes would be dedicated to making sober, sensible, box-toed Oxfords for all the serious, pre-grownup children. And when every last child had at least one pair of regulation shoes—as if the children of America are not actually swimming in shoes—the ministry would turn its attention to other worthy causes, like preserving cattle hides, preventing deforestation, or engaging in Muslim outreach. The Ministry of Shoes would never think to develop, manufacture, and offer sparkly shoes as a secondary and delightful addition to a toddler’s wardrobe. And the Ministry of Communications would never think to put a camera, programming, or GPS function into a telephone in the first place. After all, the sober, sensible bureaucrats in charge of new product development would never let frivolity and fun enter the fixed-market equation while there were still hunger, want, and homelessness somewhere in the world.2

The amazing thing about this rise in the marketplace of sparkly shoes is that the national supply of batteries, strain gauges, and LEDs has not been in any way depleted. Neither has the playing of Pokémon Go cut into the availability of telecommunication or satellite positioning services for the rest of the country. Sure, there are children in Ethiopia and South Sudan who are deprived of their fair share of sparkly shoes—as I am sure the military establishments in those places are also suffering a dearth of batteries, strain gauges, and LEDs. But their lack was not caused by putting sparkly shoes on the feet of American toddlers. And stripping the sparkles from American sneakers would do nothing to put more shoes on the feet or food in the mouths of African children, nor would it improve their local economy or raise their educational prospects.

As I’ve noted elsewhere, the economy is not a pie. Slicing economic rewards thinner for me does not create more wealth for you, or vice versa. Rather, the economy is like a rain forest ecology: the more life there exists under its canopy—capturing the energy of sunlight and preserving it as fruits, seeds, sap, edible leaves, insects, birds, beasts, and compostable mulch3—the more niches for life there can be. The more people who are out there in the economy creating sparkly shoes and pokémon games, the more incentive there will be to demand, and more wealth to fund, the next wave of miniaturization in batteries, strain gauges, LEDs, megapixel cameras, computer controls, GPS satellites, and a host of related technologies.

This has been the story of our amazing escalation in technology since the invention of the steam engine as a coal-mine dewatering machine in the early 1700s. Someone thinks of a new application—put the engine in a boat with a paddle wheel, put it in a cart on steel rails—and soon the technology is growing and changing, becoming more ubiquitous. And, with the human capacity for learning, retaining, and sharing experiences and discoveries, the technologies usually become smaller, better, more efficient, and less expensive. If you doubt this, think back to the first cell phones in the 1970s and ’80s: usually mounted in cars, because of their bulk and power requirements, then more portable but still the size of a brick, with a Western Electric–style handset on a cord. A rich man’s toy. Now you can buy a mobile phone for the cost of a good lunch, and in some countries it’s easier to get cellular service than a landline.

Wars have sometimes helped with the development of some of this technology. Certainly, World War I saw an improvement in the mechanization and automation of the battlefield, with benefits drifting over to civilian technology in the form of more robust automobiles and airplanes. World War II saw vast improvements in radio technology, radar, codes and code breaking, the first computing technology—generally associated with code breaking, artillery firing solutions, and development of the atom bomb—and large-scale production and use of aluminum in aircraft manufacturing. These advances then provided a boost to everyday civilian life in the decades that followed.

But television also came along between the wars, served no real military purpose, and advanced just as rapidly in purely civilian usage. And the silicon transistor—which progressed from individual devices that emulated old-style vacuum tubes to integrated circuits that put a huge number of gated operations onto something the size of a postage stamp—was first a civilian invention. Sure, military technology benefited from using integrated circuits, but so did whole civilian industries of electronics applications for entertainment, automotive controls, and mobile computing. Now we are entering the biotech age, and that owes most of its advances to the sequencing of the human genome—a purely civilian project—and almost nothing to work on bioweapons.

Barring the civilizational devastation of a global economic crash, nuclear war, or asteroid strike, this advancement in technology will continue for as far as the eye can see. Some advances are predictable, and as a science fiction writer I try mightily to get ahead of them: like more convenient and personalized communications, new clothing options, transportation modes, and medical procedures, all based on computerized automation, artificial intelligence, and the linkage of systems and technologies that once operated in isolation. Some advances I defy anyone to predict or even imagine: like sparkly shoes and Pokémon Go.

The world of the next twenty years, hundred years … thousand years is going to be unrecognizable to our most modern eyes. I can hardly wait!

1. And then I would have asked, when do I get my Jetsons-style jetpack? Oh, yes, that’s almost here.

2. Of course, in a socialist or communist society, where ever-declining government tax revenues must chase ever-increasing economic and social problems—“eventually running out of other people’s money,” in the words of Margaret Thatcher—there would never be any money to spare for frivolity and fun.

3. In this view of economics, the energy from sunlight captured in carbohydrate compounds in the rain forest is analogous to the energy of human work and imagination captured in goods, services, and the money to pay for them in the marketplace.

Sunday, July 24, 2016

Excess Energy

I was watching a baseball game the other day. As a batter stepped into the batter’s box, I took note of his motions. He scraped the dirt with the tip of his bat, tracing out an obliterated line on one side of the box. He set his feet, then kicked out a rut in the dirt, and set his feet again. He tapped home plate twice with his bat, then brought the bat up behind his shoulder. He took a slow practice swing—but not all the way through—while waiting for the pitcher. He brought the bat up again. He shifted his feet and lifted one knee.

We’ve all seen this performance before. But then I got to wondering how the game of baseball would be played by robots. A machine would walk or wheel itself into position—a place determined by triangulation from visual cues in memory to be the perfect location for the hitting of a ball. It would raise the bat to the correct elevation and angle for a perfect swing—again established by past programming. It would then wait for the pitch and for its cameras to collect enough information on the ball’s inbound trajectory to make the perfect connection in terms of speed, force, and angle, allowing for proper deflection and spin to put the outbound trajectory in the strategically perfect part of the field. And in between these separate acts—positioning, attaining bat angle, observing trajectory, and swinging—there would be nothing, stillness, silence, and neural calm, because everything was determined by programming from prior analysis and evaluation of previous experience and optimal probabilities.

Does this stillness due to programmed experience make the robot the better player? No, because the human being has just as much experience. Although a human batter could not verbally articulate or consciously identify all the previous pitches he had seen and all of his best responses and swings, they would indeed be stored somewhere in his memory and fed into his muscles by training and learned reflex.

Why then does the human batter have to move around so much? Why is he wasting energy that should be contained and stored and fed into the muscle power behind his swing?

Consider that our bodies are made up of individual cells. These are collected into nerve networks and muscle fibers, but each cell is still functioning as an independent metabolic factory. Each nerve is firing and each muscle is moving all the time, even when not needed—sometimes even when they are not wanted. Our bodies are complex associations of once-independent organisms, rather than intelligently designed, single-purpose—or even multi-purpose—machines. As individuals, we must learn to harness this energy. If we desire stillness, either mentally or physically, we must learn it through practice and concentration. Our natural state is daydreaming and fidgeting.

This is a survival skill. Not the peaceful pose and focused mind of the meditative Buddha, but the wandering mind and restless body of our keyed-up natural state. As a species that once lived by hunting and gathering—and was often hunted in turn by larger, fiercer predators—we benefited from a pair of eyes that constantly scan and search from near to far and then left to right, an attention span that focuses briefly here and there in the underbrush and then in the sky, and muscles twitching and poised to move instantly in any direction. We put that excess energy in all those nerve and muscle fibers to good use: ready to snatch the next berry we spot, shoot an arrow at the next shadow that moves in the forest, or take advantage of a sudden line of retreat when the bushes shake and the leopard leaps.

This excess energy means that our minds and bodies are reevaluating current conditions and recalibrating our potential all the time. We need that in our upright stance and bipedal gait, because balancing on two legs requires constant tensioning of our muscles and tendons to maintain our posture, and constant monitoring of our inner-ear balance in relation to gravity as we sit, stand, or move in order to keep our upright bodies from toppling. It’s a problem that a turtle dragging its shell along on four legs—or a robot stabilized on four wheels or a tripod—does not have to face.

This excess energy and the jumpy, unfocused awareness and twitching muscles that accompany it mean that we have to practice stillness not only when we wait patiently along a game trail for the shy deer to approach, but also when we want to perform a coordinated activity, like drawing and shooting an arrow or hitting a baseball with a bat. We must practice the individual acts of preparation, tension, and release; match the starting and ending positions of hands and limbs to the smooth movements we are about to make; focus our eyes on the selected target; and prepare our minds with rehearsed imagery and spiritual self-talk in order to keep our thoughts focused on the task at hand and coordinate the entire performance subliminally. We do these mental rituals so that this time—in actual execution instead of random practice—we don’t have to consciously break the performance down into the tiny, component movements and intermediate physical and mental positions that make up the whole.

Humans are dynamic systems that brim with energy—both mental and physical—and operate at much higher cellular and nervous rates—“clock rates,” if you will—than any machine designed for a single, dedicated purpose. The machine’s computer brain might have quicker reflexes, but it is programmed to pick up certain pre-defined signals and then make predictable, pre-written responses. The machines—at least in their current generation—are “one-trick ponies.” Give them the wrong signal or one for which they have not been programmed, and their response will be unpredictable—or they will remain completely inert.

To give the machines the appearance of versatility, a programmer must plan for and write instruction sets for more and more conditions, signals, and responses. With large memory capacity and high cycle rates, the machine may be able to store more and more programming able to cope with more and more situations. But a machine—at least in the current generation—will not be able to encounter an entirely new condition, receive a totally unknown signal, and intuit a correct response by comparison with past experience.
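To make the contrast concrete, here is a minimal sketch (purely illustrative, not any real robot’s control code) of what pre-written responses amount to. The signal names and responses are invented for the example; the point is only that an unknown signal maps to nothing.

```python
# Illustrative sketch of lookup-table "programming": known signals map to
# pre-written responses; an unprogrammed signal produces no intuition at all.

PROGRAMMED_RESPONSES = {
    "fastball_down_the_middle": "full_level_swing",
    "curveball_low_and_away": "let_it_pass",
    "pitchout_wide": "hold_position",
}

def respond(signal: str) -> str:
    """Return the stored response for a known signal, or remain inert."""
    return PROGRAMMED_RESPONSES.get(signal, "remain_inert")

print(respond("fastball_down_the_middle"))  # -> full_level_swing
print(respond("knuckleball_fluttering"))    # never programmed -> remain_inert
```

A human batter seeing that knuckleball for the first time would still take a guess by analogy with past pitches; the table cannot.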

Humans also might not do well in a new situation for which they have not practiced. Give a bat to a young boy who has never played baseball, then throw a ball at him, and he might not hit it with any skill or grace. But if he has ever seen a ballgame or watched a brother or sister at play, he might at least try, because mimicry is another human trait. A robot will wait for specific instructions as to how to focus on the ball, position the bat, and swing it.

We humans divert all that excess energy and mental capacity into random fidgets and idle daydreams, because we are omni-purposed beings who live in an unpredictable world.

Sunday, July 17, 2016

Taking Responsibility

In response to a Facebook friend’s recent posting of a video showing drivers supposedly blindfolded when moving among motorcyclists, I replied with my Three Rules for Riding: 1. Nobody’s looking out for you. 2. Even if they’re looking right at you, they don’t see you. 3. Even if they see you, they don’t care! It’s a reminder that drivers don’t understand the vulnerabilities of visibility and stability that a motorcyclist faces, and so it’s my job as a rider to watch out for them and stay out of their way. If I get into an accident—any accident, including being hit by another driver, skidding on a patch of oil or gravel, or running off the road—it’s going to be my fault.

This is part of what I call my “doctrine,” which is a loose and uncatalogued collection of rules and reminders that clicks into place every time I swing a leg over the saddle. One example of the doctrine is what to do if something darts out from the side of the road. In the split second between visual contact and collision, when the SIPRE process starts up,1 I have to decide how to weave so that my line of travel passes either in front of the moving object or behind it, based on its speed, my speed, and other characteristics. Behind is usually the safest choice, because that minimizes the potential for collision by separating the object’s observed trajectory from mine. And even if I miscalculate and hit the object broadside, I’ll be impacting with only the force of my own momentum and not adding the object’s momentum to the equation. If the moving object is a big shape, like a deer or a car, my doctrine says to steer so as to pass behind.

But if the object is small and light, such as a ball or a dog, the doctrine changes. Depending on location and time of day, such as a residential neighborhood at midday or in the evening, any object moving into the street has a high probability of being chased by a human being: a child after his ball or dog, or a mother after her toddler. In those cases, the doctrine says to steer in front of the moving object—and either brake sharply or speed up, depending on velocities—so as not to hit either the object or the person chasing it.
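Reduced to a toy decision function, the rule might look like the sketch below. It is purely an illustration: the real doctrine lives in trained reflex rather than software, and the categories and names are simplifications of my own.

```python
# Toy illustration of the darting-object doctrine described above.
# Real riding folds in speeds, distances, braking room, and judgment.

def evasive_line(object_size: str, neighborhood: str, time_of_day: str) -> str:
    """Choose whether to aim behind or in front of something entering the road."""
    if object_size == "big":  # deer, car: pass behind, never add its momentum to mine
        return "steer_behind"
    # Small and light (ball, dog): in a residential area at midday or evening,
    # assume a person may be chasing it into the street.
    if neighborhood == "residential" and time_of_day in ("midday", "evening"):
        return "steer_in_front_and_adjust_speed"
    return "steer_behind"

print(evasive_line("big", "rural", "night"))           # -> steer_behind
print(evasive_line("small", "residential", "midday"))  # -> steer_in_front_and_adjust_speed
```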

My riding doctrine has many pieces and parts like this, covering dozens of issues such as when to brake or accelerate in an emergency; how to treat any visually dark patch on the road, which might be a pothole, a piece of shredded truck tire, or a patch of oil; mirror checks and head checks when traveling straight, passing, and changing lanes; how much of a safety margin to maintain around the bike, including the “two-second rule”2 and when it’s safe to lane split;3 and so on.

Thinking ahead, being prepared for potential problems, and taking responsibility for your own health and safety, as well as for those around you, applies to more than just motorcycle riding.

I take the concept of margins seriously. This means providing extra space, time, or some other measurable dimension to allow for the unforeseen. In making travel plans, I always allow a time margin to account for extra traffic, an accident, or simply my “running late” before leaving. The thought that, in most cases, nothing will happen and I will arrive early, to be left kicking my heels at the airport or some other venue, does not deter me. That is why I have music and an ebook reader on my iPhone. Being a respectful sort, I would rather arrive early and then waste my own time waiting than arrive late and inconvenience other people—or perhaps miss the flight.

My work in publishing, technical writing, and corporate communications often involved projects on tight schedules. Again, I would build a time margin into my planning to allow for delayed interviews, extra review cycles, and breakdowns at the printing press. Sometimes I would also plan a dollar margin into the budget for these things, but budgeting and accounting were usually outside of my control. Because I had these time and cost margins built in, I often finished my work ahead of schedule and under budget. And that’s always a good place to be.

Taking responsibility for your life and actions—as well as any accidents that might occur, on the presumption that most accidents are foreseeable and therefore avoidable—means that you live carefully. In my life, I have generally known where I was going and made an internal, if not entirely formal, threat evaluation of my next moves. I know there are places which are not safe and avoid them unless pressing business takes me there.

Understanding that future events might sometimes take me into dangerous situations, I took martial arts training at the university.4 I have carried a serviceable pocket knife since I was twelve and transferred to heavier models with at least one locking blade in my early twenties. At about that time, also, I inherited a couple of pistols from within the family and have since added a few more over the years, preferring .45s to lighter loads. It’s not that I’m much of a skilled fighter anymore—doing the martial arts more for exercise and coordination than for battle readiness—and I was never formally trained for knife fighting. I always practice with a new pistol at the range, so that I’m reasonably accurate with it. And I know how to clean, maintain, and store such a weapon safely. But I would not take any pistol outside the house on my daily travels without a concealed-carry permit, and those are difficult to the point of impossible to obtain in California.

The point of all this training and weaponry is not that I plan to attack anyone. But I will not be made a victim, powerless to defend myself, at least in my own mind. If I end up going into uncertain circumstances and being attacked and hurt or killed, I want it to be elective on my part—a failure of will or nerve, for which I will take responsibility—and not because I was foolish and unprepared.

In the larger picture, I try to examine and assess the moral or legal dimensions of my actions and consider their consequences. In my view, integrity depends on doing the right thing, guarding against waste and loss, and living in a way that protects yourself and your family and friends while not endangering or damaging other people. Integrity also means living on good terms with the people around you, your neighbors and fellow citizens, while making as few enemies as possible. Everyone gets a fair shake. Everyone receives as much as possible of what they want and need. Everyone goes home safe at the end of the day. I can’t take responsibility for the entire world, but I can try to ensure that the pool of karmic events surrounding my life remains calm and safe.

Taking responsibility for yourself and your actions is the core of being an adult in this society.

1. See SIPRE as a Way of Life from March 12, 2011. Briefly, SIPRE is the acronym for a defensive driving mechanism by which we see, evaluate, and take action against threats: See, Interpret, Predict, React, Execute. It’s in the React phase that an ingrained rule or reminder kicks in to direct the Execute phase.

2. The two-second rule describes the proper distance for following a vehicle. Pick an object along the side of the road—such as a signpost or even a piece of trash—and start counting when it passes the rear bumper of the vehicle ahead: “One chimpanzee, two chimpanzees …” If the object comes into alignment with my front tire before “two chimpanzees,” I’m following too close to the car ahead—although in California freeway traffic I’ll take one and a half chimpanzees as an acceptable margin. This rule has the beauty of simplicity, doesn’t require you to estimate car lengths, and works at any speed.
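For anyone who wants the arithmetic behind “works at any speed”: two seconds of travel time converts into a following distance that grows automatically with speed. A quick, rough calculation:

```python
# Rough arithmetic behind the two-second rule: the gap scales with speed,
# so no estimating of car lengths is needed.

FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def following_distance_feet(speed_mph: float, gap_seconds: float = 2.0) -> float:
    """Distance covered in gap_seconds at speed_mph, in feet."""
    return speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR * gap_seconds

for mph in (25, 50, 70):
    print(f"{mph} mph -> about {following_distance_feet(mph):.0f} feet of following distance")
# 25 mph -> about 73 feet; 50 mph -> about 147 feet; 70 mph -> about 205 feet
```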

3. Lane splitting is a contentious issue among riders. I see some motorcycle riders diving between lines of cars moving at 50 miles an hour rather than slowing down even a fraction and staying in their lane. In order to make splitting worthwhile, you have to be traveling 15 or 20 miles an hour faster than the cars in the lanes on either side. This means that, with the cars traveling 50 miles an hour, you have to maintain a speed of 65 to 70 while trying to look far enough ahead, judge distances and the gap widths between cars, and be extra alert for cars that are signaling or moving around in their lane—a sign the driver is not wedded to his or her choice—or lunging out of it. Since I ride a large bike equipped with saddle bags—meaning I take up even more of the gap width—I will only split if traffic is totally jammed and just creeping along. I’m gambling that it’s safer to be between cars and doing 15 miles an hour than fuddling along in the lane—clutch in–foot down–throttle off, then throttle on–foot up–clutch out, time and time again—and risking getting run over by a careless driver texting behind me. At 15 miles an hour, if someone makes a sudden lunge into the next lane, I might go down; I might be injured; the bike will probably be trashed. But I won’t be thrown under another car or truck going 60 or 80 miles an hour and killed.

4. I used to say that, after four years at Penn State, I graduated with an honors degree in English literature and a black belt in Isshinryu karate. However, I admit that part of my motivation for the latter was the craze for James Bond–style secret-agent movies at the time, which made stylized fighting seem cool to a nerd like me.

Sunday, July 10, 2016

Rules of Engagement

We recently saw the movie Krigen, or A War, about a Danish commander with his country’s forces in Afghanistan who, in the heat of battle, calls down a bombing run on a village compound that, after the fact, turns out to have contained civilians. The movie examines his situation, his motives, and the trial that follows under modern humanitarian rules for the pursuit of war. The crux of the matter—spoiler alert!—is whether the commander had “PID,” positive identification, of gunfire coming from the compound before he called in the strike.

It’s an interesting story, but it left me with an unsettled feeling: what is the sense of trying to make war humane?

We have restrictions in this country, too, on how to conduct our wars, called the “rules of engagement.” Basically, before firing upon or engaging a suspected enemy, a soldier or commander must generally establish that the target is indeed an enemy and has shown hostile intent. Other rules may also apply, depending on time and place. Presumably, we need these rules for three reasons.

First, U.S. forces are not fighting on our own ground in defense of our own country. We haven’t done this since the American Civil War. We have a big country and a strong military; so no one comes here to fight us. Our wars—at least since the two World Wars, and perhaps even then—have been wars of liberation: fighting on someone else’s territory to free them from a third-party aggressor. Korea, Vietnam, Grenada, Kuwait, Iraq, Afghanistan—and any skirmishes I might have left out—have all been on other ground, fighting for other people. In these situations, you have to be careful about who are the friendlies and who are the hostiles—especially since your enemies are usually just friendlies with a different point of view and with different political and strategic backing.

Second, because U.S. forces are not fighting directly in our country’s own national interest—such as beating off a foreign invader—these wars require some political finesse with the people back home. War costs money and, even with the best training and will in the world, will get the sons and daughters of U.S. civilians maimed and killed. So the country, the politicians, have to present the effort as a “just” and “humane” war, with plenty of high-ranking care and consideration, with proper caution about the expenditure of force, and with lots of civilian oversight and debate. And because we are a great power in the world, we also submit to various international conventions on the types of weapons to be used, how local civilians are to be treated, and what actions are allowed or disallowed.1

Our modern enemies, not being stupid, have noticed all this and use it to their advantage. They embed themselves in local villages and cities. They place their headquarters near, or within, schools and hospitals. They take civilian hostages against military reprisals. Their modes of attack are the car bomb, the suicide vest, and the improvised explosive device. Their targets are just as likely to be the recalcitrant or unconvinced civilian population around them as the foreign peacekeeping army they oppose. These practices have become so common that the idea of two armed groups wearing uniforms and engaging on a battlefield outside of town for possession of some strategic objective now seems so 19th century, gentlemanly, and … quaint.

The third reason for fighting a limited and “humane” war is that the major players behind these conflicts now have the capability of fighting a “total” war through first-strike nuclear holocaust. Why fight for this or that objective, why try to eliminate your enemy’s will to fight, when in one stroke you can eliminate your enemy, his countryside, his entire civilian population, and the civilization behind it? The only reason why not is that any enemy worth fighting—generally, until recently, the East vs. the West—possesses enough retaliatory capability to ensure mutual destruction.2 And so differences have come to be resolved through proxy players, regional puppet states, and limited, “humane” conflicts.

Back in the 1950s and ’60s, a common theme among science fiction stories and television programs was the attempt to find other means for conducting a war, now that nuclear weapons had made war so efficient as to make the outcome irrelevant. The notion was always that countries and civilizations would find less brutal ways of resolving their differences. They might hold an Olympic-style games to determine the superior culture and winner of the conflict. Or conduct computerized wargames that match and engage hypothetical forces in tests of strategy that do everything but consume men and matériel. Or play a championship game of chess or go—but perhaps, because accidents can happen and even geniuses sometimes make mistakes, involving three games out of five, or four out of seven.

This kind of alternative thinking is reminiscent of ancient armies that would come together in a designated spot but then, before clashing shield to shield, would send out their best fighters, their champions, to do single combat and perhaps resolve the battle without too much bloodshed. But always, outside the circle where the two champions met, would stand the entire army, ready to pick up weapons and charge if their champion lost the single combat.

The problem with any of these alternatives to war is that they are not serious, not binding. When you lose the pentathlon or the chess game, you can still send your army—or your missiles—over your opponent’s border. Worse, a tame form of war would encourage all kinds of reckless brinksmanship. Imagine a time when all conflicts were actually resolved by chess games. Imagine if a tiny state with a weak army—say, Thailand, in our current world—were to declare war on a much larger and more powerful country—say, China or the United States. When all you need to do is win a chess game, then you hire the best grand master you can find and cross your fingers. Hey, you might get lucky! And if you win, what do you get? Terms? Territory? Trade concessions? But if the spoils of such a toothless war are too onerous, the loser will simply repudiate them. And then what? You go to war in earnest—men and matériel fighting and dying for ground and a real chance to dictate the peace terms—either that, or you back down.

War is supposed to be difficult, dangerous, hard, and barbarous. That’s because war is the move of last resort, when a state, a country, a people are pushed into a corner from which they have no escape route, defending life, freedom, principles, and ground that they will not yield, cannot surrender, and without which they do not otherwise exist. When talks grind to a halt, negotiations break down, and the enemy’s demands are deemed unacceptable, usually then a people can still find another way. Perhaps they will ally with a stronger power, or prepare to bargain away lesser but still important goods, or cede territory outside the homeland that was actually in dispute from the beginning. But when the choice is existential—that is, fight or die—then a people will go to war. Not because they want to, but because they must.

And at that point, questions of whether they will fight a just or humane war, obey international conventions, and hold tribunals for commanders who win but with the wrong methods—all of that goes by the wayside. War is a serious business. The soldiers who are fighting almost always have no more choice about it than the civilians caught up in the fighting. And they will win by whatever means, using whatever weapons, and sacrificing whatever collateral assets, including women and children, may be necessary.

War is a terrible thing. And I believe we need to keep it terrible so that the urge to use it will remain beyond the reach of the average politician. It should be put on a special shelf, up high, and behind a thick pane of glass, to be used only in emergencies. And that is all the morality anyone can give to war: it must be so terrible that no special justification is necessary.

1. In this I’m reminded of the outraged Harvey Logan in Butch Cassidy and the Sundance Kid: “Rules! In a knife fight? No rules!”

2. And, you know, that works for me. Mutually assured destruction has kept the peace—or at least limited all the nuclear-endowed players to a cold war pursued only through brushfire engagements—for seventy years. This proves that while human beings can be barbaric, they are not entirely stupid.

Sunday, July 3, 2016

Locked In and Locked Out

One day recently I woke up realizing that I was now the person I would be for the rest of my life. This might seem like a trivial observation—well, of course, I’m me!—but as I move into the latter years of my seventh decade, the realization has a whole subset of corollary meanings.

We all are taught to believe in the possibility of personal change. And I still do. Last year, I started taking keyboard lessons.1 In the next few years I may follow my father’s example, from late in his own life, and take flying lessons—if I want to expend the time and money required for that expensive hobby. I’m still writing new books, reading voraciously, learning new things, and growing mentally. I still work out with my karate exercises, watch what I eat—mostly—and try to maintain my physical plant.

But with this early-morning realization came the understanding that, as a person, as a psyche slotted into a physical body and inhabiting a certain place in space and time, I am not going to change very much from here until my last morning and my last breath. My habits are pretty well fixed. So are my likes and dislikes. So are my core beliefs. So are my attention span, personal energy level, and degree of caring or not giving a damn about things and people in the world around me. While I might learn new things and acquire new skills, they are still going to fit into an established pattern, a defined worldview, and a set of psychological reflexes.

In part, this is an example of the Eighty-Twenty—or even, at this point in my life, the Ninety-Ten—Rule.2 The rule has various uses, but basically it states that 20% of your effort generally produces 80% of your result, while the remaining 80% of effort yields just 20% of the result. So, in business and marketing, 80% of your profits come from 20% of your customers; 80% of your customers take 20% of your staff time; and 80% of your complaints come from 20% of your customers—but generally not the same 20% as those who provide the profits.

In life, I take the rule to mean that, when you’re young, you need to spend only about 20% of your time, effort, and energy to have a lasting effect on your life and future prospects. The years ahead are full of possibilities; your success depends on a wide range of probabilities; and your luck is still accumulating. But when you are much older, you need to invest 80% of your time and effort to make any real change in your life, because the possibilities are now fewer, the probabilities a lot lower, and your luck mostly spent.

This cause-and-effect was shown to me most forcefully when I was at the university. A student in a four-year degree program will generally take eight semesters’ worth of classes—or twelve quarters, in the schools on that system—between matriculation and graduation. Theoretically, the contribution of each class grade, and the average of grades earned in each semester or quarter, has the same level of effect in establishing your overall grade point average (GPA) as any other class or semester. But in the first semester or quarter of your freshman year, you establish that GPA quickly—low, middling, or high—depending on the quality of your effort. In the second and third terms, you can move that GPA up or down fairly easily, depending on the effort you invest. But soon after that, and with each passing grading period, your average becomes more and more locked in, and movement becomes more sluggish. Until, finally, in the last term of your senior year, you can either work like hell or slack off entirely, and either way you won’t be able to budge your average by a hundredth of a grade point. Your GPA and your destiny are fixed, for good or ill.
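The arithmetic of that lock-in is easy to check. Here is a toy example with equal-weight terms (real schools weight by credit hours, but the shape of the curve is the same): a perfect final term moves a 2.8 average less and less the more terms are already banked.

```python
# Toy illustration of GPA lock-in: the more terms already averaged in,
# the less leverage any single new term has on the cumulative number.

def new_gpa(current_gpa: float, terms_completed: int, next_term_gpa: float) -> float:
    """Cumulative GPA after one more term, assuming all terms weigh equally."""
    return (current_gpa * terms_completed + next_term_gpa) / (terms_completed + 1)

# A 2.8 student turning in a perfect 4.0 term:
print(round(new_gpa(2.8, 1, 4.0), 2))  # after 1 term banked:  3.4  (a big jump)
print(round(new_gpa(2.8, 3, 4.0), 2))  # after 3 terms banked: 3.1
print(round(new_gpa(2.8, 7, 4.0), 2))  # after 7 terms banked: 2.95 (barely budged)
```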

How you feel about this slow and steady stabilizing of your personality and limiting of your prospects in a kind of time-hardened amber depends on how you feel about yourself and your life. This is a personal conversation you must have with your angels—or demons—when you reach late middle age.

If you have become the sort of person you aspired to be in your teens and twenties, you can take satisfaction in knowing that not much will be able to change you. In the time remaining to you, you might suffer financial or personal loss; you might lose your house and possessions to fire or flood; you will certainly lose friends and loved ones to the grim reaper; your body and mind both will slowly lose resiliency and elasticity. But you will still remain cheerful and optimistic—if that is your nature. You will still be able to make new friends or establish a new home, and you will mark the losses of body and mind with equanimity, just as you have met other adversities.

But even if you have become exactly the person you meant to be, that early-morning realization still represents a closing off of possibilities. If you have passed out of the job market, as so many of my age cohort have now done, then you know you can never go back at the level you once attained. If your retirement planning and savings fall short, and you need to go to work again, it will be at a different level, probably much lower, in a different field, with different people, and with foreshortened expectations of authority and prestige. You also know that alternative avenues, other possible lives and achievements, are no longer open to you. You won’t ever be President, or a rock star, an astronaut, a big-league ballplayer, or any of those dreams that must start when a person is young and can invest a lifetime’s worth of effort to achieve. You won’t even be able to become a mediocre physician, lawyer, therapist, or other professional—at least not without a total upheaval of your life and current situation, and probably not even then.

And if you are not the person you wanted to be, not in the place you wanted to inhabit, not in the personal relationship or professional situation you wanted to occupy at the age you have become, then that early-morning realization can come on as a sudden feeling of suffocation. Like trying to budge a 2.8 GPA into the high 3’s in your senior year, the opportunities, the possibilities, just aren’t there anymore. You are like a fully loaded airplane with neither enough remaining runway nor a strong enough headwind to take off. You are mathematically at a dead end. At this point in your life, all you can do is maintenance work. You can try to be nicer to people, make an effort to shed your bad habits, go out and try to meet new people and form new relationships. You can change some of the negatives about yourself that have accumulated over the years, but the chances of changing your basic situation are slim. Your time and your luck have both been spent.

As Bette Davis and Steve Forbes—and probably philosophers and pundits going back to the ancient Sumerian civilization—have observed, growing old is not for sissies. It’s even harder than that, because—as most of my friends have agreed—while you may be aging outwardly, inwardly you still feel about thirty years old, or whatever age was your floruit, your green years of flowering expectations, your good times. And that imagined age will last for decades after the flowering has passed. I suspect you feel thirty or however many years old right up until physical and mental infirmity so distort and limit your life that you bitterly regret the things you once could do that are now impossible.

Time is a bitch. Old age is hardly a blessing. But still, as they say, it’s better than the alternative.

1. I was inspired to do this by a story told at one of the management meetings at the biotech company where I worked. The story related to new cellular regeneration—that is, “stem cell”—technologies and how people will be living longer in the years to come. A woman had turned one hundred years old and was asked by a bright young reporter if she had any regrets. “Yes,” said the woman. “I regret I didn’t start taking violin lessons when I was sixty, because by now I would have been playing for forty years.” Think about that for a moment. You or I might die tomorrow. But if we did live another forty years, what are the skills we might master and the things we might achieve?

2. Also known as the Pareto Principle, after Italian economist Vilfredo Pareto.