Sunday, November 27, 2016

Web of Character and Depth of Detail

There are traps in a major artistic endeavor like writing a novel. Similar traps exist, I imagine, in painting a large picture or mural, or composing a major symphony, but writing stories is what I do and what I know best.

The novelist has many threads to coordinate, especially in multiple-character or “ensemble” stories, such as I like to write. The author must weave together the personal relations among the various characters; the temporal relations among their actions, including initiating choices, reactions, and consequences; and the congruence of the characters’ actions with their established personalities and motivations. All of these, like the highlights and shadows in a painting or the contributions of each instrumental section to a score, must maintain the overall balance, tone, and proportion of the work.

To make a good story, the main characters must not be too passive, just letting things happen to them and then reacting according to their natures. This may be the way many people in real life function, but it makes for a poor figure in a story. But neither can the characters be too dynamic and all-encompassing. It’s fine for fantasies, comic books, and pagan religions to treat gods and superheroes as superlative beings who can be daunted but never defeated, but you wouldn’t want to meet such a person in real life, and you couldn’t identify with such a character in a serious, modern story.

The draft of the first half of my sequel to The Children of Possibility, which is tentatively titled The House at the Crossroads, has two main groups of characters working against each other. One group, the Troupe des Jongleurs from the original novel, has been fairly easy to portray and align, because they are dedicated in their mission, are naturally aggressive, and come to the page fully weaponized. But the second group, the young people whom “the Builders” send back into history to establish and operate the original Crossroads House, have been harder for me. They are scheduled to embark on a mission that abruptly changes because of the Jongleurs’ actions, and the terms of their commitment suddenly become much harder. As originally conceived, these young people were restless and bored, Europeans making life choices in a stale and static job market, and going back in time to become innkeepers at a temporal waystation simply looked like more fun than joining the reserve army. But my outline and my draft had now placed them in a situation where they were forced to abandon their normal lives and undertake what was essentially a suicide mission.

When I sent this first half of the book to a good friend and fellow novelist, who is one of my regular beta readers, he rejected their situation immediately. He doesn’t believe in casually accepting suicide missions or in characters so passive that they will agree to a change in the original deal without convincing rewards or dire compulsions. He pointed out that having given their word and signed a binding contract is not a credible motive for this couple to walk into a buzz saw. And if the Builders pushed them forcibly through the time portal to complete a hopeless mission in a primitive ancient time, most people would disregard their instructions and, instead of lying low to avoid temporal paradoxes, would go full Connecticut Yankee and try to change history to their own liking and for the sake of their own survival.

My bad. This is also perhaps the greatest failing in my storytelling. Personally, I believe that most people are honorable, accept their fate, and stick to their commitments. I believe they must be yanked out of their comfortable chairs in order to send them on an adventure, like Bilbo Baggins in The Hobbit. I’m not emotionally in tune with the sort of people who wake up every day searching for action and spoiling for a fight, like Louis Wu in Larry Niven’s Ringworld stories or Kimball Kinnison in E. E. “Doc” Smith’s Lensman series. So my characters often have small dreams amid placid lives until something or someone collides with them sideways, and then they are forced to cope, to demonstrate their resourcefulness, and perhaps to fight for their lives. It’s not a bad approach to storytelling, but it can lead to traps like the one I fell into with House.

My novelist friend thought the fix would be a simple change in attitude, leading off with a few scenes of derring-do for the young couple, and then producing some kind of golden promise from the Builders, who are sending them back on the doomed mission, so that the couple is emboldened, empowered, or coerced into going willingly. My friend was confident that my subconscious1 would easily figure out the necessary incentives. What I faced, however, was one of those “can God conceive of a stone too heavy for Him to lift?” puzzles.2 What incentive can you give daring and aggressive people to go back in time and then patiently wait for an outcome beyond their natural lifespans, meanwhile enduring hardships and eventual ignominious death, without them wanting to—even resolving to—change things?

Sometimes books just go wrong like this. Every novelist has a drawer or a hard disk full of half-baked stories and partial outlines that have struck a motivational or character-improvisational rock and foundered. Sure, the subconscious will figure it out … one day. In the meantime, why not turn to something else with a clearer path and story line? My novelist friend didn’t intend for me to stop telling the House story, because he found it interesting and compelling. And I think he tried to make the disjunction and its possible fix seem a lot smaller and less of a problem than it was.

The other difficulty with this conundrum—especially when the novel has already gone beyond the outline stage into an actual, 50,000-word, partial draft—is that to build up a credible story in the author’s mind, he or she must first give it enough complexity, memorable imagery, and substantiating details to make it come alive in the imagination. As a novel comes together, the telling acquires a depth of detail—layers of moss (for forest imagery) or barnacles (sea imagery)—and the characters acquire their own tastes, quirks, mannerisms, and speech patterns that make it difficult to change or even deflect their sense of self and the story’s direction. All of these details, swirling in the author’s brain and playing peekaboo with the subconscious, are a prerequisite to finally sitting down at the keyboard and telling the story in the reader’s real-time version.3

To change my characters’ intentions and reactions and to discover a reward or compulsion that would make them act against their motivations would mean ripping all this up and starting over. So, momentarily—actually, for about a day and a half—I noodled this unsolvable problem. Then I remembered the novelist’s salvation: the infinite malleability of character, space, time, and story line. If you can’t fix the problem, cheat.4

So that’s what I did. I found the one detail in all of my planning and thinking that had created the hang-up and turned the workable proposition into a suicide mission. And then the clouds parted and beams of sunlight shone down. I had a way forward. I will still have to scrap, envision, outline, and rewrite maybe three or four chapters out of the first fourteen; make some substantive changes to another two or three chapters; and then comb through and make minor deflections throughout the text, including that one hung-up detail. But this work is all doable. Moreover, it will make for a better story with more challenges for the characters to resolve with a hopeful spirit.

Still, this work of changing the story arc, adjusting character expectations and reactions, and revising a cascading series of incidents—all of this is no small matter in a fully developed draft. It is like trying to straighten the Bent Pyramid without taking it apart stone by stone. The author is moving heavy blocks of text in his mind, hearing them grating across the uneven surfaces of underlying stones, and perhaps seeing them grind away details of the story. It may be necessary work, but it takes time, and the experience is … fretful.

This is part of a writer’s working life: solving one problem after another until you can put in place the last dab of paint or the closing bars of the melody.

1. See Working With the Subconscious from September 30, 2012.

2. I had already faced that challenge with the first draft of my first published novel, The Doomsday Effect. It involved a planetary catastrophe with a micro black hole that was devouring the Earth from the inside, and no one could capture and contain it, so humanity was forced to build interstellar ships and flee. Fortunately, a good agent and a good editor made me see that I really had to find a way to solve the overarching problem—but that’s another story.

3. “Reader’s real time” is my shorthand for the ground-level walkthrough of the story. This is the reality that the reader will experience upon meeting the words on the printed or electronic page.

4. “Change the conditions of the test,” in the words of Captain James T. Kirk—said with a wry smile.

Sunday, November 20, 2016

At the Edge of Science

Be warned, this is a rant.1 This is where Crazy Old Uncle Thomas gnashes his dentures, pounds his cane on the floor, and screams things you probably don’t want the children to hear. But I’m going to say it anyway.

First of all, let me say that I love science and technology. Although I never formally majored in any scientific discipline, I am the son of a mechanical engineer, took the basic science and math courses in high school and college, and have worked alongside and reported on the activities of scientists and engineers for most of my professional life. I currently subscribe to a number of science magazines2 and, while I don’t necessarily read every article, I make a point of studying the contents, reading the summaries and articles that interest me, and skimming the rest. I believe the enterprise of science, which humanity has been pursuing diligently since about the 17th century, has made human life immeasurably better in terms of our understanding of the universe, this planet, and ourselves. We have vastly improved our practice of information management, communications, transportation, medicine, and everyday convenience over earlier times. So I’m a fan.

But that doesn’t mean I am a “true believer” in anything and everything. And I’m not an unobservant fool. In the past, oh, twenty years or so, I have noticed a disturbing trend at the leading edge of scientific inquiry that seems almost “postmodern” in its approach. We appear to be in the hands of scientists who have gone over to some kind of scientific fantasy, which replaces observation and fact-based analysis with imagination and mathematical illusion. Here are three examples.

Black Holes

Black holes are predicted by Einstein’s Theory of General Relativity. If you concentrate enough matter in a small enough space—say, by collapsing a massive star in on itself—that mass bends spacetime so much that not even light can travel fast enough to climb out of the gravity well. We have identified stellar objects, such as Cygnus X-1, that appear to have properties consistent with concentrating the masses of tens of suns into a space where no star can be detected. We also have observed effects at the center of our own and other galaxies suggesting that they concentrate the masses of billions of suns in what appears to be empty space.3
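
For a sense of scale, the boundary of such an object is its Schwarzschild radius, r = 2GM/c². Here is a minimal back-of-the-envelope sketch in Python; the two example masses are my own illustrative stand-ins for “tens of suns” and “billions of suns”:

    # Schwarzschild radius: the boundary inside which light cannot escape.
    # The example masses below are illustrative assumptions.
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # one solar mass, kg

    def schwarzschild_radius(mass_kg):
        """Return the event-horizon radius in meters for a given mass."""
        return 2 * G * mass_kg / C**2

    for label, suns in [("stellar hole, 15 suns", 15),
                        ("galactic core, 4 billion suns", 4e9)]:
        r_km = schwarzschild_radius(suns * M_SUN) / 1000
        print(f"{label}: about {r_km:,.0f} km")

A fifteen-sun collapsar comes out to a sphere some forty-odd kilometers across, which is why no telescope will ever resolve one directly.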

Well and good. Something strange is going on, and it would seem to fit with our present and most accepted theory of how time, space, and gravity work. But I have begun to see in the literature suggestions that black holes are not just bottomless garbage bins from which nothing—not even the fastest object in our universe, the photon comprising light and other electromagnetic effects—can escape. Black holes are now supposedly able to give up energy and radiation, such as when the small ones “evaporate” in Stephen Hawking’s theory of simultaneously appearing and disappearing particle/antiparticle pairs. And lately it has been suggested that matter and information can actually come out of a black hole: supposedly, the information is turned into a two-dimensional hologram that continues to exist on the outer surface of the event horizon and can theoretically be retrieved.4

So black holes don’t really have to be black at all. Doesn’t this smack of “I have a novel idea and I can generate the math to prove it”? A black hole is, after all, a theoretically constructed object for which our observations and analyses are frustratingly distant and indirect. That is, it is less imaginary than a unicorn but also less real, from the standpoint of hands-on study, than a horse. So scientists are now embroidering the edges of a theoretical tapestry. This is not necessarily advancing our understanding of what the universe, in all its strangeness, actually is.

Quantum Entanglement

While General Relativity deals with galaxies and stellar-sized masses, quantum mechanics is concerned with particles and forces too small to see with the naked eye—and most of them too small to observe or directly detect using any instrument at all. With its Standard Model, quantum mechanics has generated a menagerie of subatomic particles and their associated fields—that is, forces spread over the surrounding area as a theoretical stand-in for the physical particle and its effects. Most of these particles are in the lower range of size where, if you can detect it at all, you also deflect it. That is, you can know where the particle is, or where it’s going, but not both at the same time: Heisenberg’s famous uncertainty principle.

Most of the particles smaller than the protons, neutrons, electrons, and photons that we’re all familiar with from high-school chemistry have been found in high-energy colliders. These take two beams of common particles traveling at near-light speeds in vacuum and run them together head-on at higher and higher energies. The resulting train wreck gives off fragments traveling at speeds and energies that can be mathematically interpreted as having a given mass. By conducting the experiment over and over and comparing the results—usually in the form of flying pieces which quickly disintegrate into ever smaller pieces—physicists can identify new particles. So far, everything they’ve discovered either fits into, or expands, the Standard Model’s pattern of masses, spins, interactions, and symmetries that includes the elementary particles: the leptons such as electrons, positrons, and neutrinos; the bosons such as the photons, gluons, and the Higgs boson (plus the still-hypothetical graviton); and the quarks—in their varieties of “up,” “down,” “charm,” “strange,” “top,” and “bottom”—that make up larger things, the hadrons, such as protons and neutrons. It was by smashing beams together, over and over again, that physicists at CERN’s Large Hadron Collider discovered the disintegration trail of the Higgs boson in 2012.

All well and good. But now quantum mechanics is predicting that some of these particles can become “entangled” over unusually large distances. That is, two electrons or quarks or even large molecules may be separated by distances so great that light or gravity effects would take a measurable amount of time to travel between them, but they can still interact instantaneously. The position, momentum, spin, polarization, or some other characteristic of one in the pair is instantly affected by a change in the corresponding characteristic of the other. This would seem to violate the basic principle in relativity that nothing—not information, not energy, not influence, not gravity effects—can move across the universe faster than the speed of light. If the Sun were to suddenly vanish from our system—poof!—it would still take eight minutes for our view of the Sun from Earth to wink out and for our planet to give up its angular momentum and start heading out into interstellar space in a straight line.
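
That eight-minute figure is just the light travel time across Earth’s mean orbital distance, which is easy to check:

    AU = 1.496e11   # mean Earth-Sun distance, m
    C = 2.998e8     # speed of light, m/s

    delay = AU / C
    print(f"{delay:.0f} seconds, or {delay / 60:.1f} minutes")   # ~499 s, ~8.3 min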

Unless, of course, some particles in the Sun and their correspondents on Earth—no saying which ones, of course—were quantumly entangled, and then we would know of the disaster instantly by observing the corresponding particle here on Earth. So the physicists with this bright idea and the math to prove it have found a way to overcome the traditional prohibition on instantaneous action at a distance. Like wormholes and subspace radios—both of which can supposedly shortcut the vast distances of interstellar space—all of this seems a bit wishful and fanciful.

Catastrophic Global Warming

Okay, here’s where Uncle Tom goes nuts. Of course, climate changes. Any decent appreciation of astronomy, geology, evolution, and the other hard sciences confirms that we live under a variable star on a changeable planet. Eleven thousand years ago—when members of H. sapiens had fully attained our current level of mental and physical capabilities—we came out of an ice age that covered most of Eurasia and North America with ice sheets a mile thick and drew the ocean levels down by about four hundred feet to the edges of the continental shelf. In recorded history we have the Norse traveling to “Vinland” in North America a thousand years ago and finding grapevines in Newfoundland, suggesting that there really was a “Medieval Warm Period.” We also have historical observations from the middle of the last millennium suggesting that humankind experienced a “Little Ice Age,” with much colder climate and “frost fairs” held on European rivers that had frozen over, where now they run freely all year round.

We have been tracking sunspot cycles since Galileo first reported seeing spots on the Sun with his new telescope in 1610. Then, from about 1645 to 1715, the Sun went into a quiet period called the “Maunder minimum,” named for the scientist who first described it.5 Since sunspots increase the star’s release of energy, the number of spots at any given time affects the amount of energy arriving on Earth. From observations over the past four hundred years or so, we have detected within the eleven-year sunspot cycle a larger, four-hundred-year cycle of rising and falling eleven-year peaks. Our last three solar cycles were unusually large in terms of this greater cycle, heading toward a four-hundred-year maximum, while our current cycle that’s just ending, identified as Cycle 24, generated only about half as many sunspots as those previous peaks. Whether we’re heading toward another Maunder minimum or just seeing a freak aberration in this one cycle is not yet apparent. But the 17th century minimum—and the presumed period of declining spots leading up to it—would seem to correspond to the Little Ice Age, and the recent peaks we’ve experienced would seem to correspond to our recent Industrial Age warming spell.
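
To picture a short cycle riding inside a longer one, here is a toy model in Python. The phases and shapes are arbitrary assumptions chosen for illustration, not a fit to the observed sunspot record:

    import math

    # An 11-year activity cycle whose peak heights rise and fall
    # on a ~400-year envelope. Purely illustrative.
    def relative_activity(year):
        envelope = 0.5 * (1 + math.cos(2 * math.pi * (year - 1960) / 400))
        cycle = 0.5 * (1 + math.cos(2 * math.pi * (year - 1960) / 11))
        return envelope * cycle   # 0 = spotless, 1 = strongest peak

    for year in (1680, 1780, 1880, 1960, 2015):
        print(year, round(relative_activity(year), 2))

With this phase choice, the years around 1680 come out nearly spotless and the mid-20th-century cycles come out strongest, which is the general shape of the record described above.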

In 1987, I attended Energy Daily’s annual conference in Washington, DC, which discussed issues related to energy production and use. One of the speakers was James Hansen, then head of the NASA Goddard Institute for Space Studies, who presented on the role of carbon dioxide from our energy and transportation industries in increasing global temperatures. One of the points he made was that rising temperatures would not mean that everywhere on the planet would become uniformly and increasingly hotter, but instead some places would get hotter, and others colder, as fluctuations in the climate’s response worked themselves out. But this does kind of leave exact measurement of the system and the extent of the damage open to question, doesn’t it? Another of James Hansen’s points that I remember vividly was that “the man in the street” would be able to see these temperature changes for himself by “the middle of the next decade”—meaning the mid-1990s. Well, I’ve been living in the San Francisco Bay Area for almost half a century now, and my sense from “the street” is that some years are colder and some warmer; some have more rain and some less; the fog still rolls in each summer, making May and September our hottest months; and we still tend to turn the wall heaters on from December to February. If there’s been an obvious change in our weather patterns, indicating a change in climate, I have yet to see it.

In support of global warming or climate change—and the call of climate scientists to make urgent and drastic changes in our energy production and use—Michael Mann of my alma mater, Penn State, produced the “hockey stick” graph. He used recorded temperature observations for as long as we’ve been taking them—and NASA keeps “adjusting” the raw data of these observations downward for the early to mid 20th century—and from the time before that he measured variations in tree rings—which I always understood responded to changes in ambient moisture rather than temperature. His graph shows the period from about 1000 AD up to current times, but curiously it smooths out the fluctuations of the Medieval Warm Period and Little Ice Age. On his graph, temperatures bump along in neutral for a thousand years until the last hundred years or so, when they start taking off.

Since we cannot study climate as a complete system—hell, we can’t even predict the weather much farther out than next week—and since we can’t experiment with effects that encompass land, sea, and sky all at once, climate scientists instead create models of what they think is going on. Models are mathematical structures that assign variables to different effects like incident sunlight, factors governing land and water absorption and re-radiation of the infrared waves, and atmospheric conditions that govern absorption of the outgoing radiation—the “greenhouse effect.” Carbon dioxide is a weak greenhouse gas, not as good at blocking that re-radiation of heat into space as are, say, water vapor or methane. The climate scientists’ models that predict dire effects in the next century all rely on a positive feedback loop, what they call a “forcing,” in which the carbon dioxide that’s been added to the atmosphere increases the amount of water vapor—and that achieves the predicted greenhouse effect and rising temperatures.
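
The simplest member of this model family is a zero-dimensional energy balance: averaged sunlight in, infrared radiation out, with a single factor for how much of the outgoing radiation the atmosphere traps. A minimal sketch, using common textbook approximations rather than any published model’s values:

    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0       # incoming solar radiation at Earth, W/m^2
    ALBEDO = 0.3      # fraction of sunlight reflected straight back

    def surface_temp(greenhouse):
        """Equilibrium surface temperature (K) for a one-layer
        atmosphere trapping the given fraction of outgoing infrared."""
        absorbed = S0 * (1 - ALBEDO) / 4        # averaged over the sphere
        return (absorbed / (SIGMA * (1 - greenhouse / 2))) ** 0.25

    print(round(surface_temp(0.0)))    # ~255 K: no greenhouse, frozen Earth
    print(round(surface_temp(0.78)))   # ~288 K: roughly today's average

The full-scale models differ from this toy mainly in how many such factors they track and in how those factors are assumed to feed back on one another.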

This whole scenario seems problematic to my mind for four reasons. First, models are not testable science. They fall into the realm of “I have a good idea and I can generate the math to prove it.” Since climate involves too many influences and variables to predict accurately, the model makers are forced to choose which ones they will study and which they will ignore or hold to a constant value. Second, if your model depends entirely on positive feedbacks, you’re missing something. Feedbacks are generally both positive and negative; for example, more water vapor might mean more greenhouse gas blocking re-radiation from land and sea, but it might also mean more clouds, which block the incident radiation and so result in cooling temperatures. (The toy loop sketched below illustrates the difference.) Third, all of these models appear to be anticyclical. That is, they assume straight-line effects that continuously build and reinforce each other. Once the carbon-dioxide influence takes off, it is predicted to continue upward forever. But everything we’ve seen about Earth science involves cycles of rising and falling effects—temperatures, rainfall, storms, ice. More carbon dioxide should eventually force an increase in other factors, like promoting an increase in green plants, which would then absorb that excess carbon. You might adjust the set point somewhat, but no effect goes on forever. Fourth and finally, the observed temperature rises seemed to slow down in the early 21st century, and none of the climate models could account for that—nor indeed for variations observed earlier in the 20th century.
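
On that second point, a toy loop shows why the sign of the feedback matters. The gain values here are arbitrary assumptions, chosen only to illustrate the behavior:

    # With only positive feedback the anomaly runs away; add a
    # negative term and it settles at a new, higher equilibrium.
    def anomaly(positive_gain, negative_gain, forcing=1.0, steps=60):
        t = 0.0   # temperature anomaly, arbitrary units
        for _ in range(steps):
            t = forcing + (positive_gain - negative_gain) * t
        return t

    print(anomaly(1.2, 0.0))   # net gain 1.2: grows without bound
    print(anomaly(1.2, 0.5))   # net gain 0.7: converges near 3.33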

I do not deny that climate does change. I do not doubt that human activity has some effect on the changes. But I doubt that the effects will be as uniformly catastrophic as the models predict. And even if they are, human beings are geniuses at adapting to change. We lived through the Little Ice Age with far less understanding and technological capability than we have today. We’ve expanded our reach over the whole globe—except for Antarctica, where there’s nothing much we need or can live on—and we are now going into space, which is the most hostile climate of all. I think we can move uphill a bit as the sea levels rise over the next hundred years, and we can adapt our buildings, our agriculture, and our lifestyles to an overall increase of a couple of degrees. Besides, as our technology keeps developing and changing, we are bound to see new energy production and usage patterns arise and sweep across the economy faster than a government mandate could ever achieve. Look what smartphones have done to telephone landlines and the recording industry in less than a decade. The pace of technological change and its acceptance will only increase.

Astronomy, physics, and the geosciences have achieved much for humanity, and I have no doubt they will achieve even more in years to come. But that does not mean that every scientist with a nimble imagination and a penchant for writing equations and mathematical models should be granted the mantle of impeccable truth. Human life on Earth is not going to change much, no matter what astronomers predict about black holes, or quantum physicists predict about subatomic particles and their entanglement. And we’re not going to dismantle our modern energy production and use patterns just to head off a rise in temperature of a couple of degrees a hundred years from now.

Here ends the rant. Uncle Tom is now back in his chair, mumbling quietly to himself.

1. For the origins of this rant, you might want to read, among others, Fun with Numbers (I) and (II) from September 19 and 26, 2010, and Fun with (Negative) Numbers from November 3, 2013.

2. Chief among them Science, Nature, Scientific American, and Astronomy.

3. I made a personal study of black holes in preparing to write my first published novel, The Doomsday Effect, from 1986.

4. See, for example, “Stephen Hawking has found a way to escape black holes” from Wired, August 25, 2015.

5. I also made a personal study of the Sun and its cycle of spots to write the novel Flare with Roger Zelazny, published in 1992.

Sunday, November 13, 2016

Excess Spirit

In a recent post1 I considered the ways that two systems, a human being and a robot, would approach the task of hitting a baseball. At the most basic level, both would observe the pitcher’s release and the flight of the ball and then apply either a learned response or an algorithm to interpret the ball’s actual trajectory and select the ideal swing. The difference is that the robot would wait patiently to perform this task, while the human being—with so much else going on in his or her body and mind—would fidget, glance around, take practice swings, and remain physically and mentally ready for so much more to happen than simply meeting the oncoming ball with the barrel of the bat.
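
The robot’s half of that task is straightforward ballistics. A minimal sketch, assuming a drag-free trajectory and invented numbers for the pitch:

    G = 9.81   # gravity, m/s^2

    def height_at_plate(release_height, vertical_speed, flight_time):
        """Ball height (m) after flight_time seconds of free flight;
        drag and spin are ignored for simplicity."""
        return release_height + vertical_speed * flight_time - 0.5 * G * flight_time**2

    flight_time = 18.4 / 40.0    # ~18.4 m to the plate at 40 m/s
    print(f"{height_at_plate(1.8, 1.0, flight_time):.2f} m")   # ~1.22 m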

Having just observed the major league playoffs and the World Series, with their ups and downs,2 I could see another difference between humans and machines—or the artificial intelligence that will run them. Humans have an excess of spirit that no analytical intelligence has yet attained. We express this spirit in terms of expectations, beliefs, hopes and fears, confidence and insecurity—all of which take known or discoverable facts into account and yet sometimes cause us to think and believe otherwise.

This comes up most strongly in differences between the commentary from the announcers and the action on the field. The men and women in the broadcast booth today have instant access to a fantastic computer memory. They not only know and can tell you which teams have met before and what the outcomes were. No, that’s just the sort of statistic an old-time radio announcer could look up in a sports almanac. Today’s broadcaster can tell you how many times and when each batter has faced each pitcher, how many balls and strikes the pitcher has thrown against him, and how many hits for how many bases, or runs batted in, or home runs the batter has made. And these statistics go back for years and across the player’s affiliation with every team in his career. If a batter makes an unusual home run—or an outfielder makes an unusual diving catch—the announcer can find a similar instance from play earlier in the season, or even from years ago, and run a video clip of it before the next player comes to bat.
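
Under the hood, that booth database is essentially a keyed lookup table. A minimal sketch, with invented players and numbers:

    matchups = {
        # (batter, pitcher): (at_bats, hits, home_runs)
        ("Batter A", "Pitcher X"): (24, 7, 2),
        ("Batter B", "Pitcher X"): (10, 1, 0),
    }

    def average(batter, pitcher):
        at_bats, hits, _ = matchups.get((batter, pitcher), (0, 0, 0))
        return hits / at_bats if at_bats else None

    print(average("Batter A", "Pitcher X"))   # 0.2917 -- "he hits .292 against him"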

All of this reminds me of Han Solo in the Star Wars movies: “Never tell me the odds!” The past is only prelude. And, as the financial disclaimers say, “Past performance is not a predictor of future results.” Insurance actuaries, baseball announcers, and robots might live and die by statistical nuance. Human beings almost never do. “I can win this one!” “I can make that jump!” “I can beat that guy!” “This time will be different!” This is the spirit that the human mind—at least in its healthy state—and the instinct for survival generate when faced with daunting and difficult situations and long odds.

I imagine that, to achieve something like this with an artificial brain, the designers would have to insert a counterfactual circuit that kicks in whenever the algorithm produces negative or undesirable outcomes. Such a circuit would amend or ignore previous experience, or accentuate only certain aspects of that experience that would tend to support a positive outcome. “Yes, eight times out of ten I have struck out against this pitcher, but twice I got a hit—and one of them was a home run.” It would not do to change the performance algorithm itself, because then all sorts of unexpected actions might result, and the system might never find its way back into equilibrium. No, the adjustment would come in the decision-making process: to go ahead and try when the algorithm and previous experience predict a negative outcome.
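
As a sketch of how such a circuit might work, with everything here (the names, the optimism weight, the decision logic) my own assumption: the performance model is left alone, and the remembered outcomes are re-weighted only at the go/no-go decision.

    def plain_estimate(history):
        """The unmodified algorithm: fraction of past attempts that succeeded."""
        return sum(history) / len(history)

    def hopeful_estimate(history, optimism=3.0):
        """The counterfactual circuit: accentuate the successes."""
        wins = sum(history) * optimism
        losses = len(history) - sum(history)
        return wins / (wins + losses)

    history = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # two hits in ten tries
    print(plain_estimate(history))     # 0.2  -- the odds say don't bother
    print(hopeful_estimate(history))   # ~0.43 -- worth stepping up to swing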

Computer programmers would be loath to design and install such a circuit. Right now, artificial intelligences are designed for maximum reliability and caution. You want the program that routes your request through the bowels of Amazon.com’s order system to read the tag, make the selection, send the bill, and ship the product. If the product is out of stock, on back order, or no longer available, you don’t want the computer system to engage some kind of I-Can-Do-This! circuit and make an unauthorized substitution. The system is supposed to flag anomalies and put them aside for decision either by a human being or a higher-level system that will query the customer for a preferred choice.
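
In code, the cautious behavior is the boring branch: fulfill or escalate, never improvise. A minimal sketch, with invented names throughout:

    def process_order(item, stock):
        if stock.get(item, 0) > 0:
            stock[item] -= 1
            return ("ship", item)
        # No I-Can-Do-This! substitution: set the anomaly aside for a
        # human or a higher-level system to resolve with the customer.
        return ("escalate", f"{item} unavailable; query customer")

    stock = {"widget": 3}
    print(process_order("widget", stock))   # ('ship', 'widget')
    print(process_order("gadget", stock))   # ('escalate', ...)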

You don’t want the expert system that is reading your blood tests and biometrics, consulting its database of symptoms linked to causes and disease types, and making a diagnosis to suddenly engage an It’s-All-For-the-Best! circuit and opt for diagnosing a rare but essentially benign condition when the patient is staring a fully developed, stage 4, metastasized cancer in the face. If there is hope to offer, you want the expert system to display and rank all the possibilities, then let a human doctor or a higher-level system explain their meanings and the correct odds to the patient.

You don’t want a self-driving car to look at a gap in traffic that’s just millimeters wider than the car’s fenders and, ignoring deceleration rates, cross winds, and tire traction, switch to the We-Can-Make-This! circuit and lunge for the gap. Not ever—and not even as a possible option that the system would present to the human driver, who might suddenly want to put his or her hands on the wheel and make a wild and death-defying correction. When a ton or two of moving metal is involved, and multiple lives are at stake, you always want the system to err on the side of caution and safety.

Perhaps human beings, when left to operate the order system, make the expert diagnosis, or take the steering wheel, will put hope before either experience or caution and then select the substitute product, offer the most cheerful guidance, or lunge for the gap. But human society has also instituted programs of training and ethics to temper an excess of spirit. We expect human professionals to react more like machines: rules based, odds driven, and cautious. And we expected that of ourselves long before anyone thought of turning complex operations and decisions over to mechanical systems.

But that is in dealings with other human beings, who put their trust in another person’s performance accuracy and decision power to achieve outcomes of life-and-death or even mere customer satisfaction. When dealing for our own sakes—when confronting the possibility of receiving a surprise package, or beating a cancer diagnosis, or squeezing into a narrow gap—we feel at liberty to err on the side of hope.

And we certainly expect our team, our players, and ourselves to express that excess spirit and make a gallant try when life and safety are not on the line. In a baseball game, the batter might know the odds of hitting against a tough pitcher, but who would expect him to pause, reflect on past performance, step out of the box, and refuse to even try? One team might have lost to the other a dozen times in the past, but no one expects them to give up and forfeit. Spirit, hope, and confidence in the face of long odds are what make the rest of us cheer harder when the batter makes a home run or our team wins against the moneyline bet. They let us forgive more easily when the past does indeed turn out to be a predictor of performance.

And when our own life and safety are on the line—when you must jump from the third floor or stay on the ledge and burn, when the gap between two trucks colliding ahead of you is no wider than your fenders, when the doctor pronounces a disease that has every chance of taking your life—then the excess of spirit, the can-do attitude, the refusal to follow the odds are survival traits. When death is likely but not certain, then it’s best to err on the side of hope and take action. We make up stories about this, and in every story the reader wants the hero to strive against the odds. He or she may not succeed—the actual outcome is left to fate and the author’s skilled hand. But for the hero to face reality and give up before the crisis point would not make a good story. Or it would be the story of a depressed or insecure person who is no sort of hero, no role model, who doesn’t deserve to be the focus of a story in the first place.

Excess of spirit is not just an oddity that we find in the human psyche; it’s something we expect from any healthy person.

1. See Excess Energy from July 24, 2016.

2. Yes, and my hometown Giants went down in the fourth game of the National League Division Series, when the bullpen collapsed in the top of the ninth inning. And we had such hopes.

Sunday, November 6, 2016

All Men Created Equal

In the movie Lincoln with Daniel Day-Lewis, a sequence depicts various politicians wrestling with emancipation and the question of whether the black population is “equal” to the white citizenry, or merely “equal before the law.” Even one politician who secretly lives with a black woman can concede only the latter proposition, not the former. At the time I saw the movie this whole question left me stumped, and I still consider it a ding-dong situation—meaning the question itself does not apply.

Let’s start with the obvious case. No free person in the mid-19th century would consider a formerly enslaved population that was newly emancipated to be his or her intellectual, moral, or social equals. The free person has lived without overt coercion, without the fear of death and maiming for the slightest disobedience, with the opportunity to live as he or she wants—within reason and restricted only by social norms—and been permitted to obtain as much education as he or she desires. The enslaved person has been denied freedom, subjected to constant coercion, and forbidden an education. It is through the exercise of personal freedom, the use of one’s own reason, and the attainments of education that a person distinguishes him- or herself and finds his or her place in a society of equals. In 1863, the enslaved black population had none of this, and so could not be considered anyone’s social equals.

But this is not the core of my objection. The proposition that one person and another can be true equals in any intellectual, moral, or social sense—and here, by “social,” I mean in terms of obligations tendered and respect offered—is inane. No two people are exactly equal in any measure, not two persons of a similar race and background, not two persons of the same sex, not even two brothers or sisters. One person is always going to be smarter, more clever, or better educated. One is always going to be better natured or morally stronger. One is always going to be better liked, more respected, or due more personal consideration for achievements attained and good works performed. This is part of the human condition, in the same way that one person is always going to be taller, weigh more, or have a longer reach than the other. People come in all physical sizes, bodily shapes, moral characters, mental capacities, learned experiences, and educational developments. To try to make one person or population equal to the other—or to make yourself believe such a proposition—is a fool’s errand.1

But doesn’t it say in one of our founding documents, the Declaration of Independence, “that all men are created equal”? Wasn’t this a core belief—a “self-evident truth”—of the time? Didn’t people originally believe that all human beings could be compared and found to be no different, one from the other?

Well, not exactly. The author, Thomas Jefferson, was no fool. The quotation has to be read in the context of the second and third clauses: “that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”2 That is, they are equal—not in intellect, character, social standing, or physical attributes—but equal in the sight of their god, in their possession of certain rights, and so equal under the law.

This is why the question among the politicians in the movie so bothers me. How can mere men be asked to determine the equality of other men in any measurable dimension? How can any human being know another person or group so intimately that he or she can measure and find likeness with him- or herself or with another group?3 To make that judgment requires an intellectual and moral precision which stands outside of—and superior to—what is found in humanity. That is, the standing of a god. Other humans need not apply for the job.

So the only question left to politicians, lawyers, judges, and anyone else who operates in a legal, political, or social sphere is whether they, the emancipated black population—or any “other” in terms of the question—are equal under the law. In terms of the Declaration, the answer is “all men”—and by extension, overriding the prejudices of the times, all women, too. Anyone who qualifies as a human being is equal under the law. In the eyes of their creator, all people are equal in standing if not in quality of intellect, character, or other internal and external attributes. In standing before other political entities, such as in our republic under the Constitution, that equality before the law may be reserved for natural-born or naturalized citizens—although our law does not exempt foreigners from deserving respect and proper treatment; it simply denies them certain rights under the laws pertaining to citizens.

In our society—which I think is still one of the best in the world—people are not granted any more rights because they are smarter and better educated, or enjoy higher social standing and better political connections, or have access to more money. We have no natural aristocracy which can expect immunity under our laws. Everyone arrested and taken into custody goes to central holding until they can appear before a judge and try to post bail. That some people with money and connections will never spend a night in jail, no matter what they do, is popularly perceived as an injustice and not a proper application of the law. That people with money can buy a better defense at trial is countered in most judicial districts by the state providing public defenders to anyone without means.

Ours is not a perfect system. Injustices do occur. But this is because our society is managed by human beings; our institutions are established with good intentions but operated through the actions and perceptions of imperfect individuals. We should not try to improve this state of affairs by handing our rights and our fates over to higher orders of being such as angels, robots, or psychiatrists and social scientists. Instead, we live under a democratic system that permits average people—including their self-appointed advocates and journalists—to point out and discuss injustices, suggest remedies and alternatives, and put them up for a vote. This approach is sloppy, slow, and crude, but it works better than many more streamlined, idealized, artificial systems.

While differences in education, social standing, and wealth will not confer or deny rights under the law, and anyone who checks out as human is accepted into society and its protections, we do sometimes have to take into account marked differences in personal capability. Some people—whether through genetic inheritance, defect in the birthing process, disease, or accident—have lost the faculties that make them fit within accepted norms and so be accepted as fully capable in society. In most cases, they lack the mental capacity to function and so become vulnerable to reduced circumstances and predation by others. In some cases, they lack the moral depth or self-restraint expected of the average person and so become a danger to themselves or others. We have—any society should have—means of identifying, evaluating, and segregating these people from the rest of society.4 We do this for their own good and ours.

But these are not minor differences in mental or moral capacity. We do not deny the rights of a person who might be a few IQ points short of the average. Nor do we deny the rights of a person who has performed some minor indiscretion under the laws governing property or interpersonal relations. Our system is—or should be—designed to care for people who are incapable of functioning in society, and to protect society from those who have proved themselves resolute predators on their fellow citizens. And even those who have been distinguished by bad behavior rather than by diminished capacity are still allowed to change their outlook and redeem themselves.

So our society does, in these cases, make distinctions based on equality of intellectual and moral character, but only in the grossest and most obvious sense. We condemn only those falling in the lowest part of the normal spectrum of human development and achievement. Then our intent is only for the protection of the individual and society. And we are still, in these cases, only talking about equality before the law.

In even the most extreme cases, equality of personal essence, of character, or of soul still lies outside of human judgment, in the realm of whatever god or gods there may be.

1. About the only time we can reasonably call for and expect personal equality is in sporting contests. For example, we want two boxers or wrestlers to compete in the same weight class and have similar training and skills as established by previous performance. The same would apply—with obvious exceptions for the different positions played—to members of a baseball or football team. Certainly, if someone is going to bet on the outcome of a contest, he or she would expect a certain match in physical attributes and skills going into it.

2. The fact that Jefferson could believe his own words and at the same time hold black Africans enslaved, denying them liberty and their pursuit of happiness—and their lives, if he so chose—reflects a popular conceit of the 18th century. To the “civilized” white European, the “savage” black tribes of Africa were not entirely human, not fully members of the species H. sapiens. As such, they could not be granted equal rights with the white race. This is a latent belief that science and an improved morality have long since demolished—at least among people of greater education and better moral character.

3. In the matter of trying to judge a whole group, I side with Sergeant Kilrain, the fictional character in the movie Gettysburg. In a conversation with his colonel, Kilrain says, “Any man who judges by the group is a pea wit.”

4. Or we used to. In the case of people with clearly defined mental illness, our old system of care through certification and commitment to a state-run hospital has been overridden by concerns about the ill person’s rights. Essentially, we have lost the ability to distinguish between a healthy person deserving of full rights and an incapacitated person who cannot function in society. Where once we took care of them in hospitals, we now leave them to family care or let them roam the streets in proud, defiant misery with access to only occasional and poorly funded services. Something has broken down in our society, and we need the moral courage to fix it.