Sunday, December 28, 2014

Brothers Under the Exoskeleton

Anyone who has been following my weekly blogs will know that I am a convinced evolutionist. For ten years I worked at the biotech company that supplied genetic sequencing equipment for the Human Genome Project, DNA analysis tools for forensic and paternity testing, and the machines and reagents for hundreds of other research and clinical applications. I worked alongside chemists, biologists, and engineers and picked their brains whenever I could, as well as doing my own reading on the subject. In later years, it was my job to explain their work and the company’s products to our nontechnical employees as the industry moved from the genome and proteome to the epigenome, the metabolome, and other areas of study, deeper and deeper into life’s molecular secrets.1 From all of this, I know—not just believe, but know—that evolution is the model for the development of life on this planet. It’s literally written into our DNA.

The implication of this is that all life on Earth is related. We share a heritage not just with other mammals but with all the animals and even with plants, bacteria, and fungi. For one example, the DNA/RNA/protein coding system that these organisms use is the same as in our human cells.2 The proof of this is that we can manufacture a human protein in a mammalian or yeast host cell through the process of recombinant DNA. The resulting protein is not “just as good as” the ones made in our own bodies; it is chemically indistinguishable.

For another example, consider our shared structure. You wouldn’t think that an ant and a human have much in common. Apart from the size difference, ants are structured with a tough, inflexible outer shell—an exoskeleton—and hold their organs and other soft parts inside each jointed segment, while we humans have a bony internal skeleton that supports our vital organs, which are wrapped in a bag of skin and connective tissue.

But ants and humans, along with every other insect and animal you know—except for worms, jellyfish, and all the radially symmetrical sea life, like sea urchins and starfish—have a common arrangement. We all have a head that encloses our major neural ganglia, or brain. The head also holds our external sensory apparatus for sight, sound, and chemical receptors—that is, our eyes, ears, nose, and tongue—which connect directly to the brain. Further, the head contains our mouth for ingesting food. Human, dog, horse, cow, kangaroo, sloth, dinosaur, reptile, frog, fish, ant, spider, scorpion—we all have this same structural arrangement at what we generally think of as the “top” or “front end” of the body.

After the head comes our thorax, or chest, with the heart, lungs, and other equipment for breathing and circulation. And after the thorax, lower down or toward the rear, comes the abdomen—whether separated by a muscle called the diaphragm, as in humans and other mammals, or by segmentation of body parts, as in insects. Organs in the abdomen process food, eliminate wastes, and engage in reproduction. Human, dog, dinosaur, fish, spider—all put these functions in more or less the same place.

And while ants may have six legs that grow out of their thorax segment—and spiders the same general arrangement, except with eight legs—we humans, along with every other vertebrate that walks on land and descends from the line of fishes, have two limbs that grow from shoulder blades attached to the thorax and two limbs that grow from a pelvis attached to the spine near the abdomen. This is the tetrapod—or “four-footed”—superclass of animals. Count the front limbs—whether arms, wings, or flippers—and you only come up with two. Count the hind limbs—whether tipped with claws, hooves, or toes—and again only two.3

This is why the chimeras of classical mythology and the medieval bestiaries seem so strange and mysterious. Pegasus has the four legs of a horse plus a pair of wings. Griffins have the wings and legs of an eagle with the hindquarters and tail of a lion. Centaurs have the legs of a horse and the torso of a human. Angels have legs, arms, and wings.4 All of these supposed creatures are six-limbed, like the insects, and that violates the tetrapod morphology.

More than this, can we imagine a creature whose mouth was in its stomach? That would make the most sense, wouldn’t it? Give the stomach direct access to the outside world, rather than processing all that bulky food inside the head first and then passing it with a long tube—through the constriction of the neck, which must already contain the spine, muscles, tendons, arteries, and veins supporting the head—down past the heart and lungs and into the abdomen. Or can we imagine a creature with its eyes mounted on stalks alongside or atop its wrists and ankles? That would make controlling the feet in running and the hands in close work more convenient, wouldn’t it? We could also look around corners and over windowsills without exposing our fragile faces and heads to surprise attacks and hurled objects.

But these morphological improvements are not the way our bodies work—not in fish, frogs, reptiles, dinosaurs, dogs, or people.

These arrangements go back to an ancient set of genes called the homeobox, which is sometimes shortened to “hox.” These genes don’t code for structural proteins—we don’t have a “head” protein or a “chest” protein. Instead they code for “transcription factors,” proteins that stay inside the cell’s nucleus and switch other genes on or off during the earliest stages of embryonic development. Hox genes control the cascade of gene activity and the resulting proteins that create our most basic structure.

Animals, plants, and fungi—practically any organism with more than one cell, and so with the need to tell each cell in the developing organism where to go and what to become—have a set of hox genes. They are arranged differently and create different structures in animals and plants. But in the blueprint of the final organism, they are the first sketches that set the whole building job into motion.

The hox gene set is remarkably conserved. That means we share a lot of our genes and our resulting structure with most animals but less so with plants.5 Molecular biologists have studied the homeobox gene set most closely in the genus Drosophila, or fruit flies. They have—but did not originate—the same head-with-brain-and-eyes, thorax-with-heart-and-lungs, abdomen-with-stomach-and-reproduction arrangement that fish, frogs, reptiles, and mammals all have.

The interesting thing about fruit flies is that researchers can play with the hox gene set and not get a reprimand from the Society for the Prevention of Cruelty to Animals—that, and the flies breed new generations relatively fast, so you can study changes on a convenient time scale. What we’ve learned is that if you mutate these genes, or silence them, you can change the animal’s structure. You can breed fruit flies with legs growing where their antennae should be, or with eyes in the middle of their wings. Of course, if you change the number and arrangement of these genes too much—say, trying to place the head between the thorax and the abdomen—you simply kill the embryonic fly by creating a totally scrambled, nonviable structure.

The fact that we share so many proteins, so many of the genes that make them, and the genes that create our basic structures with other animals—and the DNA/RNA system that records and transmits all this with all other life on Earth—is proof enough to me that we are all related. And that relationship is mediated by gradual adaptation through many generations. The foreleg of the early tetrapod changes and adapts over time to become the wing of a bird or bat, the leg of a horse or cat, or the grasping hand of a monkey or a man. The compound, prismatic eye of the fruit fly occupies the same position in the head as the single-focal-plane, liquid eye of the horse or the human.

We are all brothers under the exoskeleton.

1. What are all these “omes”? In current molecular biology, an “ome” is the domain of a particular system under study. The genome is concerned with the operation of the genes: the DNA/RNA system. The proteome is the study of proteins and their interactions. The epigenome concerns itself with environmental and chemical effects that modify DNA expression. And the metabolome deals with the metabolism, its inputs and products, on either the cellular or bodily level. The field of molecular biology is widening all the time and simultaneously becoming intertwined, as researchers explore and link up all these different pathways and their effect upon one another.

2. With minor mechanical exceptions. For example, the ribosome—the RNA-based molecule which translates the coding of messenger RNA to assemble amino acids in making the body’s proteins—differs between eukaryotes (organisms whose DNA sits inside a cell nucleus) and prokaryotes (single-celled organisms whose DNA floats free in the cell). Many antibiotics work by inhibiting the operation of the ribosomes in prokaryotes but not in eukaryotes—which is why they kill the bacteria inside our bodies but not us or our livestock and plants. This is also why antibiotics won’t protect you from a virus, because viruses hijack the host’s genetic system to transcribe and translate their own genetic material.

3. But what about whales and dolphins? They descended from land animals that went back into the sea, and they have no legs. Neither do snakes. But these animals generally have vestigial hips and leg bones hidden inside their bodies. Even if they don’t use them, the genes for these features remain to make themselves felt. And of course, the tails of whales and dolphins are a different, boneless appendage not related to the organism’s skeletal structure.

4. How does the musculature of the angel’s human-appearing upper arm and shoulder cross over and coincide with the musculature of its birdlike wing? And which muscle system dominates the mechanics of the shoulderblade? It’s all a mystery.

5. Consider that most plants have their food-processing organs in their deep roots; their reproductive organs in their flowering tops; their lungs in their leaves; no hearts to speak of, because they rely on capillary action to pass fluids up and down; and no need for legs, because they spend their adult lives in one spot. They all share a homeobox organization that is just downright alien compared with that of mice and men.

Sunday, December 21, 2014

On Graffiti and Vandalism

Let me say right at the beginning that I hate both graffiti and vandalism. They are visual blights, signs of decay, and represent a loosening of the social order. You see scrawled signs, elaborate and indecipherable signatures, and spiky paint bombs in places where nobody is watching. You see broken windows, wrecked cars, shot-out street lights, and shot-up road signs in places where nobody cares. At the very least, they are marks of carelessness and disrespect for property rights. At the worst, they signal anger, despair, frustration, and hopelessness. Scrawled curses and broken windows are too often the salt crust left over from tears of rage.

And yet … I try to imagine a world where no one sprays graffiti, where no one breaks untended panes of glass. I think through the logical implications of this, and I don’t much like them.

Consider a world in which whole square yards of empty concrete and the sides of railroad cars and bridge abutments remain as visually empty as the day they were made. Consider a world in which abandoned buildings are never broken into and entered, where abandoned cars are allowed to rust gently into the topsoil, and where windows with nothing going on behind them gather only dust and sunlight and never the occasionally thrown stone. Perhaps that’s a world where everyone has good intentions, a liberal education, and a solid middle-class upbringing, with parents who teach their children to respect the property rights of others, think of the consequences of reckless impulse, and keep their hands to themselves. Such a world would belong to the proper little Ralphs among us.1

But not everyone—not by a long shot—has such a proper and respectful upbringing, such positive influences on their young impulses. For those among us humans who were not raised by a stern father and a reproachful mother, what would such a clean and orderly world signify?

Something missing, is my guess. A world in which young people—and those who still had the impulses of youth—did not itch to leave their mark in fresh paint, to break the abandoned windowpane, to rebel against the clean surfaces and orderly functions that others had left behind … such a world would be inhabited by drones. When left with idle time and no instructions to follow, they would fold their hands in their laps and sit quietly. They would contemplate the infinite and sink into their souls, like little Zen masters. Or they would simply switch off, like robots which had outrun their programming. Such is not human behavior.

Imagine a world where the young did not act out, did not test their strength against the inanimate landscape, did not break the rules. Imagine a world where idle people did not break into empty buildings to see what might be inside. Imagine a world where children did not roam the neighborhood, climb trees and walls so they could leap from their heights on a dare. Where they did not dig into rocky hillsides, looking for gold and treasure. Where they did not climb over the construction sites of new housing, free to hang from the door frames and scuff across the bare boards with their sneakers.2 It would be a world of little old people—or of insects and reptiles, hard-wired into certain mental and emotional patterns from birth. It would be an inhuman world.

Now I try to see graffiti as a sign of human creativity. Some person with a need for personal expression is experimenting with a new and exotic signature. Or trying to draw an elaborate haiku in unknown glyphs without ever lifting the brush and stopping the flow of paint. Some artist is trying to express the inexpressible, in loops and twists of an untrained imagination, using the only canvas that may be available to him or her, an unmarked wall or a sidetracked railcar.

I try to see broken windows as a sign of untested energy. Some bored youngster—or someone young in spirit—has picked up a stone and tested his or her skill in throwing it accurately; the crash and tinkle of breaking glass is his or her reward for a well-placed shot. Note that I’m differentiating here between the broken windows, stripped doorknobs, and trashed interiors of an obviously abandoned building and the damage done to an occupied home where people live behind the windows and inside closed doors. The former is idle play and reckless disregard; the latter is premeditated terrorism, which is wholly evil in intent.

Graffiti and vandalism are expressions of the human soul in rebellion. It would be better, of course, for the graffiti artist to be given a clean sheet of vellum, an orderly box of colored crayons or paints, and instruction in useful visual expression. It would be better if the vandal were given a hammer, nails, fresh boards, and the invitation to build up rather than tear down. But those, again, are the responses of a socially motivated, middle-class mentality. Spray paint on concrete and a stone breaking a window are what the untamed human being finds in the wild and seizes on for his or her own satisfaction.

These expressions are part of what makes us human. We are a restless, invasive, encroaching, seeking, striving, overturning species. We are not respecters of limits. We are not mindful of the ghostly property rights left behind on empty walls and in abandoned buildings. We climb fences and sleep in other people’s barns. We break windows to test our own skill and strength. We spray paint to mark our passage through the world. Graffiti and vandalism are part of what drives us forward.

I suppose we could change human nature to erase these blighted landscapes. We could try to eliminate the impulse to put our mark where nobody has made a claim, to break the glass that nobody seems to own. With enough patience—or sufficient socially focused violence—we could turn these restless humans into good drones.

The justification would be that we are no longer creatures living in the wild. Those social scientists bent on changing human nature would say that humans must now become more sociable animals, mindful of the feelings and property rights of others, because every square foot of the Earth by now belongs to someone else, somewhere else. And anything that is not already claimed, either here or out among the planets and the stars, must be left in its natural, untouched state, because wilderness has its own set of rights and priorities.

Sure, we could change our innermost nature. Some would say that we must change in order to build a stable, urban society. But I doubt it can be achieved with any amount of patience or socially focused violence. Humans cannot become insects or reptiles, hard-wired to calm obedience. We cannot become drones or robots or little old people with our hands quietly folded. We are a violent, untamed mammalian species, and that has been the key to our success in the world.

If you take away our fierce natures, we will surely begin to die as a species.

1. From William Golding’s Lord of the Flies. Ralph is the fair-haired protagonist who stands for personal responsibility, social order, fair treatment of others, and individual rights. As I recall the story, he doesn’t fare well.

2. When I was a youngster in my aughts and early teens, we lived in a new housing subdivision in the Boston suburbs. There the expanding periphery consisted of cleared lots, poured concrete foundations, and the rising frames of single-family homes nailed together in two-by-fours and one-by-eights. Exploring these building sites—not to damage them but simply to climb and play—was part of my childhood. From this experience, I also learned a fair amount about concrete forms, carpentry, and house construction just by observing how these new homes progressed.

Sunday, December 14, 2014

The Art of the Possible

German Chancellor Otto von Bismarck is quoted as saying, “Politics is the art of the possible.” He also said, “Politics is not an exact science.” I subscribe to those notions.

In any group—clan, tribe, municipality, state, nation, or empire—you will find people having different ideals and needs, holding to different values and opinions, following different paradigms, and drawing upon different bases of information. Whether your system of government is a pure plebiscite democracy like the ancient Greek city-states, a republic like ancient Rome, a monarchy or dictatorship with some kind of council of nobles or ministers if not a full-blown parliament, or even an absolute autocracy supported by a cabinet of hand-picked bureaucrats—at some point politics will enter the picture. People will have different ways of doing things and form into groups of like mind.

Even if the dictator or autocrat has stated his wishes and commands in excruciating detail, he must eventually leave them to his administrators for execution. In any endeavor larger than fetching the king or tsar a cup of tea, those supporters will have to interpret the commands, decide how to carry them out, and make sensible decisions when questions and conflicts arise. Politics is inevitable, because nothing having to do with human beings is ever simple and obvious. And the more human brains and voices that are involved in any question, the more complex it becomes.

Sometimes—rarely, but it happens—one group of like mind will be so strong that its values, paradigm, or interpretation of the information at hand is paramount, and its members have the power to override all discussion and work their will. But if the group is too large, that consensus will not last for long. Even the most monolithic authority base quickly develops its own splinter groups, offshoots, and intent contrarians arguing on the most closely held of questions. Ask the Muslims about Sunnis and Shiites. Ask the old Russian Social-Democrats about Mensheviks and Bolsheviks. The party in power always has its internal feuds.

And when monolithic power breaks down, the result is politics.1

Politics is all about negotiation and compromise. Lacking the power to force your will, you must resort to working with your opponents, bartering concessions for cooperation, giving in order to get. It’s a messy business, because you don’t know, can’t know, ahead of time, what positions your opposition will value most, or be willing to trade for, or couldn’t care less about surrendering. Politics is that “inexact science,” because every negotiation is different, as different as the people sitting around the table. Politics is also the “art of the possible,” because until you sit down to deal, you don’t know what you can actually achieve.

The older generation—the ones, at least, who have survived and now thrive—knows this. They have fought their battles and, having lost about as often as they’ve won, are prepared to make the best bargain they can. It’s the young and idealistic who are fixated upon their ideals, who are absolute in their loyalty to the current paradigm, who view concession as capitulation, who vow never to give up, never to surrender.

We’re facing that situation today in the United States. The two parties, Democratic and Republican alike, have both—but at different times in the last couple of decades and under different circumstances—fallen prey to the opinions and ideals of their extreme wings.2 They have both tried to force their centrists, their “squishy middle,” into lockstep with their most extreme policies. And in both cases, the spell of pure thought and lofty ideals über alles has worked for a while, and the party has wielded power in an almost dreamlike state.

But then reality returns, as it always does, as it must, because the people of a clan, a nation, or an empire are not all of one mind. And the essence of what consensual mind does emerge is never at either extreme of the political spectrum of the day, but instead somewhere in the middle. That’s why they call it “the middle.”

Any politician or political party that does not understand this and tries to impose its programs by conducting one-sided votes, issuing executive orders, and making regulations beyond the scope of the legislative mandate is acting like a naïve child. Such a politician reveals him- or herself as either an inexperienced neophyte or someone who has confused winning an election with staging a revolution.3 It may feel good to remain pure of heart and wedded to your ideals, but it’s not the way to remain in power long.

But it sometimes happens. I can think of a few cases—the Nazis under Hitler, the Soviets under Stalin—where a clique at the top maintained both a relatively pure ideal and their own vision of power. But these examples do not bear repeating, because their methods included harsh repression, purges and cleansings, scapegoating, prison camps, and—when all else failed—a resort to war to steel the population and conceal the government’s true purposes under the flag of patriotism. And in the end those systems collapsed anyway, causing widespread confusion and misery. Not, in my opinion, the way to go.

So, at the end of the day, at the end of all your speeches and campaigning, you have to sit down and deal with your fiercest opponents and your squishy middle. It’s inexact, it’s messy, but in a universe of competing values and ideals, it’s the only sensible way to govern.

1. Or war, which is “a continuation of political intercourse carried on with other means.” That’s a quote from Carl von Clausewitz, another German political theoretician.

2. I have a litmus test for finding those extremities: simply ask someone if there is a difference between the two parties. If he or she can see no difference, then that person is operating from a paradigm far to the left or right of where the two parties rub shoulders.

3. So, I’ll reveal my conservative bias here. The extreme left wing of the Democratic party has, in my view, never weaned itself from the revolutionary politics to which its now aging, Baby Boomer members pledged themselves in the heyday of the 1960s. They self-identified with guerrilla opposition groups in Cuba and Vietnam, and with underground, iconoclastic movements within the industrialized West. They became intent on bringing down the monolithic power of the “military-industrial complex,” on opposing “the man,” and on achieving an impossibly utopian state of being. Such dreams and ideals make one an inspiring advocate for a radical viewpoint but a poor candidate for actually taking power, resolving crises, and governing successfully. Every revolution that ever succeeded has had to go through a period of struggle where power-holding realists had to contain and eventually eliminate the revolutionary idealists. Ask Leon Trotsky. Ask T. E. Lawrence.

Sunday, December 7, 2014

The Insurance Model for Health Care

I’ve written on this topic before, but it’s time to repeat some obvious—at least to me—truths in the matter of modern medicine, health care, and the insurance model that pays for most of the care in this country.1 That model would seem to have outlived its usefulness.

The concept of “insurance” grew out of mercantile transactions in the coffee houses of 17th century London, when transoceanic shipping was a relatively new and dangerous business, subject to piracy, storms, navigation errors, and sudden groundings. The loss of just one ship could break the finances of the merchant who moved his goods with it. So the merchants as a group and the moneymen of the time got together and paid into a pool that would pay out on the loss of any covered ship and cargo that met with such a mishap. If nineteen ships out of twenty returned to port and only one foundered, the deal was pretty good for the insurers. They would collect all twenty premiums and suffer only one claim of loss, keeping the rest of the money for themselves and future ventures. However, if only fifteen out of twenty ships returned, then the ship owners would be relieved of worry while the insurers might be broken financially. The transaction was all about the odds, and figuring those odds more and more closely increased humanity’s understanding of issues like probability and risk.
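The arithmetic behind that bet is simple enough to sketch. Here is a minimal worked example, with the voyage value and the one-loss-in-twenty odds as round, purely illustrative assumptions rather than historical figures:

```python
# A minimal worked example of the pooled-risk arithmetic described above.
# The voyage value and the loss odds are illustrative assumptions only.

def break_even_premium(voyage_value, loss_probability):
    """Premium at which the insurer expects neither profit nor loss."""
    return voyage_value * loss_probability

SHIP_AND_CARGO = 10_000      # value of one insured voyage (arbitrary units)
SHIPS_INSURED = 20

premium = break_even_premium(SHIP_AND_CARGO, 1 / 20)   # 500 per voyage

# One ship in twenty founders: the pool just covers the claim.
collected = premium * SHIPS_INSURED     # 10,000 collected
paid_out = SHIP_AND_CARGO * 1           # 10,000 paid out; insurers break even

# Five ships founder at the same premium: 10,000 collected against 50,000 in
# claims. The pool is broken, which is why figuring the odds mattered so much.
print(collected, paid_out, SHIP_AND_CARGO * 5)
```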

You insure an asset against an improbable—or at least not all that likely—occurrence: against your car being wrecked or stolen, or your house burning down, or the liability you face in operating a car or owning a house. The less likely the event, the less you pay to insure against it. Most drivers can keep their car in a locked garage and use it responsibly, so the risk of loss is negligible. Most homeowners can build with fire-resistant materials, install smoke detectors, and keep a fire extinguisher handy, so the risk of loss is again vanishingly small. Most people can afford to insure these valuable items, whose loss might be financially devastating, because the risk is low. On the other hand, people who live on a flood plain or in an active seismic zone will find that flood and earthquake insurance are prohibitively expensive, because the river rises every spring and the earth’s crust is constantly moving.

Some people think this is unfair. They believe insurance companies should cover everyone equally. The insurers should just expand the pool of risk so that the near-certainty of the homeowner living on a riverbank being flooded out will be covered by flood insurance payments from people living on hilltops or in deserts. And people living in Iowa and Nebraska should pay significant earthquake insurance premiums in order to cover the losses of people in California, so the latter can more easily pick themselves up after the next Big One. But that is not the insurance model. Instead, it would be some kind of disaster-relief fund. It would be the promise that, come what may, whatever you do as an individual, your choices will be riskless, your life situation without discomfort, and your future protected from all serious losses.

Now that’s a nice idea—a heartwarming idea. But at the same time it would remove individual choice, foresight, and responsibility from the course of one’s life. It would also raise the insurance rates for everyone to uneconomic levels. And the more completely any insurance carrier, even one operating at government expense, tried to eliminate all risk, the more money it would have to collect against the inevitable losses.

Finally, no one insures a house for the costs of routine maintenance like painting the walls and repairing the roof, or a car for oil changes and wear-and-tear on tires. These are the expected expenses of ownership, not the results of catastrophic loss. And yet those who want a risk-free life also tend to expect a cost-free life as well.

For years, people paid “health insurance” that was really protection against the unexpected costs of a “major medical” complication like a broken leg or cancer treatments. The insurance covered big hospital bills and doctor’s fees, but not routine checkups and minor coughs, colds, cuts, and abrasions. Then in the 1970s and ’80s, Health Maintenance Organizations were formed to cover more of an individual’s or family’s health bills, including those checkups and preventive measures. This reflected the growing realization that sickness wasn’t something that just fell on people out of the blue but a condition they could, to some extent, control through good diet, exercise, restraint from smoking and drinking, and early discovery of potentially harmful conditions.

It was a good idea. But it obscured the realities of the insurance model. People came to think that every procedure related to health should be paid for by someone else. Insurance was no longer coverage against the big losses due to catastrophic accident and illness, but instead became the way to pay for all routine health care, minus a modest copay that did not rock your pocketbook. This was not that much different from expecting your car insurance to pay for oil changes and new tires, or your house insurance to cover a painting crew every five to ten years.

The insurance model as it applies to health is broken in another way. Sooner or later every asset reaches the end of its design life and then of its useful life. The timbers of a hundred-year-old house become too old, too eaten by termites, or too riddled with dry rot to support the structure. The frame and body panels of a car—or at least those in the East, where they put salt on the roads in winter—become too rusted to last another season. The cost of repairs, compared with the cost of tearing down the house or scrapping the car and building or buying a new one, no longer makes sense. Unless the house is a national monument, or the car a rare and valuable model, or one with sentimental value, the owner makes the inevitable decision.

We don’t do that with our bodies, even though they also have a built-in design life and a point at which further health care will only prolong life on a constantly diminishing scale. People invoke more complex and invasive procedures as they age, spending increasing amounts of health-care dollars, to preserve their quality of life and indeed to preserve life itself. Traditionally, a person’s greatest medical expenses come in the last six months of his or her life. This is like demanding a new engine and transmission, new suspension, and new paint and seat covers on a rusted hulk that has 300,000 miles on the odometer. And yet, people are not cars, and we innately resist the idea that anyone should suffer for lack of adequate care, no matter at what point in his or her life.

The Patient Protection and Affordable Care Act (PPACA) that passed in 2010 was ostensibly designed to extend individual health insurance coverage at fair rates to all Americans. Indeed, it included rules that require employers to provide health insurance or pay a penalty, and rules for individuals to obtain health insurance or pay a penalty. It was sold to insurance companies as vastly expanding their customer base and so increasing the scale of their business. This would seem to be a resounding government investment in the health insurance model.

And yet, the Affordable Care Act included many features that work against this all-inclusive vision. For one example, the penalties for employers and individuals are substantially less than the projected costs of the insurance which the act mandates. By forcing employers to offer coverage for all of a person’s health costs, including such maintenance items as birth control—similar to requiring auto insurance to include oil changes—the act increases the potential cost of that insurance. Although the employer penalties have been artfully delayed for political reasons, the actual effect will be to discourage employers—who are by far the greatest source of individual and family insurance coverage since the wage controls of World War II—from continuing to provide this benefit. For another example, the act requires insurers to cover all individuals at the same rate, regardless of their state of health. This “community rating” is like charging homeowners the same for fire insurance regardless of whether the house is made of brick or wood, or drivers the same for auto insurance regardless of their accident and arrest record. The actual effect will be to increase rates for everybody, like charging people in deserts for flood insurance to pay for the losses of people living along riverbanks.

In my mind, all of these conflicting features cannot be explained simply as sloppy rulemaking—politicians trying to give everyone on both sides of the deal, insurers and insured alike, everything they want. Instead, I think the act was actually designed to break the insurance model of health care. And then, when the insurance companies have been forced out of business, the state will be required to step in and provide public health care on the Medicare and Medicaid model. In short, this act was very carefully designed to destroy the existing payment system.

But then, I don’t like the insurance model in the first place. People are not like houses or cars. Human life is a different order of proposition, and its maintenance and continuation should not be subject to economic considerations. And finally, as explained in my earlier blog, the current and future advances in medical technology, from genetic analysis to stem cell reprogramming, are blowing apart our earlier conceptions of health, sickness, aging, and even death itself. Two hundred years ago—before the germ theory of disease and during the reign of the four humors, black bile, and bloodletting—medical practice was a matter for royalty and the very rich. Everyone else went to the local wise woman or witch doctor. Today, modern medicine serves a real purpose in life improvement and extension. It has become a necessity of life.

It is my belief that in twenty or thirty years, through the combined action of institutional and academic researchers across the country and around the world, we will have defined every chemical process and reaction in the human body. We will be able to manipulate and regenerate tissues. We will be able to address the causes of sickness and aging, repair broken and deteriorating bodies, and reshape the human destiny far beyond “three score and ten.” And because these technologies will be applied on the model of the printing press and the assembly line, with modularized components, the costs will come down dramatically.2 This is the point of my most recent novel, Coming of Age.

So I should be happy that the Affordable Care Act is rushing us toward a large-scale remaking of the health care industry. Except … I distrust large bureaucracies and globalized offerings. When the state tries to run everything along top-down, command-and-control principles—as it did in the Soviet Union for seventy years—the result is always stagnation, smothering of innovation, and loss of individual choice. As they say in the clothing business, one size does not fit all.

Instead, I would look for a multitude of patient-and-provider options, along the line of cooperatives and subscription services. Kaiser Permanente is a good model of this, where the doctors and support staff form a provider organization, and patients buy its services on the installment plan. Of course, it is common knowledge among users that the quality of your experience with Kaiser depends on how the local organization is run. Some areas provide great service, others not so much. But this is only a problem if restrictions exist on the formation of new, competing service organizations. When choice is restricted, the incentive to do better goes away. The tendency of people and organizations to excel when faced with competition on price and service offering is a built-in feature of free markets.

Taking the long view, I’m not too much bothered by the havoc that the Affordable Care Act will create in the current health care industry. A collapse was coming anyway, due to the changes in medical technology that are barreling down upon us like an express train. The old system of paying for catastrophic coverage and pre-existing conditions could not survive when every illness has its detectable precursor and every accident has its optimum repair.

But in the meantime, to paraphrase Bette Davis in All About Eve: “Fasten your seatbelts, it’s going to be a bumpy ride.”

1. See Personalized Medicine and the Insurance Model from December 26, 2010.

2. Anyone who doubts this should compare the cost of pioneering medical procedures, from organ transplants to laser vision surgery, with the cost of these services today. With widespread use and improvements in practice, the price comes down tenfold. For that matter, consider the cost of the first personal computers or cell phones with the equipment and prices available today. In a technologically oriented world, everything becomes relatively cheaper. See Gutenberg and Automation from February 20, 2011.

Sunday, November 30, 2014

My Problems with Interstellar

I finally broke down and saw the movie Interstellar in a theater last week, rather than waiting for it to come out on disk, as I do with most movies these days. There are many fine things in this film. It has good characters, good actors, an interesting picture of love arcing across the years, and a superficially satisfying ending. But I have a few nits to pick with the actual science and, at the end, a plot hole you can drive a starship through. (Spoiler alert at this point! However, I’m assuming most of the people who care have already seen the film.)

First, the science. In a distant galaxy reached through a mysterious wormhole,1 the crew of the starship Endurance is tasked with visiting at least three planets which are candidates for a new Earth. The first planet they go to is orbiting “the rim” of a black hole. As one of the characters aboard the starship mentions in passing, this is some kind of special black hole—tame? tired? incompetent? incomplete?—I didn’t catch exactly how it was special.2 The plot point is that if they land on this planet, their experience of time will be slowed, so that one hour on the surface will equal seven years for anyone back on board the starship or on Earth. So they will have to move quickly and get out of there fast. The way they protect the starship from this time dilation is by not going into orbit around the planet itself but instead staying in an orbit around the black hole just beyond the planet’s orbit, somehow keeping pace with the planet so that travel down to its surface still covers a manageable distance.3

My major science nit is that black holes and their environment are not magical time dilators, as the film appears to suggest. Small black holes can have fierce tidal effects that tear you apart, and large ones can draw you into an orbital acceleration that tends to break up matter into an accretion disk full of plasma and particles. But the only way to have your personal time slowed so significantly, compared with “normal time” for the rest of us, is either to increase your speed dramatically or to visit an area with high gravitational acceleration. This is because relativity makes no distinction, in its effect on a traveler’s clock, between the acceleration of motion (going fast) and the acceleration of gravity (getting heavy). At near lightspeed or deep inside a gravity well, a person’s time slows markedly, as in the film, so that hours spent at speed or being heavy become years for someone not so accelerated. And at the speed of light itself, the traveler’s or mass-gainer’s time stops completely, relative to outside observers.4

Inside a deep gravitational field, a person’s time also slows dramatically, such as the film’s one-hour-for-seven-years exchange. But how strong does that gravitational field have to be to create a noticeable effect? The Earth’s gravity, one g, slows our clocks by about 0.02 seconds per year compared to an observer hanging around—i.e., not traveling or orbiting—out in interstellar space. A clock on the surface of the Sun, at about 28 g, loses 66 seconds per year compared to clocks on Earth. So the gravitational field of a small stellar mass, such as our Sun’s, has negligible effect on a visitor’s clocks. If I vacationed on the Sun for a year—having found a solid surface on which to stand and managing not to burn up—and then came back, my relative lack of aging would hardly arouse my doctor’s suspicions. But long before that the gravity load would have flattened me, because 28 g is not physically sustainable for humans.

In Interstellar, a visitor to the first planet orbiting the film’s black hole near to, but still somewhere outside of, its event horizon is said to lose 61,362 hours—seven years’ worth of Earthly hours—for each hour spent on the surface. For comparison, you only lose 0.00753 seconds for each hour spent on the surface of our Sun. By my rough calculations—and not trying to figure out radial distances and the black hole’s Schwarzschild radius—to create a time dilation on this order of magnitude, you would need a black hole with a mass 2.93 × 10^10 times the mass of our Sun.5
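For anyone who wants to check the smaller figures above, here is a minimal sketch using the Schwarzschild approximation for a static clock, where the fraction of time lost is 1 − √(1 − rs/r) and rs = 2GM/c². The masses and radii are rough textbook values, and the sketch covers only the Earth and Sun numbers, not the film’s (or my) black-hole estimate:

```python
# A minimal sketch of the Earth and Sun time-dilation figures quoted above,
# using the Schwarzschild approximation for a static surface clock.
# Masses and radii are rough textbook values (assumptions for illustration);
# the result is measured against a distant, unmoving observer.

import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8                # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7

def seconds_lost_per_year(mass_kg, radius_m):
    """Seconds per year a surface clock loses versus a distant observer."""
    rs = 2 * G * mass_kg / C**2              # Schwarzschild radius
    rate = math.sqrt(1 - rs / radius_m)      # proper time per coordinate time
    return (1 - rate) * SECONDS_PER_YEAR

print(seconds_lost_per_year(5.97e24, 6.371e6))    # Earth: ~0.02 s/year
print(seconds_lost_per_year(1.989e30, 6.957e8))   # Sun:   ~67 s/year
```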

To orbit close to the event horizon of this monster, you would be traveling very fast, probably close to the speed of light—assuming you could accelerate fast enough to establish a stable orbit and not just spiral inward toward the event horizon. Any planet that got so close would be torn apart, rather than simply experiencing massive tidal waves. The planet’s crust would bulge monstrously in time to the planet’s rotation, unless its own rotation were first locked to its orbit, as our Moon’s rotation and revolution are locked. The atmosphere and the ocean would be torn away. People landing on this planet could ignore its puny gravity and would live under the gravitation of the black hole itself, which would smear them into a thin paste of plasma and particles.

But there was no hint in the film that the watery planet was orbiting the black hole at anything approaching the speed of light. If it were, the starship would need to match that speed if it was going to orbit at a comfortable traveling distance beyond the planet’s own orbit. And whether by inertial or gravitational acceleration, the astronaut left aboard the starship would have experienced roughly the same time dilation as the crew that landed on the planet. In the film, however, this lone astronaut and the people back on Earth experience 23 years of time while the crew that lands experiences only a few hours. In any case, attaining the speed needed to match that planet’s orbit around the black hole, or to break away from that orbit later, would seem to be beyond the starship’s capabilities—or else why did it take the crew two years to travel from Earth to the vicinity of Saturn, where the mysterious wormhole awaited their passage?

Second, the plot hole. Because Cooper (Matthew McConaughey) and Brand (Anne Hathaway) spend three hours on the water planet, the story in the rest of the universe is advanced by 23 years. Cooper’s daughter Murph is now a young woman (Jessica Chastain). Brand’s father (Michael Caine) has become an old man near death. Then Cooper and Brand and the aging scientist who stayed aboard the starship go on to visit a second planet, where they are not apparently affected by any time dilation. On that planet, Dr. Mann (Matt Damon) has been faking his data about the planet’s habitability, turns homicidal when his deception is about to be discovered, tries to steal their starship, and in the process disables it.6 So, to reach the third, most distant, and yet most favorable planet of all, where Brand’s long-lost love is waiting to be rescued, Cooper and the ship’s robot assistant must drive the two remaining landers and use their thrust to slingshot the wrecked starship around the black hole to begin the journey.

Because of what I believe is a misinterpretation of Newton’s Third Law about action and reaction, Cooper and the robot agree that the starship must discard the two landers and their pilots when their fuel runs out. Or maybe they’re just shedding excess weight, because they mention that as a reason, too. The misinterpretation has to do with the difference between simply dropping off the excess material and actually accelerating it away from the starship as reaction mass under Newton’s law. If discarded weight added to your boost—the misinterpretation of Newton—then NASA missions would get a useful kick when they dropped their first and second stages, or when the Shuttle dropped its solid-fuel boosters and main fuel tank—and they don’t. Cooper and the robot then fall into the black hole, while Brand proceeds to the third planet and the discovery of a livable world.
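To make the Newton point concrete, here is a minimal sketch of the ideal (Tsiolkovsky) rocket equation. The ship and lander masses and the exhaust velocity are arbitrary assumptions for illustration, not figures from the film; the point is that mass released at zero relative velocity exchanges no momentum and adds no boost, while the same mass thrown backward as propellant does:

```python
# A minimal sketch of the reaction-mass point above, using the ideal
# (Tsiolkovsky) rocket equation. Masses and exhaust velocity are arbitrary
# illustrative assumptions, not figures from the film.

import math

def delta_v(final_mass, discarded_mass, exhaust_velocity):
    """Ideal velocity gain from expelling discarded_mass at exhaust_velocity."""
    m0 = final_mass + discarded_mass      # mass before the burn (or the drop)
    m1 = final_mass                       # mass after
    return exhaust_velocity * math.log(m0 / m1)

print(delta_v(10_000, 5_000, 0.0))     # dead weight simply let go: 0.0 m/s of boost
print(delta_v(10_000, 5_000, 4_500))   # same mass expelled as propellant: ~1,800 m/s
```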

Inside the black hole, Cooper and the robot are able to experience multiple instances of time in a single place—his daughter’s bedroom, where some strange things have been happening throughout the film—and the robot is able to make observations about gravity that Cooper then communicates to Murph. The young woman has been trying to solve equations related to gravity in order to move the entire population of Earth off the planet and into space. To do this, she needs to answer some unspecified question about gravity that apparently you can only find if you’ve experienced the inside of a black hole.7

Once Cooper gives Murph the new data, he realizes that he and the robot have somehow created this whole multi-dimensional effect, the wormhole and everything else, as an expression of their own will—and this is another gray area in the story line. With this realization, the tesseract, or multidimensional cube, that they inhabit, along with every second of time passing in that bedroom, automatically begins collapsing. They are somehow ejected from the black hole, returned to their own galaxy through the wormhole, and picked up by off-planet Earthlings about 100 years into Cooper’s future. He and the robot have experienced so much time dilation that Murph is now a frail old woman (Ellen Burstyn) about to die.

But Cooper finds nothing to interest him in this new world of off-planet living inside humankind’s new O’Neill colonies.8 He commandeers a small scout ship to go back through the wormhole to find Brand on that third planet where, inexplicably, the love of her life has now died and been buried. The plot hole I find is this: if Murph has gone from her 30s to her 90s while Cooper was inside the black hole, then why wouldn’t Brand, who never entered the black hole, have similarly aged and now be an old woman? She’s shown—apparently in real time, and not just in Cooper’s imagination—at the same age as when she rode the starship away from the black hole toward the planet of her lover.

As I said, Interstellar as a film offers many fine moments, good acting, and some interesting perspectives on time and human history. But these science issues and this plot hole are serious matters for me. Any working science fiction writer who brought this manuscript to a publisher would feel slightly embarrassed, knowing that corners had been cut. Any conscientious editor would require him or her to address these problems—and fixing them would necessarily change the story in significant ways. Otherwise, the author would be left with vigorous arm-waving, insisting this is a special black hole and the new galaxy is just different. And careful, caring readers would be left sputtering, “But, but, but …” The whole project would diverge into realms of science fantasy and magic. And that’s just not satisfying in a story so strongly dependent on its use of science.

1. Okay, first minor quibble. Wormholes are accepted science fiction motifs for accomplishing faster-than-light, interstellar travel. We blink at them in movies like Stargate and in television shows like Deep Space Nine. But wormholes are mere conjecture, based on the unfounded premise that space is somehow tightly folded through alternate dimensions beyond the three—x, y, and z—that we actually perceive and experience. They are a mathematical game, not an artifact of accepted physical science. Like time travel, wormholes belong more properly to the realm of science fantasy than to serious speculative fiction.

2. From my reading to prepare for writing The Doomsday Effect, black holes are of two types: rotating and nonrotating. Other than that, the only distinguishing feature is their mass, which determines the depth of their gravity well and the size of their event horizon. At heart, the “hole” is simply an infinitesimal point, a singularity, harboring all that mass. And the event horizon is simply the distance at which the escape speed from the gravity well exceeds the speed of light. More than this, science—and all our theories—sayeth not.
       My understanding is that the spin of a rotating black hole is only important because the singularity cannot revolve around itself but instead describes a tiny circle. If you could dive through that circle—good luck with that!—you might travel outside the spreading “time cone” as described by the speed of light in normal space. (I planned something of this nature for a fragment of Kornilov’s wrist bone in a possible sequel to The Doomsday Effect.) I've never heard that the spin has any effect on surrounding space, unless it is to create gravity waves, much as from a rapidly spinning neutron star or pulsar. However, opinions on this differ: see the entry on black holes from The Physics of the Universe. But I would argue with the last paragraph to the extent that it’s not the event horizon you can never quite reach but the singularity itself. And again, all of this is conjecture supported by mathematics, not by our experience or direct observation.

3. Orbital mechanics are difficult and the first thing most theatrical depictions of science get wrong. (Remember that early Star Trek episode, where to stay in orbit a landing craft had to fire its thrusters? Unh-uh!) So here, if you want to orbit a primary like the black hole and still match speeds with another object in orbit, like the planet, you have to enter the same orbit as the object of your desire. To maintain an orbit just beyond it or further out from the primary, you must move at a different speed; you cannot “pace” the planet from a higher orbit as it goes around the black hole.
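To put rough numbers on that, here is a minimal sketch assuming simple circular orbits; the primary’s mass and both radii are my own arbitrary stand-ins, not anything from the film. The angular rate of a circular orbit falls with radius, so a ship parked even slightly farther out steadily loses ground on the planet below it:

```python
# A minimal sketch of why a slightly higher circular orbit cannot "pace" a
# lower one: the angular rate of a circular orbit, omega = sqrt(G*M/r^3),
# falls with radius. The primary's mass and both radii are arbitrary
# assumptions for illustration.

import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 1e8 * 1.989e30       # a hypothetical hundred-million-solar-mass primary, kg

def angular_rate(radius_m):
    """Angular velocity (rad/s) of a circular orbit at radius_m."""
    return math.sqrt(G * M / radius_m**3)

planet_orbit = 1.0e12    # the planet's orbital radius, m (assumed)
pacing_orbit = 1.1e12    # a would-be "pacing" orbit 10% farther out, m

print(angular_rate(planet_orbit))   # the planet sweeps around faster...
print(angular_rate(pacing_orbit))   # ...so the outer ship falls behind every revolution
```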

4. Which leads to a conundrum: if you ride a light wave, you experience time normally, while to observers outside your frame of reference you would appear frozen. If time actually “stops” for you, then it would follow that the universe around you experiences an infinite amount of time compared to your experience. When you finally get off the wave and return to a more manageable velocity, the universe will have expanded to a thin gas, the stars burned out, and you are left in a cold, dark, empty place. This is why travel at lightspeed is theoretically impossible. When you get where you’re going, it won’t be there anymore.

5. That’s 29,300,000,000 solar masses—a truly galactic-scale black hole! Even if my math is wrong by a couple of decimal places, we’re not dealing with a black hole formed by the collapse of any star we know about. This one eats out the hearts of entire galaxies.

6. Apparently, the starship Endurance is such a rickety contraption of modules assembled in a rotating ring that the explosive outgassing from overriding the airlock controls can blow it apart. One wonders how the IQs at NASA could have dropped so sharply since the Apollo missions. But I haven’t had so much fun with a scene since Dave Bowman crossed over to Discovery without his space helmet.

7. While it might be great fun to fall inside the event horizon of a black hole, the information you obtained would, in my opinion, be minimal. You would accelerate toward the singularity until you reached its terminal velocity. At some point you might reach the speed of light, experience time stoppage, and continue to exist in your own time frame, eating, drinking, laughing, and scratching, but not becoming aware of anything happening outside yourself. Before that happened, however, you’d probably fragment and turn into plasma and particles. In any case, you wouldn’t learn much about gravity—no matter how many spatial dimensions you invoked—and if you did learn anything, you wouldn’t be able to communicate it to the world outside the black hole. Regardless of what Stephen Hawking predicts about virtual particles appearing and annihilating each other—or not—black holes are famous for not giving up light rays, information, or their dead.

8. These are huge cylinders spinning in space to create the acceleration of artificial gravity on their inside surfaces. For stability, one usually places them at the Lagrange points in an orbital system, such as around the Moon. Creating them requires no unusual information or interpretation of gravity. You do, however, need to transport a lot of rebar, concrete, glass, hardware, potting soil, and money to some distant point in space.
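The spin-gravity arithmetic is ordinary mechanics, with centripetal acceleration a = ω²r on the cylinder’s inner surface. A minimal sketch, with the 4-kilometer radius as an assumed figure for illustration only:

```python
# A minimal sketch of spin gravity: centripetal acceleration a = omega^2 * r
# on the cylinder's inner surface. The 4 km radius is an assumed figure for
# illustration, not a specification from the film or from O'Neill's designs.

import math

def rpm_for_gravity(radius_m, accel_m_s2=9.81):
    """Rotations per minute needed to produce accel_m_s2 at radius_m."""
    omega = math.sqrt(accel_m_s2 / radius_m)    # angular velocity, rad/s
    return omega * 60 / (2 * math.pi)

print(rpm_for_gravity(4_000))   # ~0.5 rpm gives one g at a 4 km radius
```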

Sunday, November 23, 2014

Stranger Than Fiction

The old adage holds that “Truth is stranger than fiction,” to which I reply, “Because fiction must be realer than truth.” That is, fiction must be plausible, while the truth only has to be verifiable. … But sometimes you gotta wonder.

As proof of the adage—at least as far as popularly accepted history goes—I offer the events of 51 years and one day ago: the assassination in Dallas of President John F. Kennedy. If I were to submit an idea for a manuscript or screenplay based on that sequence of events, I would be laughed out of both New York and Hollywood.

Consider: a U.S. president was shot in broad daylight during a motorcade along a crowded parade route. The assassin was apprehended almost immediately, held for two days, and then was himself assassinated. An investigative commission headed by a sitting Supreme Court Chief Justice was launched to analyze the event but could not say with any certainty how many bullets were actually fired, where they came from, or where they went. One bullet—a full-metal-jacketed round consistent with the type of rifle found in the book depository, the round the assassin presumably fired, which passed through the President’s body in the back seat of the limousine and then through the torso and wrist of the Governor of Texas sitting in front of him—was later recovered from a hospital gurney but appeared suspiciously pristine. A second bullet, supposedly of the same type from the same gun, went through the back of the President’s skull and fragmented inside his head. Evidence was produced—the Zapruder film, the puff of smoke, the grassy knoll—suggesting that the actual shooter was not in the book depository at all but instead to one side along the motorcade route, or perhaps firing from the front. And finally, the Warren Commission published its official findings and sealed all of the evidence for 75 years.

So much about the event did not add up, or was left unanswered, that a deluge of conspiracy theories quickly ensued. Key evidence was considered mishandled, misinterpreted, or forged. This is not the way history is supposed to be made.

The current popular theory, presented in Oliver Stone’s movie JFK, suggests that Lee Harvey Oswald was merely a patsy, a fall guy, and the real shooter or shooters were on the knoll. This story puts forth a conspiracy between FBI Director J. Edgar Hoover and the CIA at the behest of the “military-industrial complex” to assassinate the popular president because he was about to pull out of our involvement in Vietnam. The fact that Oswald’s assassin, Jack Ruby, appears to have had CIA connections is offered as proof: Oswald had to be killed before he could talk and show that he was just an innocent loony who could not have pulled off the job alone but was in fact working for government spies.

The Stone version is unbelievable on the face of it. First, although the anti-war Left would love to claim the martyred Kennedy as one of their own, the fact is that he subscribed to the Domino Theory and was pushing this country deeper into the Vietnam conflict because of that conviction, rather than pulling us out of it. I remember this, because I was alive then and conscious of the news. If anything, the “military-industrial complex”1 would have wanted to unseat Kennedy because of recent fiascoes like the Bay of Pigs invasion of Cuba. And if you’re an industry getting rich on government contracts, you wouldn’t relish the prospect of the United States going further into a sideshow conflict like the Vietnam insurgency, which could promise only procurements of M16 rifles, army boots, and light helicopters. Instead, you would want to focus on the Cold War and the big strategic picture, which would continue to net you lots of complex and expensive contracts for aircraft carriers, ballistic missile submarines, and heavy bombers.

Second, Earl Warren was an eminently respected jurist and a man of great intellectual honesty. If he had obtained evidence that J. Edgar Hoover had connived to assassinate a sitting U.S. president, Warren would not have hesitated to make those findings public, whatever the outcome. The idea that he might have helped to cover up an attempted coup is ludicrous.

Third, why seal everything for 75 years? Presumably that’s one whole lifetime—of anyone born on the day of the assassination—in order to protect the “innocent persons” connected with the case? What innocent persons? And how would an innocent person be damaged by the truth?

I have a theory about the assassination. I believe it fits all the facts. And I think I know why the CIA was involved and why the Warren Commission sealed their unpublished findings.

I believe Lee Harvey Oswald did in fact kill Kennedy. He was a trained marksman from the U.S. Marines, and his position in the book depository was the right one for the attempt. With the motorcade moving away from the building in a more or less straight line, he had time to align and take his shots. Of course, the range was increasing all the time, which would make the first shot most important and any that followed increasingly uncertain. It might have been better to take a position on the bridge, in front of the motorcade, so that the range would be decreasing and the chances with second and third shots getting better all the time. But that would have put the shooter in daylight, with every eye in the motorcade looking toward him. No, the shot from above and behind was the right one.

If Oswald were the patsy, the fall guy to hide the actual shooters, then these presumed professionals gave him the best shooting position. And instead they took the least likely: from the side, where a gunman would have to lead a target moving at an unpredictable speed, where a stray bullet would go off into the crowd standing opposite or, worse, end up in the limousine’s side bodywork, proving that there had been more than one shooter and that Oswald, the patsy, was not working alone. The firing position on the knoll was in the open, too exposed to passersby, while Oswald’s position on the sixth floor of the book depository was inside a window, hidden in the shadows.

Oswald was a disaffected young American. Before the assassination, he defected to the Soviet Union, spent time in Minsk at an unsatisfying job in an electrical plant, and married a Russian woman. Then he asked to be repatriated to the United States. To me, these are significant facts that tend to get lost in the high weeds of the conspiracy theories.

So here is what I think happened. Somewhere in the Soviet Union he was contacted by a low-level KGB agent. The agent recruited Oswald in a plot to assassinate Kennedy. Not being senior level, this agent possibly believed he would be making his superiors happy because of the embarrassment Kennedy had caused the Soviets by standing firm during the Cuban missile crisis. Or maybe the agent just thought, “What the hell,” and tossed Oswald over the wall to see what might happen.

Then, wonder of wonders, this unhappy dweeb actually manages to kill the President under the eyes of the Secret Service and a very large crowd. The low-level agent tells his KGB bosses, “Hey, look what I did!” But they know, as he does not, that decapitating a foreign power is the worst possible move to make during tense diplomatic times.2 They quietly dispose of their rogue agent and then call their opposite numbers in the CIA through back channels. “Ah, look, this is a big, big mess,” the Russians say. “We’ve handled it from our end, but you have to take care of your end or we’ve all got terrible problems.” To this the CIA guys say, “Okay, but remember, you owe us one.” The CIA then sends in Ruby to shoot Oswald before he stops denying his guilt and starts proclaiming his status as a new hero of the Soviet Union.

All of this goes into the lap of the Warren Commission. Everyone involved knows that if the assassination is proven to be the work of a Soviet agent then, given the current international tensions, the U.S. Congress will want to declare war. And, given the nuclear capability on both sides, this could be the end of civilization. So the commission is pleased to play up all possible alternate theories: single bullets, alternate shooters, puffs of smoke, and grassy knolls. They spread the conspiracy theories as a smoke screen, then they declare Oswald to be a lone and disaffected shooter. Nothing to see here, folks, just a disturbed young man. And finally they seal all the evidence for 75 years.

Those 75 years are up in 2039, long after the fact. By lucky coincidence, the Cold War and the Soviet Union itself will have been gone for 50 years by then. So will most of the “innocent persons” who might otherwise have been incinerated in World War III. By that time, all the conspiracy theorists will be dead, too. Only the historians will be alive to care.

Gosh, I hope I live long enough to hear the truth. I’ll be 91 by then and hope I have the wits to understand what I hear.

1. I use quotes here because, while the military and industry in this country have strong links and mutual desires, no single group of men and women or any one organization, not even the Pentagon, exists to act in their interest with concerted purpose. The “military-industrial complex” is a large and varied group of enterprises, often in competition with one another and sometimes working at cross purposes, even within the Pentagon. It’s certainly not a hierarchical body with a responsible leadership like the Roman Catholic Church or the Democratic Party.

2. Okay, Kennedy and the CIA didn’t know this either, as their repeated attempts to assassinate Castro proved. But that still doesn’t make decapitation a sensible policy.

Sunday, November 16, 2014

Look Out Among the Stars

Look up at the night sky, look through a telescope, look at the thousands of images brought back by the Hubble Space Telescope and similar huge “light buckets.” What do you see? What do you see in your mind’s eye when you think of space? Vastness, emptiness, vacuum. Coldly shining stars which, up close, are actually maelstroms of searing fire, writhing gases, fractured plasma, and immense gravity. Stellar neighborhoods—our own included—which are fields of deadly radiation at all frequencies and with particles flying through at nearly the speed of light. Interstellar neighborhoods filled with dust, ice chips, and gases that are probably poisonous and definitely unbreathable.

The universe upon which we gaze is a place of chaos, silence, and death. … Or so one would imagine.

Despite this surface appearance, I believe the universe teems with life. Life is out there, waiting to meet us, maybe to greet us, maybe to eat us. But life exists. Life, this wondrous “temporary reversal of entropy”—and ultimately the consciousness that life has led to, at least here on Earth—is the whole point of having a universe.1 Otherwise it is just empty space and explosively fusing hydrogen.

Or consider the contrary proposition: that it only happened once, and it happened here. Of all the hundreds of billions of galaxies, each containing hundreds of billions of stars, only Sol, a minor sun roughly halfway out from the center of the Milky Way, and only our own Earth, a medium-sized rocky planet in Sol’s inner system, were visited by this phenomenal accident of chemistry. Everywhere else, just fire, gas, and inert dust.

That’s like thinking your family’s house out in the suburbs was the actual birthplace of the arts of cooking, reading and writing, or television simply because you first encountered them under that one roof. A toddler thinks this for about five minutes between the ages of two and four. Then he or she discovers neighboring children and the household next door.

Why do I think life is common? First, because we find its building blocks elsewhere in the solar system. Amino acids, the precursors to proteins, have been found on comets,2 which means that they were scattered—perhaps intentionally seeded—among the dusts and gases out of which our star and its planets formed. And carbon-based organic chemicals are found in quantity beyond Earth’s sky. For example, the atmosphere of Saturn’s moon Titan is rich in methane. The possibility also exists that at least some of the fossil fuels we drill from the Earth’s crust were not the products of decayed life on Earth—old swamp forests becoming seams of coal and methane, and tiny diatoms becoming pools of oil and domes of natural gas—but instead some of these hydrocarbons existed in the coalescing planetary matrix and were squeezed together during the Earth’s formation, rising toward the surface like veins of gold and uranium.

Second, if you look at the Earth today, you see a planet covered with life, teeming with life, and transformed by life. But it was not always so. The atmosphere contains breathable oxygen and the soil contains nutrients only because the first microbes and then the larger organisms that evolved from them have been softening up the rock and freeing up gases for more than two billion years.3 If you had come to Earth back then, before the first life got to work on the planet, you would have found a stony surface, sterile seas, and an unbreathable atmosphere composed mostly of nitrogen with admixtures of carbon dioxide, methane, ammonia, and water vapor.4 This is a planet shaped by life, made habitable for life by its own processes. Life is the ultimate terraforming service and, through the process of evolution, it always gets the details exactly right for the type of life that will eventually live there.

If humans were to discover this solar system from afar during an interstellar scouting expedition, we would see two obvious places to look for our kind of life. First would be the Earth itself, which we would deem a remarkable and wholly unaccountable paradise. Second would be the watery world under Europa’s icy crust. We can suppose that heat from an active inner core or gravitational kneading from nearby Jupiter keeps the water under Europa’s ice liquid and perhaps warm enough for life’s processes. Europa may even offer volcanic vents rich in minerals, like those under Earth’s oceans, capable of supporting its own active biological ecology.

Mars may once have held life, although it now seems pretty dead. Perhaps some planetary disaster killed Mars. But perhaps any early atmosphere that Mars possessed leaked away because the planet was too small, its gravity too weak, to hold onto gas molecules lighter than carbon dioxide. Perhaps Mars once had an active iron core that was able to generate a magnetosphere to deflect the solar wind, as Earth’s core does. But now Mars has a thin atmosphere and high surface radiation, unsuitable for any kind of advanced life.

Venus may once have held life, because its starting conditions were very much like those on Earth, and its orbit, while nearer to the Sun, is not so near as to account for the planet’s high ambient temperature—about 800 degrees Fahrenheit—all by itself. Venus apparently suffered some kind of runaway greenhouse accident that increased both the atmospheric pressure and the cloud base. Its atmosphere is also not very nice, being mostly carbon dioxide like Mars’s but vastly denser. The sky rains sulfuric acid because of the high concentration of sulfur dioxide. In addition, radar studies of the planet’s surface indicate suspiciously few “astroblemes,” or visible impact structures. This apparent lack of the asteroid bombardment found everywhere else in the solar system suggests that either the weather wears old craters away at a phenomenal rate, or the lithosphere lacks a tectonic plate structure like that of Earth. Without shifting plates and occasional volcanic eruptions along the plate boundaries, the heat from Venus’s core might build up until the entire surface simply melts and subsides, as if it were being groomed by some kind of planetary Zamboni machine.

As for the probability of finding life, the rest of our solar system would appear to that scouting mission as either gas giants with no discernible surface or barren rocks and balls of ice without atmospheres—Titan excepted. And both of our immediate neighbors, Mars and Venus, have interesting and perhaps tragic histories that render them inimical to life. But tragic for whom? If Earth was reshaped by life as it grew here, what influences might a different kind of life have brought to these planets?

We define life fairly closely.5 We recognize our kind of life by carbon-based chemical processes, a fragile cellular structure, and some degree of mobility and interaction with the environment. That may be an overly narrow definition. Who is to say that some kind of piezoelectric circuitry flowing through a crystal, or a chemical reaction among various metals, could not create a form of life that fully satisfies this definition, perhaps one with an active and questing intelligence?

In our journeys out among the stars, we will have to step carefully. Otherwise, we might mistake the local inhabitants for paperweights or some aggressive form of corrosion. But my bet is we’ll meet a fair number of carbon-based life forms out there, with some analogue of proteins shaping cellular structures that are active, mobile, and interacting with their environment. My guess is they’ll come right up and tap us on the shoulder—either to greet us or proceed to eat us.

1. “Or what’s a heaven for?”—in the words of Robert Browning.

2. See Found: First Amino Acid on a Comet, from New Scientist, August 17, 2009. Similar articles trace further discoveries. At the time of this posting, I expect the European Space Agency’s Philae probe to find similar chemicals on comet 67P/Churyumov-Gerasimenko.

3. See my blog post DNA is Everywhere from September 5, 2010.

4. But scientific views of the early Earth’s atmosphere differ. See, for example, Earth’s Early Atmosphere in Astrobiology Magazine from December 2, 2011.

5. See Between Life and Not-Life from November 9, 2014.

Sunday, November 9, 2014

Between Life and Not-Life

Recently I was fixing a broken clock,1 and that set me thinking about things that move by themselves but are not actually alive. The biological definition of life is quite exact: what differentiates animals, plants, fungi, and microbes from inorganic materials, even those of complex and exquisite design that happen to move—a leaf blowing on the wind, for example—is that living things can grow, change, sustain themselves, exhibit functional activity, react to their environment, reproduce their own kind, and eventually die. The broken clock, aside from once having exhibited functional activity and now being in an apparent state of death, does none of these things.

Definitions are tricky things, however. They are trying to put a precise meaning in words to a state which can easily be understood by observation and sensed in the gut, but which may be a slippery thing for the intellect to grasp. For example, many people I know have not reproduced themselves, being childless. Many people in a vegetative state cannot sustain themselves or even react to their environment. Many more people may grow and change in the barest organic sense, but not in any intellectual, emotional, or spiritual capacity. And finally, death is not proven for any of us—especially now, with more new medical techniques being developed every year—until it actually comes. So, are these cases of people who are, according to the definition, not alive?2

Modern advances in robotics and cybernetics are going to test that definition of life even more stringently. A software program can be reproduced quite easily, and it’s no stretch of the imagination to think of a program with the right internal commands—let alone the volition of an artificial intelligence—that can replicate its own code, package the result, and send it down the line into a new computing environment.3 So, would such a program qualify as alive under at least one parameter of the biological definition?
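
As a purely illustrative sketch, and nothing taken from any real malware or from my own fiction, here is roughly what such self-replication might look like in a few lines of Python. The script simply copies its own source file into a new directory, crudely meeting the “reproduce its own kind” clause; the function name and destination path are invented for the example.

```python
# Illustrative sketch only: a script that copies its own source file to a new
# location, a crude form of "reproducing its own kind." The destination
# directory and function name are hypothetical placeholders.
import shutil
import sys
from pathlib import Path

def replicate(destination: str) -> Path:
    """Write a byte-identical copy of this running script into destination."""
    source = Path(sys.argv[0]).resolve()          # the script's own source file
    target = Path(destination) / source.name
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, target)                  # the "offspring"
    return target

if __name__ == "__main__":
    print("Replicated to", replicate("./new_environment"))
```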

Software fulfills other specifications for life as well. Depending on the type of code, it certainly exhibits functional activity, can grow and change, and can react to its environment in the form of received inputs and commands and internally generated outputs and displays. One might argue that a piece of code cannot sustain itself without a computer’s central processor, memory chips, storage space, and the electricity to run these bits of hardware. But then, a human being cannot sustain him- or herself without the environment of a suitable planet or space station, externally provided pieces of hardware in terms of clothing, furniture, tools, entertainments, and other comforts, and the food—generally grown and processed by others and shipped over long distances—required to sustain the human organism.

The fact that the software has required—at least initially—a human mind and human invention to create both the code and the machine on which it runs has no bearing on the definition of life. After all, humans did not create the planet on which they live, the solar energy that drives its climate and crops, or the air they need to breathe. And humans did not create themselves from first principles, either. Questions about a creator god, or the origins of the organic chemical reactions necessary to promote molecular biology, are outside the biological definition of life.

So life according to the old biological definition may be complete for animals and plants, but insufficient to encompass the new world that technology is making for us. And it may not be sufficient for all the types of beings we might find out among the stars.

If you asked me to create an iron-clad definition of life, suitable for all purposes, I would strip the biological definition down to its barest chassis. Life is an open-ended process, reflecting functional activity that is usually, but not always, carried out by an underlying mechanism employing material substances which interact through energy inputs. And that process must be susceptible to interruption and cessation.4

Under these terms, an artificial intelligence operating on a computer chip or inside a robot could be considered alive. So could an automated factory or a fire engine. Questions about volition, free will, or freedom of action and movement are outside my stripped-down definition of life, as they are for the biological definition. After all, clams, mussels, and corals are all alive and yet have no volition or free will to do other than settle down on a rock or in the sand and perform the filter-feeding for which their bodies were designed. A domesticated horse that lives in a stable, gets hitched to a plow or wagon every day, and eats what, when, and where its human owner directs is hardly able to exercise its innate volition. And humans who are enslaved or under the psychological domination of another person have vastly diminished capacity for free will and freedom of action.

In my definition, the difference between a horse and a fire engine is that the horse can refuse the orders of its master, can fight the bit, shy back from the harness, and balk at the feedbag. The slave can decline into a psychological depression, lie down, and choose to die. But the fire engine goes where its driver directs—even if that means plowing into a brick wall at sixty miles per hour. Still, the exercise of free will is not part of the definition of life.

When we go out among the stars we are going to find many strange and wonderful things, and not all of them will be in their first flower of growth and development. We will discover decaying worlds full of automated machinery and robots, either waiting dormant or still actively functioning, but left over from the organic civilizations that invented them, perfected them, used them, and then died out either slowly or rapidly, leaving them behind. We will discover slave cultures whose biology, capacities, and expectations were manipulated into a state of perfection—perfect in terms of what the manipulators desired as to nutritional requirements, mental and physical capacities, and personal direction and volition—and then left to collapse or evolve on their own when the master race died out.

And note that in my stripped-down definition I added “but not always” to the part about an underlying substantive mechanism. This allows for the sort of life forms made of pure energy—presumably, once-organic creatures who have surpassed their physical bodies and become pure spirit—that are found in much of science fiction.5 Who is to say that patterns previously established in the electrochemical circuitry of neural nets or the silicon pathways of chip sets might not re-form and propagate outside those physical structures? Certainly, self-sustaining electrical circuits are not known in our definition of physics. But we know that radio waves propagate in complex patterns of photons outside any conducting aether or physical fluid. And who is to say that our definition of physics is so very complete?6

Perhaps one day we will discover that leaves blowing on the wind do have minds of their own.

1. No, not fancy clockwork filled with gears and springs, as in the picture—that’s just to draw your attention. I am not that mechanically gifted. Instead, it was one of those black boxes with a three-layered drive shaft and a place for a double-A battery, which then fits through any fancy clockface as part of an art project. But the principle is the same: as an object with its own functional activity, the little black box had stopped working.

2. I know, I’m treading the distinction between group characteristics of the species Homo sapiens and the individual characteristics of a John or Jane Doe. But the definition of life is also generally stated without making such a distinction.

3. That’s the basis of my early novel ME: A Novel of Self-Discovery. But you don’t have to look to fiction for self-replicating software: tapeworms, viruses, and other malware do it all the time—and with a persistence and tenacity that mimics life itself.

4. I’ve thrown in the notion of death just so we don’t have to consider the roiling plasma inside a star as some kind of life form. And come to think of it, every star grows, changes, is self-sustaining—so long as its gravity and its fusible elements remain in proper proportion—and eventually it dies out with either a bang or a cinder.

5. Not to mention a few gothic horror and ghost stories.

6. See Three Things We Don’t Know About Physics (I) and (II) from December 30, 2012, and January 6, 2013.

Sunday, November 2, 2014

The Saving Grace of Democracy

Although I am conservative by nature, my political leaning is that of a “little-D democrat.” That is, I believe in the innate wisdom of crowds and trust in the consensus of a large group of people.

In John Brunner’s 1975 novel of an internet-connected future, The Shockwave Rider, the main character at one point operates a Delphi poll. The Delphi method is a technique for establishing consensus by asking a group of experts the same question and comparing their answers to find some kind of empirical truth. In Brunner’s novel, using the vastly more powerful resources of the internet, the character posts questions online, invites thousands of average people to respond, and tabulates the results. The questions could be specific—such as “How many cars does Ford make in a year?” or “How many hospitals are there in the U.S.?”—or they could be purely speculative—such as “When will humankind have a base on Mars?” Some people will possess expert knowledge of the subject, like Ford production managers, hospital accreditation examiners, and NASA administrators. And some will just take wild guesses. Brunner’s point was that it didn’t matter. If you averaged the results, you would come eerily close to the correct answer or, in the case of speculative questions, a surprisingly reasonable estimate.1
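
To see the arithmetic at work, here is a toy simulation of such a poll, with figures I have invented rather than anything from Brunner: let each respondent guess wildly, off by as much as half in either direction, and the average still settles close to the true number.

```python
# Toy sketch of a Delphi-style poll: individually sloppy guesses, averaged,
# land close to the true value. All figures here are invented for illustration.
import random

def crowd_estimate(true_value: float, respondents: int = 10_000) -> float:
    """Average guesses that are each off by up to plus or minus 50 percent."""
    guesses = [true_value * random.uniform(0.5, 1.5) for _ in range(respondents)]
    return sum(guesses) / len(guesses)

if __name__ == "__main__":
    actual = 6_000  # a stand-in figure, say for "hospitals in the U.S."
    print(f"True value: {actual:,}  Crowd average: {crowd_estimate(actual):,.0f}")
```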

Brunner’s summation on the Delphi poll: “While nobody knows what’s going on around here, everybody knows what’s going on around here.”

The Germans call it Zeitgeist, the “spirit of the times,” or more loosely the consensus of a culture and the society it produces. This is the collected knowledge, the folk wisdom, and the expectations and limitations within which any individual operates. It’s the tide with which, or against which, any individual swims. So the Delphi poll and the consensus it samples work within a broadly defined group. You can ask 21st-century Americans about Ford’s production figures and get a pretty good reading. Try the same question with a group of Somalis or Sudanese, and you will probably be less satisfied.

This kind of collective wisdom harks back to the Lincoln quote: “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time”—where the fooling involves working against what the culture holds as common knowledge or accepted wisdom. So Lincoln was a little-D democrat, too. Any one person can be a damn fool at some time or another, but the mass of men—the common run of the country—is collectively going to be pretty smart.

But as a conservative, I also believe in the wisdom of individuals and trust in the insight that the dedicated expert may amass by training and study. I believe in individual genius, personal responsibility for actions and intentions, and the virtues of contrarian thinking. So I am also a “little-L libertarian.” Furthermore, as a thorny individualist myself, I don’t relish my intentions and actions being judged, let alone constrained, by a crowd of people who operate on guesses and instinct coupled with the fancies and folktales their grandmothers whispered to them in their childhood.

It’s a conundrum—and one that faces all western technological societies most strongly in the 21st century.

The commonest form of government in the developed countries of the West is some kind of representational democracy. Local people elect representatives to go to Washington, London, Berlin, or Tokyo and sit in Congress, Parliament, the Bundestag, or the Diet to decide the issues of the day and make laws for the whole country. Some kind of overall leader may exist, whether a prime minister chosen from among the legislators or a separate executive elected at large, but the power still resides in the collected group of representatives. This form of democracy developed, not from any philosophical principle, but from pure logistics. In the 17th and 18th centuries, when these systems were codified, it was actually impossible to bring all the citizenry of a sprawling, agrarian country together to hear and debate issues. The direct and personal democracy of the village, the tribe, or the Greek city-state did not work among millions of people scattered across a whole country, let alone a continent.

Even then, it might take weeks for an elected representative to travel between his constituency and the national capital, and to communicate with the electorate from day to day in any detail on the current issues—whether by letters, newspaper articles, or published pamphlets—was simply impractical. So each locale voted for its representative, sent him off to the big city, and trusted in his judgment.2 As issues became more complex, as laws became more exacting and tried to encompass more specifics and exceptions, the staffs of the individual legislators had to grow in order to keep up with the tide. And the staffs of the various cabinet posts, government departments, and other agencies and commissions charged with executing and administering the laws also grew. Our laws are now written by staffs recruited from lobbyists, interest groups, and lawyers, and executed by departments full of civil servants and registered contractors.

Today, modern elective governments are pulled in two directions. On one side, the actual power of decision in the execution and enforcement of all these complex laws is in the hands of non-elected experts who were hired into those government departments, agencies, and commissions. Presumably, they were hired because they had the education, skills, and insight to deal with the actual cases that might arise. So one could hope that people working in the Department of Energy know something about electricity, gas, nuclear power, transmission, and science, and that people working in the Department of Education know something about children, teaching, and psychology. If I am right about trusting educated and dedicated experts, then we would seem to be in good hands.

On the other side of the question, today our modern, computerized communication technologies like the internet, social media, polling techniques, and data mining tend to make the representative nature of modern democracy almost charmingly obsolete. Why pick an individual to speak for a locale, pack him on a horse, and send him off to the big city when any citizen with a telephone and a computer can dial into a polling service, post a blog or comment on one, share in a viral meme, and make his or her views instantly known? We are coming back around to the Greek ideal, where every citizen can pick up an electronic potsherd, make his or her mark, and put it in the computerized jar. If I am right about trusting the wisdom of crowds, then we might be in even more capable hands by working through the new digital democracy.

Except … except … we can see that kind of direct democracy at work in California, where statewide propositions are launched by petition to make laws that bypass the state senate and assembly. Sometimes good laws are made and survive judicial review. But sometimes mischief is set afoot that requires eventual backtracking and rebuilding to set things right. And sometimes the result is just a nuisance.3

So a government of experts might be preferable … except that even the smartest, most educated, most distinguished genius can still have his or her moments of foolishness or suffer from personal quirks that render a considered opinion or decision foolish in one dimension or another. And an expert who has been granted authority of position and the power of decision can lose all humility and come to believe that he or she exercises some kind of divine right over the rest of us. Arrogance is a special province of fools.

So the conundrum remains. Should we trust the wisdom of the many? Or of the few?

I don’t have an answer for this. But sometimes the issue makes me want to retreat to a high mountain valley that has a good water source and defendable passes, and then barricade the road behind me. I would take along only my family and a few friends—but then how are we to organize and govern ourselves? Still a conundrum. …

1. This is not unlike the old carnival challenge of guessing the number of jellybeans in a jar. Any one person’s guess might be laughably far off, but the average of all guesses would fall within a few beans of the exact count. And about a decade ago the magazine Popular Science ran a Delphi poll on its back cover, asking readers to answer speculative questions about science and technology. Although I never saw their published results, I’m sure the editors were looking for insight into the future.
       Something similar takes place in any bookmaking operation: by taking bets on a sporting proposition—such as which horse is faster or which fighter hits harder and has more stamina—the bookie establishes the odds or probabilities based on the wisdom of a crowd of gamblers. Some of them will bet with a keen eye for horseflesh or human heft. Some will bet because they like the color of the jockey’s silks or the look in the fighter’s eye. And some will bet knowing that the fix is in. … It all averages out.

2. Here I would use the gender-equivalent “him or her,” except in this case women were not part of the equation until the early 20th century—and our society bears more shame for that.

3. For an example of mischief, consider Proposition 13 from 1978, which effectively freezes property tax rates for people who don’t move around much but settle in one home for years at a time. Intended to control public spending by slowing the power of taxation, that initiative has created an imbalance in public participation which leaves many cities and counties still struggling.
       For a nuisance, consider Proposition 65 from 1986, which requires every product and public building to post a standardized notice about possible toxins found inside. Since we all live in a dangerous world, and modern science can detect chemicals in increasingly minute concentrations, the warnings are posted everywhere. No sane person stops at the door because of a Proposition 65 warning anymore, and so we are desensitized to all possible hazards.

Sunday, October 26, 2014

Storytelling the Future

Back in my university days we had a time, around my senior year, when campus radicals and their calls for “relevance” in the curriculum inspired a number of new, unorthodox, and generally short-lived courses of study. It was an admittedly silly season, when serious professors tried perhaps unserious things. For example, I took a course in magic and witchcraft that was actually a hybrid of comparative literature and anthropology and was remarkably instructive for a young writer interested in science fiction and fantasy.

I also took a course with my mentor, Philip Klass, about predicting the future, and this was more grist for my science fiction imagination.1 We read from noted futurists like Alvin Toffler and historians and economists like Robert Heilbroner. We studied probability. We learned about trend analysis and about the danger of relying too heavily on current trends.2 That course became an overview of historical analysis and was useful to me because it knit together ideas from many of the required courses I’d taken over the years in the College of Liberal Arts.

As a science fiction writer, I tend to read a lot of history as well as science. But I don’t dwell on—or live in—the past. I read historical novels with pleasure, but I’m not much interested in writing them.3 My entire focus is directed forward. Personally, I’m always anticipating and living in the next six months to a year, rather than looking backward over my life. Politically and economically, I look forward through the next couple of decades—even beyond the years when I’m likely still to be alive. So the problems that people around me perceive as most important right now I generally see as either hiccups or aberrations, to be fixed through advancing technology or ongoing political processes. I’m more concerned with the problems and opportunities that are coming down the road.4 I once quipped to a colleague at work that I actually commute here each day from about thirty years into the future. That is where my mind lives.

Predicting the future in general is hard, as I learned in that class back at the university. Predicting it with great accuracy—calling for precise dates and descriptions of events and their players—is impossible. But seeing the broader curve, knowing which way it bends, and understanding that for every sudden rise you can expect a comparably sudden fall … that kind of sorcery is always possible. Commodities traders do it every day, and the good ones make money at it.

Writers do this kind of prediction, too. The processes of plotting, outlining, and then writing a novel are acts of projection. The writer takes a starting situation—the main character, his or her past life and current prospects, and the prehistory of the story’s setting—and then projects from there what the character will do and what will happen next. And from that point, the writer then projects the succeeding set of circumstances and reactions … on and on, until the story comes to an end. Plotting and outlining are like viewing the broad curve and bold strokes of a future history. The actual writing is like living moment-to-moment with the character and experiencing that future as it unfolds.

The writer does have one advantage that the futurist lacks: the past of any character is not fixed and immovable, as it would be in a history. True, the historical circumstances of a story set in contemporary times may be fixed,5 but the character’s personal history, upbringing, education, and even his or her personality itself are still fluid. The writer can go back and change the precursors to the story in order to make any desired outcome logical and necessary.

But that’s not what it feels like as the writer works on the story. The characters must be “real people” in the writer’s imagination. Details can certainly be retrofitted to create drama and foreshadowing—and to manipulate the reader’s expectations—but the main characters and major events in any story must have a degree of solidity, of fixed and opaque nature, or else the whole process of writing his or her experiences falls apart in a flurry of forced choices, logical inconsistencies, and factual incoherence.

Like the futurist, the novelist must consider many factors in creating a “future history” for his or her characters. These include the character’s past actions, current intentions, and personality traits; the actions and intentions of other characters in the story; the probable events and dangers inherent in the setting and the time covered by the story; and the intended reader’s level of understanding, sensitivities, and capacity for disbelief. Miss one or two details, or get them wrong, and the reader might write them off as an annoyance or register subtle dissatisfaction with the novel. Miss a major story arc or get a significant detail out of place, and you’ll have the reader sputtering, “But, but, but …” and perhaps even throwing the book across the room.

The future does not yet exist, until we live through it, and that’s what makes predicting the future so exciting and dangerous. The story of a novel does not yet exist, until the author sets the words—and the images and actions they represent—in final order, and that’s what makes writing so exciting and dangerous.

Books, whether set in the past, present, or future, are actually histories that unfold first in the writer’s mind and then in the reader’s. The narrative takes us to a place and time that may never have existed and gives us a chance to meet people who never lived. But for the book to be successful, the reader must feel—at least for the moments of immersion in the story—that it is a true record of events, and that the characters actually lived in the story.

I don’t know of any act so perfectly satisfying as creating out of pure imagination and common English words on the page an actual, living, breathing, beating piece of imaginative history. Maybe the work of painters and composers—carried out in the different media of color and sound—or of film directors—who work both with a script in words and through the talents of actors, set dressers, wardrobe designers, and location scouts—can approach this sensation of creating something out of nothing. But for the writer it is completely enveloping, because the novel includes colors and sounds, smells, location and action, personal reactions, and big dollops of believable history along with the story.

Of course, another way of looking at the process is that the writer is simply a bald-faced liar, creating stories out of imagination. But as with any successful liar, the stories have to work. They must account for all details, include just enough of that ah-ha! quirkiness to ring true, but not offend the hearer’s or the reader’s sense of logic and proportion.

As such, effective storytelling can be a lot harder than trying to predict where the stock market will be a year from now, or when and where the next war will start.

1. Philip Klass wrote science fiction under the pen name William Tenn and created remarkable and yet warmly human stories about alternate realities.

2. It’s called the “if this goes on” fallacy, where the futurist fails to consider other, countervailing influences. I had a chance to put this in action—at least in the privacy of my own head—while sitting in a quarterly departmental meeting soon after joining the biotech company. Our division vice president was reporting on the currently strong sales of the reagents sold to support processing with the company’s genetic analysis equipment. In the past couple of years, most of that equipment had been acquired by the Human Genome Project and other laboratories attempting to sequence the genome. The vice president’s chart showed this bump in sales over the previous two years and projected from its peak a straight, dotted line right up into the stratosphere. Our future was secure! We were going to be rich! But I sat there thinking that the first draft of the genome had just been published, so this burst of activity was probably going to end. True, in the long run we did sell lots of reagents, but not along the same sales curve as the run-up to the Human Genome Project.
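
       The trap is easy to reproduce on paper. Here is a rough sketch with invented figures standing in for those reagent sales: a straight-line projection of the run-up keeps climbing, while the underlying demand flattens once the one-time project ends.

```python
# Rough sketch of the "if this goes on" fallacy, using invented sales figures:
# a straight-line projection of a temporary run-up overshoots a demand curve
# that levels off once the one-time project is finished.
def linear_forecast(history: list[float], periods_ahead: int) -> float:
    """Extend the average per-period growth of the history in a straight line."""
    growth = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + growth * periods_ahead

if __name__ == "__main__":
    runup = [10.0, 14.0, 18.0, 22.0]                         # hypothetical quarterly sales
    print("Naive projection:", linear_forecast(runup, 4))    # the dotted line keeps climbing
    print("Flattened reality:", 24.0)                        # demand plateaus after the draft genome
```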

3. For example, when I thought about writing a biography of Julius Caesar, because a lively and interesting text did not seem to exist at the time, I ended up re-imagining Caesar’s life projected into the American future in First Citizen. In fact, the only time my writing has ever delved into the past was my two recent works of general fiction, The Judge’s Daughter and The Professor’s Mistress, which were attempts to look at influences in the mid- to late-20th century and how they had shaped my life.

4. My latest novel, Coming of Age, which has just been published in two volumes, is much more my kind of story. Through stem-cell technologies, the two main characters live for another century beyond the traditional “three score and ten.” To write that, I had to project the next hundred years of American history. Whoo-eee!

5. But even then the writer can take certain liberties, especially in the realm of science fiction, where it’s standard practice to create alternate histories.