Sunday, October 26, 2014

Storytelling the Future

Back in my university days we had a time, around my senior year, when campus radicals and their calls for “relevance” in the curriculum inspired a number of new, unorthodox, and generally short-lived courses of study. It was an admittedly silly season, when serious professors tried perhaps unserious things. For example, I took a course in magic and witchcraft that was actually a hybrid of comparative literature and anthropology and was remarkably instructive for a young writer interested in science fiction and fantasy.

I also took a course with my mentor, Philip Klass, about predicting the future, and this was more grist for my science fiction imagination.1 We read from noted futurists like Alvin Toffler and historians and economists like Robert Heilbroner. We studied probability. We learned about trend analysis and about the danger of relying too heavily on current trends.2 That course became an overview of historical analysis and was useful to me because it knit together ideas from many of the required courses I’d taken over the years in the College of Liberal Arts.

As a science fiction writer, I tend to read a lot of history as well as science. But I don’t dwell on—or live in—the past. I read historical novels with pleasure, but I’m not much interested in writing them.3 My entire focus is directed forward. Personally, I’m always anticipating and living in the next six months to a year, rather than looking backward over my life. Politically and economically, I look forward through the next couple of decades—even beyond the years when I’m likely still to be alive. So the problems that people around me perceive as most important right now I generally see as either hiccups or aberrations, to be fixed through advancing technology or ongoing political processes. I’m more concerned with the problems and opportunities that are coming down the road.4 I once quipped to a colleague at work that I actually commute here each day from about thirty years into the future. That is where my mind lives.

Predicting the future in general is hard, as I learned in that class back at the university. Predicting it with great accuracy—calling for precise dates and descriptions of events and their players—is impossible. But seeing the broader curve, knowing which way it bends, and understanding that for every sudden rise you can expect a comparably sudden fall … that kind of sorcery is always possible. Commodities traders do it every day, and the good ones make money at it.

Writers do this kind of prediction, too. The processes of plotting, outlining, and then writing a novel are acts of projection. The writer takes a starting situation—the main character, his or her past life and current prospects, and the prehistory of the story’s setting—and then projects from there what the character will do and what will happen next. And from that point, the writer then projects the succeeding set of circumstances and reactions … on and on, until the story comes to an end. Plotting and outlining are like viewing the broad curve and bold strokes of a future history. The actual writing is like living moment-to-moment with the character and experiencing that future as it unfolds.

The writer does have one advantage that the futurist lacks: the past of any character is not fixed and immovable, as it would be in a history. True, the historical circumstances of a story set in contemporary times may be fixed,5 but the character’s personal history, upbringing, education, and even his or her personality itself are still fluid. The writer can go back and change the precursors to the story in order to make any desired outcome logical and necessary.

But that’s not what it feels like as the writer works on the story. The characters must be “real people” in the writer’s imagination. Details can certainly be retrofitted to create drama and foreshadowing—and to manipulate the reader’s expectations—but the main characters and major events in any story must have a degree of solidity, a fixed and opaque nature, or else the whole process of writing their experiences falls apart in a flurry of forced choices, logical inconsistencies, and factual incoherence.

Like the futurist, the novelist must consider many factors in creating a “future history” for his or her characters. These include the character’s past actions, current intentions, and personality traits; the actions and intentions of other characters in the story; the probable events and dangers inherent in the setting and the time covered by the story; and the intended reader’s level of understanding, sensitivities, and capacity for disbelief. Miss one or two details, or get them wrong, and the reader might shrug them off with mild annoyance or register a subtle dissatisfaction with the novel. Miss a major story arc or get a significant detail out of place, and you’ll have the reader sputtering, “But, but, but …” and perhaps even throwing the book across the room.

The future does not yet exist, until we live through it, and that’s what makes predicting the future so exciting and dangerous. The story of a novel does not yet exist, until the author sets the words—and the images and actions they represent—in final order, and that’s what makes writing so exciting and dangerous.

Books, whether set in the past, present, or future, are actually histories that unfold first in the writer’s mind and then in the reader’s. The narrative takes us to a place and time that may never have existed and gives us a chance to meet people who never lived. But for the book to be successful, the reader must feel—at least for the moments of immersion in the story—that it is a true record of events, and that the characters actually lived in the story.

I don’t know of any act so perfectly satisfying as creating out of pure imagination and common English words on the page an actual, living, breathing, beating piece of imaginative history. Maybe the work of painters and composers—carried out in the different media of color and sound—or of film directors—who work both with a script in words and through the talents of actors, set dressers, wardrobe designers, and location scouts—can approach this sensation of creating something out of nothing. But for the writer it is completely enveloping, because the novel includes colors and sounds, smells, location and action, personal reactions, and big dollops of believable history along with the story.

Of course, another way of looking at the process is that the writer is simply a bald-faced liar, creating stories out of imagination. But as with any successful liar, the stories have to work. They must account for all details, include just enough of that ah-ha! quirkiness to ring true, but not offend the hearer’s or the reader’s sense of logic and proportion.

As such, effective storytelling can be a lot harder than trying to predict where the stock market will be a year from now, or when and where the next war will start.

1. Philip Klass wrote science fiction under the pen name William Tenn and created remarkable and yet warmly human stories about alternate realities.

2. It’s called the “if this goes on” fallacy, where the futurist fails to consider other, countervailing influences. I had a chance to put this in action—at least in the privacy of my own head—while sitting in a quarterly departmental meeting soon after joining the biotech company. Our division vice president was reporting on the currently strong sales of the reagents sold to support processing with the company’s genetic analysis equipment. In the past couple of years, most of that equipment had been acquired by the Human Genome Project and other laboratories attempting to sequence the genome. The vice president’s chart showed this bump in sales over the previous two years and projected from its peak a straight, dotted line right up into the stratosphere. Our future was secure! We were going to be rich! But I sat there thinking that the first draft of the genome had just been published, so this burst of activity was probably going to end. True, in the long run we did sell lots of reagents, but not along the same sales curve as the run-up to the Human Genome Project.

3. For example, when I thought about writing a biography of Julius Caesar, because a lively and interesting text did not seem to exist at the time, I ended up re-imagining Caesar’s life projected into the American future in First Citizen. In fact, the only time my writing has ever delved into the past was my two recent works of general fiction, The Judge’s Daughter and The Professor’s Mistress, which were attempts to look at influences in the mid- to late-20th century and how they had shaped my life.

4. My latest novel, Coming of Age, which has just been published in two volumes, is much more my kind of story. Through stem-cell technologies, the two main characters live for another century beyond the traditional “three score and ten.” To write that, I had to project the next hundred years of American history. Whoo-eee!

5. But even then the writer can take certain liberties, especially in the realm of science fiction, where it’s standard practice to create alternate histories.

Sunday, October 19, 2014

A God I Can Believe In

I’ve said many times before that I’m an atheist.1 That is, I do not credit the existence of any supernatural being, any intelligence or cause of action, which stands outside of time and the laws of physics, controls the birth and direction of the universe, and responds to the actions, prayers, pleadings, and moral consequences of human beings here on Earth. But that does not mean I necessarily think people who do believe in an actual, living, personal god are foolish, deluded, or less discerning. Rather, I imagine I lack some gene for the neurological mechanism which lets people perceive a spiritual world beyond their physical senses and respond to invisible beings outside themselves, whether gods, angels, devils, or some other manifestation of this spiritual world.

That also does not mean I would discredit the historical, literary, or social importance of this vision of a god or gods as having a powerful effect on people’s decisions, imaginations, and actions. Something exists that drives human belief, and as a rational, thinking animal I must react to and account for it.

I can accept a god or gods not as actual spiritual beings apart from us, but as the embodiment of humanity’s, or a society’s, or a person’s highest conception of “the good.” And by good I mean all of the things a human being can value and strive for that preserve us from chaos, evil intent, and personal futility and annihilation. Among the underpinnings of “the good,” I would place ideas of reciprocity and fairness between people; of justice and karmic retribution with regard to actions; of loving kindness and acceptance in dealing with individuals who do not happen to be our own kith and kin; of striving against and persevering in the face of adversity; and of sharing with and giving to those who are in need. Without these attributes, we are no more than wolves tearing at the stranger’s throat with our teeth while trying to disembowel him with our claws.

In this sense, a god is an ideal for the individual to strive to attain. This god is a vision, not of humanity as the big-brained, thin-skinned, opposable-thumbed animal that we physically and biologically happen to be, but as the perfectible, spiritual, inspired creatures that our expanded intellect, refined emotional sensitivity, personal sense of self-awareness, and future-oriented projective capacity promise we might become.

Concepts of godliness, of higher striving, of right and wrong action, of charity and forgiveness, and all the other attributes of “the good” are a powerful tool for breaking the wolf-spirit that lives in every human child and training that young, self-centered mental and emotional complex to become an accepting, committed, and useful member of his or her society. Perhaps other means exist to achieve this transformation—perhaps through pure rationality and observations drawn from the laws of mathematics and physics—but none is so directly applicable to the human psyche as instilling the belief in an all-powerful parent figure who exists apart from the individual’s biological and humanly fallible parents, who demands right action and good intention from each of us, and who will punish wrongdoing and bad intentions in ways beyond our immediate observation and testing. That is, a spirit in the sky whom you cannot see or question directly, who demands your obedience and, if you don’t comply, will send you to a burning place for all eternity in that unknown time after you die.

Once the individual as a child has absorbed and internalized these teachings—the precepts of right and wrong, forgiveness, charity, and all the rest that comprise “the good”—and so become a stable, contributing member of his or her society, then the mythology of the omniscient parent figure can be allowed to fade under the weight of doubts, counterarguments, and cynical observations which we acquire through comparative education, critical thinking, and emotional testing. The teachings will still exist, and the emerging adolescent and ultimately the adult will know what the “good” is without needing to be constantly reminded of an all-seeing supernatural watcher or the prospect of an eternity in burning fire.

Something similar to this idea is found in Buddhism with the concept of a person’s “Buddha nature.” This is the core person, the undefiled person, the being that lives deep in our consciousness, beneath our changing opinions, random ideas, and restless seeking after advantage. This deep being is naturally attuned to the universe and its ways. It is receptive to ideas of reciprocity, balance, acceptance, peace, loving kindness, and other elements of “the good.” This is the person we were meant to be before we became enmeshed in the chaotic disturbances and passions of life and went off to pursue vain and fanciful things. Buddha nature allows any human being to reach enlightenment.

The difference between these two conceptions—Buddha nature and the internalized godhead of “the good”—is that the internalized god is something the child has to be taught in order to separate him or her from the toddler’s demanding wolf-spirit, while Buddha nature is supposedly something every human being is born with and which exists apart from his or her socialization. I can live with this unresolved difference, although I find it hard to imagine how a child raised by wolves and retaining the wolf-spirit would ever uncover that supposedly preexisting and peaceful Buddha nature. I think the process would be much easier if one were instead raised by Buddhists.

In this view of god as internalized precepts of “the good,” what purpose does prayer then serve? Certainly, the kinds of prayer that resemble pleading and petitioning a supernatural being who stands outside time and the laws of physics have no effect. “Please send me a pony.” “Please don’t let it rain on my wedding day.” “Please help my little sister recover from fever.” All such requests, attempting to alter the chains of cause and effect, tempered by probability and random chance, which govern human affairs, are in vain. But such thoughts do serve to focus the mind on what the individual actually wants or should want, remind him or her of the things that are really important and necessary, and frame the mind to accept—through reference to those internalized notions of “the good”—what is actually going to happen.

Prayer as personal reflection, as the search for guidance during times of trouble, as the seeking after acceptance and peace—this kind of prayer is a direct reference to those internalized ideals. These prayers are the person asking to be reminded of what he or she already knows and may have temporarily forgotten. They are a reference to our higher natures and the conception of human beings as closer to gods and angels than to dumb, suffering brutes.

The god I can believe in is a reflection of the human psyche and the processes that bend it from the wild, self-centered, animal nature of the just-born human into a temperate, socialized, reflective member of an interdependent group, and then a person on the way to becoming wise. This is a god who has no extant form but exists in the imagination inside the human mind. And that is a far more powerful place from which to operate than any existence outside of time, the laws of physics, and the human frame of reference.

1. See also Believer or Seeker? from December 29, 2013, A New God for the Scientific Age from May 26, 2013, and What I Believe from March 25, 2012.

Sunday, October 12, 2014

Continuing Mysteries of the Genome

I find the mechanics of life itself endlessly fascinating. Consider just DNA and its ability to turn a coded string of molecular modules into a complete and functioning physical structure.1

Back in the early 1950s, when Watson, Crick, and Franklin were defining the DNA molecule’s helical structure through x-ray crystallography, concepts around the idea of inheritance were simple and straightforward. From the work of Gregor Mendel a century earlier, we understood that everyone has at least two sets of inheritable factors—one from the mother, another from the father—and this allowed children to turn out similar to, but not always identical with, either of their parents. And molecular biologists could readily link these factors to the two sets of chromosomes that were microscopically visible inside the nucleus of every cell. At that time the “Central Dogma” of genetics decreed that information flowed only in one direction: that DNA in the chromosomes inside the nucleus is transcribed into messenger RNA, which goes out into the cell body, where it is translated by molecular machines called ribosomes that assemble amino acids into proteins. It all seemed quite obvious.

A decade later, by the mid-1960s—and through the efforts of various researchers in universities and laboratories around the world—we had “cracked the code.” That is, we knew which DNA or RNA bases, arranged in groups of three called codons, specified which amino acids would go into a protein and in what order. And when that amino acid string was allowed to fold naturally, according to the charges and chemical affinities inherent in its molecular structure, the protein would form three-dimensional bits of organic material that could build up the body’s cells, mediate its various chemical reactions, and carry signals back and forth throughout the organism.
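
To make the codon idea concrete, here is a minimal sketch in Python of how a coding sequence gets read three bases at a time. The table is deliberately abbreviated (the real genetic code has 64 entries), and the sample sequence is invented purely for illustration.

```python
# A minimal sketch of translation: read a DNA coding sequence three bases
# at a time and look each codon up in (part of) the genetic code.
# The table is deliberately abbreviated; the full code has 64 entries.

CODON_TABLE = {
    "ATG": "Met",   # also the start codon
    "GCT": "Ala", "GCC": "Ala",
    "AAA": "Lys", "AAG": "Lys",
    "TGG": "Trp",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Translate a coding sequence, codon by codon, until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):            # step through the reading frame
        residue = CODON_TABLE.get(dna[i:i + 3], "???")
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("ATGGCTAAATGGTAA"))   # ['Met', 'Ala', 'Lys', 'Trp']
```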

After another decade, by the mid- to late-’70s, and again with the work of an ever-growing army of researchers, we were sequencing whole genomes—that is, reading off their base pairs in order. To be sure, we started small, with microbes and the simplest organisms. It would be another decade or so, up to the early ’90s, before researchers were ready to tackle the human genome, laid out in some 3.2 billion base pairs along the 23 chromosomes in each human cell’s nucleus.

The original Human Genome Project was established in research facilities around the world. They were able to pull genes out of the chromosomes because the genetic sequence always included a start code (ATG) and a stop code (which could be TAA, TGA, or TAG). Find a start code and a stop code, and anything in between them was a gene. It was like hooking fish in a lake: find one, reel it in, and go look for another. But the process was expensive and took a long time. Researchers figured they would need about fifteen years to piece together the entire human genome.2
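
For the curious, here is a rough sketch in Python of that fishing expedition, reduced to its cartoon essentials: scan for an ATG, then read in-frame until a stop codon. Real gene-finding has to deal with both strands, multiple reading frames, and introns, so treat this only as an illustration of the idea.

```python
# A cartoon of the "fishing" approach: find an ATG start codon, then read
# in-frame until one of the three stop codons. Illustration only.

STOP_CODONS = {"TAA", "TGA", "TAG"}

def find_orfs(dna, min_codons=2):
    """Return candidate gene sequences between an ATG and an in-frame stop."""
    orfs = []
    for start in range(len(dna) - 2):
        if dna[start:start + 3] != "ATG":
            continue
        for i in range(start + 3, len(dna) - 2, 3):   # stay in the same frame
            if dna[i:i + 3] in STOP_CODONS:
                if (i - start) // 3 >= min_codons:
                    orfs.append(dna[start:i + 3])      # include the stop codon
                break
    return orfs

print(find_orfs("CCATGGCTAAATGGTAACC"))   # ['ATGGCTAAATGGTAA']
```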

Then along came Craig Venter, a biologically oriented entrepreneur who had a unique way of looking at things. He asked why you would bother to hook those fish one at a time. Instead, why not just drain the lake and pick up all the fish at once? He approached Applied Biosystems—a company that made tools for genetic analysis, including the first protein sequencers and later several types of gene sequencers and synthesizers—about starting an effort to drain the lake. The result was a sister company to Applied Biosystems called Celera, based on the Latin root for “fast.”

Essentially, Venter and his team chopped the entire human DNA into tiny, random pieces, each about 50 bases long. Then they sequenced all these fragments to determine each string’s base pairs—that is, the A’s, C’s, G’s, and T’s that make up the genetic code. Finally, they fed all those millions of tiny sequences into a supercomputer and let it mull over them. The computer was programmed to find all the duplicated and overlapping letter strings and put them together into longer and longer pieces. This approach worked so well that the research centers of the Human Genome Project had to adopt it or be left out of the running. Two years later—in the year 2000, when I joined Applied Biosystems—the first draft of the human genome had just been published from among five test subjects at a cost of about $4 million.3
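
As a toy illustration of draining the lake, here is a sketch in Python of the simplest possible overlap assembly: repeatedly merge the two fragments that share the longest end-to-end overlap. The reads are invented and far shorter than real ones, and actual assemblers are vastly more sophisticated, but the principle is the same.

```python
# A toy version of shotgun assembly: keep merging the pair of fragments
# with the longest end-to-end overlap until one sequence remains.
# Real assemblers handle errors, repeats, and millions of reads; this doesn't.

def overlap(a, b):
    """Length of the longest suffix of a that matches a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        # find the best-overlapping ordered pair of fragments
        k, i, j = max(((overlap(a, b), i, j)
                       for i, a in enumerate(frags)
                       for j, b in enumerate(frags) if i != j),
                      key=lambda t: t[0])
        merged = frags[i] + frags[j][k:]                  # stitch them together
        frags = [f for n, f in enumerate(frags) if n not in (i, j)] + [merged]
    return frags[0]

reads = ["GCTAAATG", "ATGGCTAA", "AATGGTAA"]
print(assemble(reads))   # ATGGCTAAATGGTAA
```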

The big surprise at the time was that not all the genome was made up of genes—if you defined a “gene” by the Central Dogma as a sequence that coded for a protein. Only about 10% of the 3.2 billion base pairs formed this kind of gene. The other 90% seemed to be nonsense or “junk.” The researchers at the time just assumed this junk was the crumbling sequences of genes from early in our evolutionary history—that is, genes left over from our microbe, fish, lizard, mouse, and primate ancestry, representing proteins that were no longer used in human biology and sequences that were slowly mutating into mush. Interestingly, if the original Human Genome Project had gone to completion with its fishing expedition using just start and stop codes, they might never have noticed this disproportion between genes and junk.

Another surprise was the ubiquitous use that the human genome makes of what’s called alternative splicing. Most genes consist of a promoter region upstream of the start code, and then after the start code comes the sequence that codes for the specified protein, followed by the stop code. But that coding sequence doesn’t always come in one piece. It often consists of an expressed part, called an exon, and a non-expressed part that intrudes between the coding parts, called an intron. At first, introns looked like just more junk interfering with the gene’s coding. But molecular biologists quickly figured out that many proteins are alike in having similar structural parts. By knitting together different patches of exons, presumably under the instruction of the introns, a single gene could be used to make many different but related proteins. Pretty damn clever. Efficient, too. Evolution—or, if you prefer, God—had invented the principle of modular construction long before human engineers ever thought of it.
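
A small sketch in Python may help picture the modular construction. The exon sequences and splice patterns below are invented, but they show how one gene’s parts can be stitched into more than one related transcript.

```python
# A schematic of alternative splicing: one gene's exons stitched into
# several related messenger RNAs. Sequences and splice patterns are invented.

exons = {
    "E1": "ATGGCA",   # shared start (ATG)
    "E2": "GTTCCA",   # optional middle module
    "E3": "AAGTGA",   # shared end (TGA stop)
}

splice_forms = [
    ["E1", "E2", "E3"],   # full-length transcript
    ["E1", "E3"],         # shorter variant that skips exon 2
]

for form in splice_forms:
    transcript = "".join(exons[name] for name in form)
    print("+".join(form), "->", transcript)
# E1+E2+E3 -> ATGGCAGTTCCAAAGTGA
# E1+E3 -> ATGGCAAAGTGA
```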

But there still was the problem of all that “junk DNA” in the genome. I remember one day walking across the Applied Biosystems campus with one of our chemists, who said she flat-out didn’t believe in junk. The body spends too much energy copying those useless sequences every time a cell divides, she said. Those sequences had to mean something.4

Along about 2004 we began hearing about “microRNAs.” These were fragments of RNA only about 50 bases long that seemed to interfere with gene expression. A plant whose genetic code produced blue flowers might instead produce only white flowers if you added a certain microRNA sequence to its cell nucleus. So microRNAs had something to do with the regulation of genes. Researchers quickly determined that small fragments of RNA annealed to the promoter region upstream of a gene and to the introns inside the gene splices in order to tell the DNA when and how to express the messenger RNA strand that would go out into the cell body to make a protein. So the Central Dogma was stood on its head: sometimes DNA transcribes into little bits of RNA that go and tell other DNA sequences when to start making their proteins.

Not long after this, Eric Davidson at the California Institute of Technology demonstrated how the process of promoting genes functioned in differentiating cells and creating divergent tissues during the development of sea urchin embryos into complete organisms. And similar processes presumably function in all other animals and plants as well. Suddenly, it became clear that the 10% of the genome that codes for proteins is just the body’s parts list. The other 90% is the body’s interactive assembly manual.

A few years later we started to hear about the “epigenome.” DNA and RNA were not the whole story, it seems, because other chemicals—specifically, a methyl group, CH3—could become involved with the microRNA control of gene expression. Promoter regions that became clogged with methyl groups no longer accepted their microRNAs, and the genes would become inactive. This might have seemed like an accident, some kind of environmental contamination, except that the cell also produces an enzyme, methyltransferase, which copies the pattern of methylation from one DNA strand to the next as the cell divides. If blocking the expression of a gene is an accident, it’s one the body has an interest in preserving.

Having a particular DNA sequence in your genome is no guarantee that a particular gene will be activated or a particular protein will get produced. And this makes sense because the DNA in each cell nucleus is the same, but not every protein is needed by every cell and tissue type. Liver cells need to make proteins necessary to their function, but those proteins would be useless and perhaps even toxic if they began appearing in a brain or muscle cell. So acquiring methylation and producing only certain microRNAs and not others is the way cells differentiate and stay different during fetal development and on into childhood.5
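
Here is a cartoon of that regulatory picture in Python. Every cell carries the same parts list, but in this toy model a gene is expressed only if its promoter is unmethylated and the cell supplies the matching regulatory RNA. The gene names and control patterns are illustrative only, not real regulatory data.

```python
# A cartoon of differentiation by gene regulation: every cell holds the same
# genes, but in this toy model a gene is expressed only if its promoter is
# unmethylated AND the cell supplies the matching regulatory RNA.
# Gene names and control patterns are illustrative, not real data.

GENES = ["albumin", "myosin", "neurofilament"]

cells = {
    "liver":  {"methylated": {"myosin", "neurofilament"}, "reg_rna": {"albumin"}},
    "muscle": {"methylated": {"albumin", "neurofilament"}, "reg_rna": {"myosin"}},
    "neuron": {"methylated": {"albumin", "myosin"},        "reg_rna": {"neurofilament"}},
}

def expressed(cell):
    return [g for g in GENES
            if g not in cell["methylated"] and g in cell["reg_rna"]]

for name, cell in cells.items():
    print(name, "expresses", expressed(cell))
# liver expresses ['albumin']
# muscle expresses ['myosin']
# neuron expresses ['neurofilament']
```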

And now—and as part of the reason for this meditation—we have just learned that inheriting a particular genetic sequence from your parents is not the only way you can acquire a mutation. A recent article in Science magazine, “Harmful Mutations Can Fly Under the Radar,” suggests that genetic mutations which occur while the embryo is still developing and its cells are differentiating may then appear in one or more parts of the body but not in every part. This is a process called “mosaicism,” because the distribution of mutated and non-mutated cells can resemble a patchwork, mosaic design throughout the body’s tissues. Why is this important? Because a simple cheek swab of cells that researchers then sequence to show your genome or to look for a particular set of mutations may not show everything going on inside your body. Mutations that could be causing a disease condition or susceptibility somewhere else in the body might not show up in your mouth. It also means that examining the genetic profiles of parents may not indicate the susceptibilities of their children, because no one is willing to take apart each egg and sperm to sequence it before those two germ cells combine to create a zygote, which then goes on to become an embryo, which finally grows up to become you.

Life and its coding are no longer simple and straightforward. The more we learn, the more wonderfully complex the process becomes. It’s way more complicated than just the sixty-four possibilities inherent in four bases read three at a time, as codons, to select the next amino acid in order to make the next protein. We are unique individuals, and within our bodies are cells that have become unique based on how they use their share of the genetic code. And now we learn that even the coding within those cells may sometimes be unique.

It’s a wonder that we humans are able to get born, grow up, walk around, draw breath, learn algebra, think great thoughts, and achieve great things for as long as sixty or seventy years at a time. Ain’t life grand!

1. For similar blogs along these lines, see The Chemistry of Control from May 11, 2014, and The Flowering of Life from August 25, 2013, among others.

2. The U.S. Congress originally funded the Human Genome Project in 1990 with an estimated total cost of about $3 billion and a projected finish date in 2005.

3. Today faster, smaller machines can sequence an individual’s entire genome in a couple of hours for about $1,000.

4. Why does copying DNA require any energy at all? Because DNA is a polymer made up of repeating deoxyribose sugar rings which are cross-connected from one strand of the double helix to the other by their attached bases: adenine (A) paired with thymine (T) and cytosine (C) paired with guanine (G). But the backbone of each strand, connecting the sugar rings up and down the strand, is a chain of phosphate groups. Phosphate—a phosphorus atom bound to four oxygen atoms—is the cell’s energy currency. Making adenosine triphosphate is the business of the organelles called mitochondria, which convert the energy in your food into assembling three phosphate groups onto one molecule headed by an adenosine group. Breaking those adenosine triphosphate molecules down into adenosine diphosphate is how the rest of the cell extracts that energy. Anything that uses up phosphate groups, like copying one strand of DNA to make its complement during cell division, creates a drain on the cellular economy.

5. And figuring out how to strip that methylation and reactivate those microRNAs is one way to turn a fully developed and differentiated somatic cell back into a stem cell which keeps its options open and can be used to repair and replace many different kinds of tissues.

Sunday, October 5, 2014

The Impermanence of Things

An approach to problem solving that is popular with Six Sigma1 and regulatory mindsets says you cannot find a permanent resolution to a processing error or equipment failure until you identify its “root cause.” This mindset implies that every problem, every failure, is the result of a causality chain that leads backward and down to a single source. The presumption is that you must find and eliminate that root cause in order to put things right. Any attempted fix which does not address and include the root cause is doomed to fail and serves as only a short-term, cosmetic, feel-good, band-aid solution which will allow the problem to resurface some minutes, weeks, or years later.

This approach makes sense in terms of process analysis and error trapping, dealing with temporal and ephemeral situations, which is the true realm of Six Sigma. The difficulty begins when you try to apply that level of analysis, that particular hammer, to every problem in your life, assuming that you’ve just found a nail.

We had an experience of this recently with the parking garage at our condominium. The garage is a huge structure made of reinforced concrete, three levels deep, able to park about five hundred vehicles. It’s an old structure, built almost forty years ago. Over this time the concrete has cracked and water has entered from leaks in the roof structure, from rain trailed in by car tires in the winter, and from some historic leaks in the rooftop swimming pool which have since been fixed. The result in some areas is spalling—shallow pits—in the concrete, white salt-like deposits on its surface, some exposed rebar, and rust stains. This damage has caused concern among the homeowners for the overall integrity of the structure, and the condo association has paid a concrete expert to examine the problem areas and propose repairs.

After much probing, mapping, testing, and chemical analysis the expert reported no major issues with the structure but recommended patching the spalls, routing out and sealing the cracks, applying a surface finish to the exposed concrete, and other measures that combine damage repair with preventive maintenance. He explained that concrete naturally contracts and cracks as it sets, and that water gets into the cracks, carries salts with it, and causes the rust.

That answer satisfied most people, but not a minority who want to know the root cause of the problem. The answer that water getting into cracks was the cause of the spalling, salt deposits, and rust stains has not been good enough. They want to get down to the core of the problem and feel that any fix that doesn’t address an underlying problem is doomed to fail.

And that is why today I’m thinking—brooding actually—about root cause analysis. If you must name a root cause, then was the garage badly built from the beginning? No, because the concrete expert, who is also a structural engineer, found no fault with the original design or with its implementation during construction. Was the concrete mix or the pouring technique faulty? No, because his testing showed the concrete’s composition and strength actually exceed specifications. Was the rebar badly designed or installed? No, because the steel is still in almost perfect shape.

If you had to find a root cause for the failures in the garage, you would have to say it was built in an area that gets rain for half the year—when California isn’t experiencing a drought—rather than on a dry, high desert plateau, and it was built of concrete and steel instead of less permeable and more lasting materials like glass and titanium. Also, the condo association has failed in recent years to provide a maintenance program that might have found and sealed new cracks as they developed, repaired the pool leaks in a timely fashion, and applied a waterproof coating to all surfaces of the exposed concrete slab.

That is, once the garage was completed forty years ago, the owners made the untested assumption that it was a permanent and imperishable structure, requiring no further work and expense, like the Pyramids at Giza. Concrete and steel are supposed to be immortal, aren’t they?

You wouldn’t make such an assumption about something small and mechanical, like your car or a bicycle. They have moving parts which need regular oiling and adjustment. They have parts subject to abrasion and wear, like tires and brake pads, which need regular replacement.2 If you viewed the wear and tear of engine components and the wearing out of tires and brakes as a process failure and looked for a root cause, you would have to conclude that the source of the failure was not having made the machine out of magical, incorruptible, imperishable materials that do not yet exist in human technology.

Although a building or a structure—like our condo garage—looks solid and permanent, it is actually a type of machine. Cars driving into and out of it, bumping over expansion joints, and braking and turning, all create vibrations and stresses that move and shake the underlying fabric of concrete and steel. Water dripping off the cars, blowing in from outside, or coming through unpatched leaks in the roof will do its inexorable work of leaching chemicals out of the concrete, rusting the steel, and weakening that fabric. Heat makes the structure expand by a few millimeters or so during the summer, and cold makes it contract during the winter. And the minor temblors of a seismically active area add their own stresses. The structure is a machine that works against itself and gravity all the time.

The takeaway from all this, and the point of my musing, is that everything is subject to wear and tear, to the eroding processes of time, weather, and use. Even the Pyramids bake in the noonday sun, freeze at night, and lose a few grains of rock each year to the infrequent Egyptian rains. They have lasted 4,500 years so far,3 but one day they will all weather away to mere lumps.

We humans mentally divide up the world for ourselves: things that are perishable and those that are permanent; things we expect to replace and those we take for granted; problems we should try to solve and situations we should accept, where perhaps we can do no more than hold our own. But those mental cubbyholes are really just placeholders along the spectrum of a vast and slippery existence. We expect the car we drive to wear out and not be worth fixing after a couple of hundred thousand miles, if not sooner. We don’t expect grandma’s dining room table ever to wear out and need replacing—unless we find it too big for our current apartment or too antique for our modern lifestyle. And we want to believe that the garage structure where we park our fragile cars will last forever.

But existence is a process, trending from one state to another, and not one subject to root cause analysis and resolution. One day, even our Sun will burn out, and that will not be a design defect but a simple fact of existence.

1. “Six sigmas” is a reference in statistics to a degree of accuracy or consistency, in which a production run is defect-free or a process performs accurately 99.99966% of the time (roughly 3.4 defects per million opportunities). Technically, the term represents six standard deviations between the process mean and the nearest specification limit—and if I could explain that to you in English, I’d be an engineer. The practice of Six Sigma techniques originated at Motorola in the 1980s, was picked up by Jack Welch at General Electric, and has since become a standard in many industries to reduce process variation, increase process efficiency, and improve overall product quality. People take formal training in these techniques by proposing and completing improvement projects and are awarded metaphorical green and black belts, much like a martial art.
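
For anyone who wants to check the arithmetic, here is a small sketch in Python that computes the defect rate for a given sigma level, using the conventional 1.5-sigma long-term shift that Six Sigma practitioners assume.

```python
# A back-of-the-envelope check of the Six Sigma numbers: the one-sided tail
# of a normal distribution beyond the nearest specification limit, after
# applying the conventional 1.5-sigma long-term shift.
from math import erfc, sqrt

def defects_per_million(sigma_level, shift=1.5):
    z = sigma_level - shift                  # effective distance to the limit
    tail = 0.5 * erfc(z / sqrt(2))           # probability of falling beyond the limit
    return tail * 1_000_000

print(round(defects_per_million(3.0)))       # about 66,807 defects per million
print(round(defects_per_million(6.0), 1))    # about 3.4 defects per million
```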

2. And once, long ago, cars regularly needed new sparkplugs and contact points in their ignition systems, while bicycles and motorcycles needed new drive chains. Improving technology has gradually redesigned, toughened, or eliminated many of these fragile components until the replacement cycle is beyond the casual awareness of the general public.

3. The pyramids once were sheathed in a smooth layer of white limestone, creating a brilliant, reflecting surface. An earthquake in the 14th century loosened these casing stones, and the locals carted them away to build Cairo. That left the blocky, stepped appearance we know today.