Sunday, February 23, 2014

Coffee Took Us to the Moon

Back when my wife was working at the Bancroft Library on the campus of the University of California at Berkeley, she put together an exhibit of books on the history of coffee and tea. One of the insights this exhibit brought home was that, until Europeans began importing these beans and leaves from the Far East, brewing them up in boiling water, and drinking the result, they had to make do with drinking the possible alternatives.

The water resources around human habitations have always been marginally potable, at least until the advent of modern plumbing and treatment plants.1 People since the fall of Rome had been drinking water purified—well, at least somewhat sanitized—by fermentation into a mildly alcoholic beverage, where the alcohol helped to kill germs, kept bacteria and algae from growing, and made the liquid drinkable. It wasn’t that the Europeans were drinking beer and wine just with dinner; they were drinking small beer, cider, and wine with breakfast and lunch, too. Everyone had a mild buzz on all day long and was pretty plotzed by the late afternoon.

People imbued with the glow of intoxication can still accomplish a lot. The Europeans managed to invent many simple and useful machines like the wheeled plow, the stirrup, and other equine tack. They invented a good deal of common law and rough justice in the early part of the last millennium. They rediscovered classical thought in art and literature, learned mathematics from the Arabs along with the concept of zero and the balancing equations of algebra, adopted printing and gunpowder from the Chinese, and studied the stars for themselves and refined their calendars. They raised up local poets like Dante, Petrarch, and Shakespeare as well as painters like Michelangelo and Da Vinci. They founded universities, had themselves a Renaissance and a Reformation, burned a great many heretics, and lived in a near-continuous state of war.2

Still, alcohol is a depressant. It brightens you up for a little while—long enough to say funny and obnoxious things at parties—and then it puts you to sleep. Coffee and tea, on the other hand, are pure stimulants. The caffeine perks you up, sharpens your wits, gives you laser-like insight, and makes you coldly daring.3 When coffee and tea began trickling into Europe in the early 17th century, they created a cultural revolution. People still went to the weinstube, the ale house, and the pub in the evening for their nip of alcohol. But during the day most people—at least those in the cities not chained to the land—could go to the coffeehouse or the tearoom for their cuppa.

Coffeehouses became places of trade and economic invention. Lloyd’s Coffee House in London is credited with the start of modern insurance underwriting, which changed people’s concepts of risk from bad luck or the will of malevolent gods to a proposition that can be analyzed mathematically and hedged with judicious amounts of investment. At about the same time, Isaac Newton in England and Gottfried Wilhelm Leibniz in Germany were taking mathematics to a whole new plane with the invention of calculus. Galileo in Italy and Newton in England were redefining optics, physics, and astronomy. Anton van Leeuwenhoek in the Netherlands was opening up a new world for biology with his “animalcula” or microbes. And a century later Antoine Lavoisier was doing experiments that would refine the haphazard guesses of alchemy into the rigorous relationships of chemistry. I can’t say that all of them were coffee drinkers, but suddenly the time was right for bright-eyed people to sit down and think new and daring thoughts.4

So the scientific revolution kicked off at just about the time coffee and tea came to Europe. Ideas about the nature of knowing—literally, “science”—had been incubating since Aristotle and the Arab philosophers, and got a push from Roger Bacon in 13th-century England. These notions now came together with philosophers like Descartes, Galileo, and Newton and were eventually formalized as the Scientific Method. Observe, hypothesize, and experiment—and stand ready to disprove your hypothesis—became the way to know what was happening in the real world. The printed page, based on movable type born in 15th-century Germany, became the medium to disseminate all this new knowledge. The Europeans subsequently had an Enlightenment, which encompassed philosophical and political thought as well as scientific, and sparked the secularization of literature, art, and music.

Humanity put its foot on an escalator of understanding that shot our western civilization forward into the future. One discovery, recorded in a pamphlet or book and disseminated through space and time by printing, led to other discoveries by other minds in other places and times. Soon a body of solid knowledge formed and began accreting, like the layers of coral in a reef. That reef has been growing without pause ever since. It has flowered into new branches of knowledge and their stepchildren, new technologies. We picked up the steam engine—the first mechanical contrivance and motive power that didn’t depend on animal muscles, the kinetic energy of water flowing downward under gravity, or the pressure of wind pushing on some kind of sail—in the 18th century, and then the practical uses of electricity in the 19th century, and we have literally never looked back.5

The notion of distributing risk through insurance underwriting by personal subscription, pioneered at Lloyd’s Coffee House, quickly morphed into selling shares of stock in new ventures. That changed capitalism from the province of a few rich banking families like the Medici and the Mellons to a process of investing by anyone with spare cash and funding for anyone with a good idea. The orderly means of acquiring capital for a business venture, keeping it secure, paying for its use, and ultimately distributing the proceeds back to shareholders became the foundation of a system that has spread wealth through all classes, created jobs and opportunities even for people without that spare cash, and advanced the world immeasurably.6

Coffee and tea changed our culture, and their use has not at any time faded away. The English still make a ritual of their late-afternoon, low-blood-sugar meal served with tea. Americans are offered coffee or tea as an accompaniment to every meal in every restaurant. Coffee shops are the new gathering places for people communicating on the internet—sitting in a real space, working in virtual space—and woe betide any Starbucks or Peet’s franchise that doesn’t offer WiFi. In the break rooms of every corporation in America, the one perk offered for free or massively subsidized is coffee and tea. And every business negotiation starts with a polite offer of coffee or tea, not with small beer and cider. Everyone in the corporate world knows that keeping employees jazzed with caffeine is good for productivity and good for business.

The American space program may have invented Tang as a way to serve orange juice with its Vitamin C content inside a space capsule, but it was coffee and tea that provided the substructure of math and physics which designed the rockets, predicted their orbits, and powered the thinking of the engineers who brought it all together.

I’m not saying that all of this could not have happened without the arrival of coffee and tea in Europe and then their transference to America. Certainly, people plotzed on alcoholic beverages could have invented telescopes, calculus, steam engines, and electric generators. Civilized society has always included its share of inebriates who can’t start the day without a taste of liquor or keep running all day long without nips at flasks hidden about their persons and hits from the bottles in their desk drawers. But they are not the majority. Most of us have long since sobered up, drink from a cup of coffee at our desks or in the break room, and don’t unwind with alcohol until after business hours.

So long as the coffee and tea keep flowing, we’ll continue developing new sciences, finding new ways to fund and manage our businesses, and keep sending out space probes. When humanity finally sets up a colony on the Moon or Mars, among the first imports from Earth—along with bottled oxygen and water—will be coffee beans and tea leaves to brew in that water and make the colonists’ preferred beverage. The hydroponic gardens under the agricultural dome will have their stimulants section. When we go to the stars, we’ll take along the seeds of coffee and tea trees. And if some kind of weird biological plague were ever to decimate the coffee and tea plantations of Earth, look forward to the failure of the Enlightenment, the end of civilization, and a new dark age.

1. In large towns and cities, you can’t dig your wells far enough away from the privies to find unpolluted groundwater. And everyone’s piece of the river is downstream from someone else’s, and so it’s polluted with the runoff from their outhouses, farms, and sewers. The only people who had a really good solution to this problem were the Romans, who located pure springs in the hills near their major cities and built aqueducts that ran for miles across hills and valleys to bring springwater, fresh and aerated, to holding tanks and public fountains inside the city.

2. But continuous war has been the case right up to the present, too.

3. You can tell I love the coffee bean in all its forms, can’t you?

4. Nicotine deserves some credit here, too. It came to Europe from America in the 16th century and caught on at about the same time coffee and tea did. Nicotine is readily absorbed into the bloodstream and causes release of the hormone epinephrine, which stimulates the nervous system, and the hormone beta-endorphin, which dulls the sense of pain. But then, like alcohol, these stimulations drop off, and nicotine has an overall depressant effect. Still, nothing goes so well with a cup of coffee as a cigarette, and tobacco had a hand in kicking science in the pants.

5. See Coming at You from October 24, 2010.

6. See The Economy as an Ecology from November 14, 2011.

Sunday, February 16, 2014

Seeding the Stars With Dirty Snowballs

It’s the age-old question: where did we come from? The answer is lost in the mists of time, so far back among the beginnings of our world that any credible answer may not exist. And therefore, given the following thought experiment, we might have to consider some incredible answers.1

Consider that the Earth developed with the rest of our solar system out of the proto-Sun’s accretion disk about 4.6 billion years ago. Consider also that this fresh, new planet appears to have harbored life in the form of single-celled bacteria for approximately 3.8 billion of those years.2 This suggests that in the first 750 million to one billion years, while the planet was still in the late stages of coming together as dust and chunks of tumbling rock, then suffering repeated asteroid impacts and surface volcanism, and finally suffering repeated cometary impacts in order to collect its first surface water, the complex chemical regimes that would later lead to life were able to assemble and sustain themselves.

First would come ribonucleic acid (RNA), which forms a single strand of sugar rings connected by phosphate bonds, like beads on a string in a relatively straight line. Each of the beads contains a nucleotide base which is compatible with and binds to certain other nucleotide bases. That bonding action allows the RNA string to assemble copies of itself. It supposedly became the first self-replicating molecule, and thus was able to retain and transmit a particular chemical structure.

Next would come deoxyribonucleic acid (DNA), which forms as paired strands that twist around each other in the double-helix pattern, cross-connecting through those nucleotide bases. RNA is differentiated from DNA by having an OH group attached to the second carbon atom of each ribose ring, where DNA has lost that OH and substituted a hydrogen atom (and thus is “deoxy”). Perhaps the lack of that OH radical allowed DNA to curl around on itself protectively, while the RNA molecule remained straight and wobbly. Another difference between the two molecules is that one of the four bases3 used as attachment points in RNA, uracil, has been replaced by thymine in DNA. The fact that the RNA molecule and its operation are simpler suggests to me that this molecule came first and that DNA is an evolutionary step toward more complexity.

In either case, RNA—and later DNA—would have had no particular reason to exist without its primary cellular function of assembling amino acids into long strands of proteins, which are the workhorse molecules of all life. Simply making copies of the RNA itself from one generation to the next is hardly a viable function—i.e., it ain’t much of a living—for such a complex molecule in a chemically violent world. The mechanism for reading the bases and assembling proteins, the ribosome, itself consists of a complex of both RNA molecules and proteins. So it would seem that both RNA and some proteins, and their reconfiguration into the ribosome, had to be present somewhere near the beginning of life.4

A billion years is a long time. But still, I find it hard to think that the DNA-RNA-protein regime could arise and comprehensively beat out all possible competing chemical processes, and so have established the world’s only complete and functioning prokaryotic cell, in this relatively short amount of time. Or that it could accomplish this amid all the colossal disturbance and heat from volcanism and comet and asteroid impacts.

The mechanism of evolution—trial and testing of an organism to establish its fit with its environment—suggests that other life processes may have existed briefly, been tried but found less appropriate, less functional, less capable than the DNA-RNA-protein regime. But, strangely, we find no evidence on Earth of any other system ever existing. Certainly, fragile molecules that failed to create cells—which were protected, first, with simple phospholipid membranes and then, later, with complex, robust body structures and skins—would tend to disintegrate and disappear, rather than leaving traces in mudstones and other fossil formations. If life ever made a trial start with other chemicals, it certainly left no trace on Earth.

But consider that even the smallest details of the DNA-RNA-protein regime are ubiquitous on this planet, with no surviving competitors to be found. Even the most far-flung and isolated creatures, such as those clustered around deep-sea vents and drawing their life’s energy from synthesizing sulfur compounds through volcanic heat, instead of photosynthesizing carbon compounds from sunlight, all use the same DNA-RNA-protein regime as we humans and our closest ancestors do.

All life on Earth not only uses the same carbon-based DNA as its recording and transcribing medium, and RNA and its ribosomes as its translating medium, but all life uses them in exactly the same way. DNA bases taken in groups of three as a reading frame call for exactly 20 out of the 500 possible amino acids in exactly the same way to produce the myriads of possible proteins.5 No living creature or fragmentary ancestor has ever been found that uses a two-base or a four- or five-base reading frame, or whose genetic code calls for just 18 or 19, or 21 or 22, amino acids. Other such systems might have been more robust, more or less susceptible to mutation and the beneficial effects of evolution, but we wouldn’t know. All life on Earth came down from the winner of the evolutionary lottery that’s encoded in our genes.
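The arithmetic behind that three-base reading frame is easy to check for yourself. A minimal Python sketch (my own illustration, not anything from the essay) enumerates the possible triplets of the four RNA bases and confirms the well-known count of 64 codons, which the standard genetic code then maps redundantly onto just 20 amino acids plus three “stop” signals:

```python
# Four bases (A, C, G, U) read in groups of three give 4**3 possible codons.
bases = "ACGU"
codons = [a + b + c for a in bases for b in bases for c in bases]

print(len(codons))  # 64 possible three-letter codons

# Of those 64, three (UAA, UAG, UGA) are "stop" signals in the standard
# code; the remaining 61 sense codons map onto only 20 amino acids, so
# most amino acids are specified by more than one codon (degeneracy).
stops = {"UAA", "UAG", "UGA"}
sense_codons = [c for c in codons if c not in stops]
print(len(sense_codons))  # 61 sense codons for 20 amino acids
```

A two-base frame would allow only 16 combinations, too few for 20 amino acids, which is one common argument for why a triplet code is the minimum workable scheme.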

And isn’t that odd? We find no variations in the basic system—except for minor discrepancies, like the fact that thymine replaces uracil in the transition from RNA to DNA coding, or that two extra amino acids are added to some proteins in rare instances by non-genetic means. But even these are ubiquitous cases. Nowhere on Earth do we find coding systems using different amino acids, or a wider or narrower reading frame, or a different translation scheme. Surely, in the competitive arena of evolution, chemical systems must have arisen and might have remained functional that were almost as good as the current version of the DNA-RNA-protein regime, that offered parallel strategies for success, and that sometimes made a better fit to a slightly different environment—like those volcanic vents—which would reward a different mutation rate or provide a richer lode of different chemicals.

It might be possible, for example, to produce an analog to the DNA-RNA-protein regime using other elements with similar electron configurations that form similar covalent bonds.6 For example, silicon can form the same types of molecules as carbon, because it has four valence electrons—available for sharing in molecular bonding—the same as carbon. Since the available electrons in a silicon atom are in the third electron shell, while those of carbon are in the second shell, the silicon bonds will be farther away from the atom’s nucleus and so relatively weaker. Similarly, arsenic has the same electron configuration as phosphorus. Indeed, for a while, biologists studying bacteria from Mono Lake in California—where large amounts of arsenic are naturally present—believed they had found arsenate groups substituted for phosphate groups in the backbone of a certain bacterium’s DNA. That finding has since been disproved.7

On a planet rich in silicon and arsenic, and relatively poor in carbon and phosphorus, an alternate set of life molecules similar to the DNA-RNA-protein regime might evolve. Creatures made with these molecules would be heavier and denser than Earth life, because silicon has an average atomic weight of 28 compared to carbon’s 12, while arsenic has an average weight of 75 compared to phosphorus’s 31. But such life would also be more fragile—more prone to mutation, molecular breakage, and even disintegration—because the electron bonds holding the molecules together would be weaker.

It’s possible, of course, that the lottery-winning, carbon-based DNA-RNA-protein regime originated on Earth, just as different chemicals might form different life molecules on other planets with other environments. A billion years is, after all, a long time. And the right combination of chemicals only has to come together once—if it’s robust and yet flexible enough to survive and thrive. It’s possible the Earth produced one clear winner in the evolutionary lottery, one system to dominate the planet, coming out of the time when the liveliest thing on Earth was a string of sugar rings held together with phosphate bonds—and that all competing systems suddenly became eternal losers, washed away in the sea of life.

But still, until we learn different, the possibility exists that the DNA-RNA-protein regime did not originate here. “Learning different” might involve someday finding an alternative chemical regime that evolved here on Earth, or finding evidence of the DNA-RNA-protein regime in the sands and clays of Mars, on the moons of the outer planets, or on the planets of other stars. And if it did not originate here, then the DNA-RNA-protein regime might have been seeded here when the right conditions existed on Earth: not too hot, not too cold, offering liquid water that isn’t frozen over or boiling away, offering adequate amounts of the right free gases like oxygen and nitrogen, and not too much ultraviolet sunlight coming through the atmosphere.

The seeding mechanism need not have involved silvery spaceships landing among the rubble heaps of the early Earth and gloved hands pouring out beakers of chain-linked chemicals, or even simple cell structures, into the early seas. The seeding could have been accomplished by shooting snowballs laden with chemicals and spores into the universe, hoping that one of them would add its secret to the Oort cloud of icy particles left over from the accretion disk surrounding a Sun-like star. Then all it would take is an orbital jumble and a fumble to send a laden comet down to crash into the primeval sea of a possible planet in the star’s habitable zone. One snowball might be all it took. And all the rest might as well drift forever through the galaxy or perish in impacts on a burning Mercury or a hostile Io, where compatible life is not possible.

Achieving this kind of scattershot seeding would not be a random act. It’s not likely that an asteroid impact on a green planet in some distant star system would throw up a cupful of water bearing microbes, which would then freeze in space, find its way out of the planet’s gravity well, then out of the star’s gravity well. We know from human experience how hard it is to eject a probe from the neighborhood of the Sun: developing and launching Voyager 1 and Voyager 2 took concentrated effort. As our own Oort cloud and its comets prove, things in the neighborhood of a star tend to fall inward, not outward. So it’s unlikely to happen by chance that a cupful of sea water left its native planet even once, let alone the thousands or millions of times needed to satisfy the long odds of its microbes reaching Earth.

No, if our genetic stuff was made elsewhere, its delivery—either by hand or by the scattershot of dirty snowballs—was a conscious act. Some intelligence had to want the universe in general or the Earth in particular to share the gift of life. And how those star-seeding intelligences got their own life in the first place … well, as the old woman said about a cosmology that rode on the back of a giant turtle, “It’s turtles all the way down, young man”—for all we know, an infinite regression.

This does not mean that any aliens we are likely to meet will be humanoid. Such a conclusion does not follow any more than the notion that the evolved primates called humans, kin to chimpanzees and gorillas, are the only form of life that can reach intelligence. Evidence of intelligence among dolphins, whales, and elephants aside, we have no way of knowing that dinosaurs were not philosophers of exquisite insight, and only the lack of long arms with prehensile thumbs kept them from expressing that intelligence through art, architecture, and written lines that might have survived the Chicxulub disaster to be found among the mudstones today.

No, if we were seeded here, it was as a mere possibility, a potential written into the primitive code of a one-celled animal, or even into the humble mechanism of an RNA strand, perhaps as a virus with a lipid coating. What we became, what all life on Earth became, was prefigured more by the planet’s environment and the adaptations it forced, than by the code string that enabled those adaptations.

Whether that first glimmer of life developed here on Earth or elsewhere, its flowering here into all the different forms of life—some of them bearing self-awareness and enough intelligence to look up at the stars and wonder—is the true miracle.

1. For the background of my thinking that the DNA-RNA-protein regime may not be native to Earth, see Communicating with Aliens from July 28, 2013, and DNA is Everywhere from September 5, 2010.

2. From G. M. Cooper, The Cell: A Molecular Approach, 2nd edition (Sunderland, MA: Sinauer Associates, 2000). Available at the National Center for Biotechnology Information. The page gives a pretty good view of the origin and evolution of cells.

3. The base rings of both the DNA and RNA molecules serve as attachment points for their complementary strands. These rings are built mostly out of nitrogen atoms, and their complementary bonding consists of pairing one of the purines—adenine (A) and guanine (G)—with its matching pyrimidine—either thymine (T) or uracil (U) with adenine and cytosine (C) with guanine.

4. The other parts of the process—the twenty amino acids from which all of the earthly proteins are constructed, or at least their precursors—have been detected on dusty grains of ice floating in interstellar space. See “DNA and amino-acid precursor molecules discovered in interstellar space” from the Kurzweil Accelerating Intelligence news site for March 2, 2013.

5. Actually, 22 amino acids are sometimes used in protein synthesis, but two of them—selenocysteine and pyrrolysine—are added to the protein string by other biological mechanisms, rather than encoded by the genetic material.

6. Alternative chemistries have been proposed and artificially created in the search for life’s origins. For example, some scientists have substituted threose, a molecule similar to ribose, to create a long-strand molecular chain capable of self-replication. See “Strange cousins: Molecular alternatives to DNA, RNA offer new insight into life’s origins” from Science Daily of April 19, 2013. To my knowledge, none of these alternatives have been found in nature. Another page at the National Center for Biotechnology Information offers an overview of alternative structures for DNA, such as different spiral patterns, crosses, and loops.

7. See “Study Confirms Bacterium Proteins Bond To Phosphate, Not Arsenate” at the RedOrbit news site from October 4, 2012.

Sunday, February 9, 2014

Behind Enemy Lines

If there’s a theme to all of my books, it may be about an individual living and functioning in a world, or part of it, that is slightly foreign, slightly hostile to his or her nature, a world not of his or her own choosing or making. In such a world, the main character has to be on guard, move carefully, and remain watchful, because he or she is essentially operating behind enemy lines.

I’ve always felt like a bit of an outsider, someone not quite part of the group. This might have been a residue of my upbringing. My father was a mechanical engineer, and so every five to seven years we moved to a new place as he took up a new job. This was a common theme of the 1950s and ’60s, when a man—rarely a woman back then—who was climbing the corporate ladder had to be flexible about locations and assignments. Unless you worked for a small local company or a municipality, you and your family got regularly uprooted and sent to a new division or district, usually in a different state or part of the country.

One of those moves while I was still in elementary school took me from Long Island, near New York City, to a suburb on the North Shore near Boston—not so far in geographic miles, but another country in terms of language accents, word choices, attitudes, allegiances, history, and juvenile enthusiasms.1 The only greater cultural dislocation could have been if our Yankee family had relocated south of the Mason-Dixon line. And then later, in high school, we moved again, from the suburbs of Boston to a small town in central Pennsylvania, and I had to adapt to another kind of culture.2

This was back in the day when television and radio programming and Hollywood movies were only part-way through the process of disseminating a universal American accent. And it was long before the internet and social media would make our culture one big, coast-to-coast schmear.

So, to middle-class, upwardly mobile people of a certain age, who were moved around as kids but didn’t have the cocoon of a military culture to buffer their experience, my sense of being lost behind enemy lines will have a familiar ring.

Things didn’t become better when I grew to adulthood. As an English major, someone trained more in stories, music, art, and ideas than in math, science, and commerce, I’ve always had to scramble in the world of business. At least, in my first job, as a book editor at the university press, my position was central to the world of publishing. Editors are the backbone of the business, which is preparing the author’s manuscript to become a saleable book. While authors don’t always like their editors, they usually acknowledge the need for the process.

At my second job, however, at a hardcover trade publisher which celebrated railroad histories, California and western Americana, and anything to do with steam locomotion, I might have been the book editor, but I was also a transplanted easterner from Pennsylvania. So I had to become a quick study in those strange niches of history. And then when I moved on, into technical editing at an engineering and construction company, not only did I enter a new world of technology but I was also definitely on hostile ground. I had to prove to the engineers—the numbers guys, where science and numbers had never been my strength—that an English major could keep up, not mangle or misinterpret their engineering proposals and reports, and could actually add value to their work.

It was no different when I later changed jobs and moved successively to a public utility, a pharmaceutical company, and a biotech firm. Each editing and technical writing role presented new requirements and complications, new areas of expertise to master, new norms to adopt as if I were born to them. I became a perpetual student—which means I was always moving into unknown territory full of intellectual dangers and pitfalls, exposing myself to scorn and ridicule. I sometimes had to think fast and move even faster. I gained a reputation for listening closely to what the experts had to say and asking savvy questions.

In various forms, this same sense of being a stranger in the land has driven or inhabited most of my fictional characters. In general, they are like left-handed people living in a right-handed world. They are connected emotionally and intellectually to an earlier time or to different social conditions, brought up with different values and expectations, having different training and skills, and bearing a different set of attachment points, than the people around them. They are, in their own terms, representative of “the other” in the greater society that surrounds them.

Ariel Ceram in The Doomsday Effect starts as a geology professor who notices a quirk in her data, is pulled into a national program to announce the threat of planetary annihilation and then try to save the planet, and ends up an astronaut on Mars. Her academic background hardly prepares her for becoming a media sensation, engaging with both engineers and cyborgs, and rescuing a Martian colony.

Granny Corbin in First Citizen is the ultimate outsider. He was trained as a lawyer and proved to be a middling failure at it, becomes moderately successful as an entrepreneur of garbage recycling, gets pulled into politics after the nuclear attack on Washington, DC, and ends up invading his own country in the Second American Civil War.

Of course, the artificially intelligent computer program and spy in ME: A Novel of Self-Discovery exists to hack into the operating systems of other computers, so by definition he lives behind enemy lines.

Robert Wheelock in The Judge’s Daughter is a Harvard-educated law student and the son of a local industrialist in central Pennsylvania, who finds himself stranded when his father dies and must make his way in a small town among people he hardly knows. To survive and hold onto his father’s legacy, he develops a practice amid farmers and shopkeepers, lumbermen and geologists, eventually becoming the judge whom they all respect.

Robert’s own son, William Henry Wheelock, in The Professor’s Mistress is a war veteran and classics professor tossed into the intellectual and political turmoil of 1960s campus radicalism. He also endures the pain of a wife who goes quietly mad. In his loneliness, he develops a passion for an old steamboat that reminds him of a more gentle and gracious past, although no one around him can understand his attachment. He sails away one summer to pursue a dream of freedom that he barely can define for himself.

A far-future time traveler, or Jongleur, Merola Tsverin in The Children of Possibility operates constantly behind enemy lines. Her job is to explore the distant past and take DNA samples from people so primitive and so far behind her own time that their cultural, political, and economic lives often survive only in rumors. She must disguise her genetically advanced body by posing as a prepubescent girl and guard constantly against detection and exposure.

The aging and crippled detective Jean Metis of Crygender is thrown into a dangerous new world of medical crime and punishment. His sensibilities, his reflexes, and even his spinal prosthesis are out of date for dealing with the adversaries he must overcome.

The economic analyst William Clive of Trojan Horse is turned into an international spy by a shadowy economics bureau. They assign him to track a dangerous biological entity and the people trying to steal it. He travels from the Bay Area to Paris and back to a drilling rig offshore in the Gulf of Mexico. Because his skill set is all wrong for such intrigues, he is forced to make up his own tactics as he goes along.

The presidential aide Harley Waters in Sunflowers is yanked from a private law practice in the Pacific Northwest to campaign for his friend in a national election, support the new administration during the investigation of a major terrorist attack against Hoover Dam, and then practically singlehandedly build the renewable energy program that will recover the dam’s lost megawatts. All the while he must work against engineering delays, political obstructions, and a savage international terrorist.

And finally, the two middle-aged main characters of the novel I’m working on now—with the tentative title Coming of Age—embark on vastly extended lives through advanced medicine based on genetically manipulated stem cells. Each of them experiences the changed perspective and alienation of someone taken out of his or her own time and thrust forward into the future. They must deal with new problems they never expected to face and hobnob with great-grandchildren they never expected to meet.

In writing these stories, I’m betting that this feeling—of dislocation, of not belonging, of operating perhaps under false pretenses and behind enemy lines—is not unique to me or just to those of my generation. Other people out there must feel that the world has changed around them. The old certainties, the games we learned to play, the perceptions and propositions that we all bought into, have all changed in recent years.

Certainly, anyone who has simply grown older will feel this. The books, movies, music, and manners that we grew up with, the values and the media of exchange, all migrate with the generations. The people we thought of as adults and the figures to whom we looked up have since become wrinkled masks and feeble voices. The popular figures and authoritative voices that everyone respects nowadays seem to come from the faces of children in childish accents; so it’s sometimes hard to take them and their views quite seriously. The world has become a different place, with sometimes cheaper values, more violent attitudes, less acceptance, more arrogance. Some of this is the world’s fault. Some of it is our own. But still, we feel we don’t quite belong.

I’m not implying that I write—or ever intended to—for the dispossessed or the downtrodden.3 Many of my characters are experienced and powerful people in their own place and time, and some are wealthy. But skill and training are no armor against alienation, against the feeling that the ground has moved under your feet and the world become a subtly hostile place.

It happens to all of us in the face of challenges, in the face of catastrophe, and in the face of moving time. That’s the particular vein of gold I’m working in the quartz mine of modern literature.

1. Examples: In New York, we drank Coke or Pepsi; in New England, the drink of choice was a strange, malevolent herbal brew called Moxie. This was like a demon-inspired root beer, about which I immediately—and not very kindly, but then I was always a smart-mouthed kid—observed, “Goes down harsh, comes up bitter.” In New York, my mother’s sister was called my “ehnt”; in New England, she was my “ahnt.” In New York, you could root for the Yankees, the Dodgers, or the Giants—my move was that long ago—so it was a pretty open field; in New England, you were brain dead if you didn’t love the Red Sox. In New York, you ordered a “milkshake”; in New England, you ordered a “frappe”—but pronounced it “frap,” not the French way, “frappé.” In New York, something you liked was “good” or “great”; in New England, it was “wicked.” It was a strange and hostile land for an eight-year-old to enter unannounced.

2. The cultural dislocation from suburb to small town was not as great—not for a boy who had only been a trespasser in the Boston area to begin with—but the socioeconomic differences went much deeper. Life in the suburbs was uniformly middle class. Everyone’s family was structured the same: mother took care of the children while father went off to that strange place called “work.” Even though we kids might be ethnically Italian, Jewish, Russian, Polish, or whatever I was—some variation of WASP, I think—we all wore the same clothes, played the same games, and treated each other pretty much as equals. And we were all conscious of the great city, Boston or before that New York, just over the horizon with its strange and accessible delights—Chinese and Italian food, world-famous department and specialty stores, first-run movies—that we could visit on a weekend trip.
       But central Pennsylvania was deep in the woods. The middle-class kids who made up the college track in high school were a stratum among the farmers and mechanics who took classes like shop and home economics seriously, because that was their future. It was there I met my first classmates who actually owned guns and went out in the fall to shoot and butcher deer for food. And the ethnic divide was between the colony of Swedes in town and everyone else. The nearest big city was ninety miles away in either direction. Specialty cuisines like Chinese, Italian, or Mexican were unknown; hell, imported foods like artichokes and avocados were practically unknown. And there was one movie theater in town which got films a couple of weeks behind the rest of the world. But rural isolation also had its charms for a young person, because we missed out on big city attitudes and problems, such as the reputed mob hit that took place in my north-of-Boston neighborhood, and drug use was virtually unknown.

3. People who are living totally on the outside of society—the enslaved, refugees, victims of fire, flood, or persecution—have their own set of stories. They also, generally, have a group they can identify with that is set against the greater society. My characters are usually individuals alienated from the society which, in normal circumstances, would lay claim to them, including their colleagues, friends, and family.
       To tell the truth, I also have a conceptual problem with characters struggling on the lowest rungs of society. To the extent that they have skills and ambition and are ready to seize opportunity, they will have interesting stories to tell. But to the extent that they feel victimized and oppressed, they lose interest for me. I believe everyone has at least some skills, resources, and the means to succeed, however he or she might measure “success.” Even the poorest person can borrow books, learn through reading, and develop a rich inner life. But if the character believes all skills and options are missing from his or her life, then the result is indistinguishable from that character actually lacking them. I don’t know how to help such people with their stories.

Sunday, February 2, 2014

Getting Into the Zone

Writing is a complex mental process, working several parts of the brain at once. I think of it as the ultimate form of multitasking.

First, the author uses imagination to know what comes next in the piece of writing. In the case of nonfiction, this might be the next step of the process described in a technical document, the next logical corollary to the argument put forward in a legal brief or blog, or the supporting facts and references to buttress the theme of a journal article. In fiction, the imagination probes what happens next in the story, the consequences of the character’s previous actions, the response to a line of dialogue, the next thing that will drop out of the blue sky and land on the character’s head. Imagination, the projective part of the mind, the part that “sees ahead,” is lodged in the prefrontal cortex, which has the responsibility for orchestrating thoughts and actions in line with personal goals.1

If nothing else, writing is the act of aligning your thoughts with a goal or intention. Without that goal—the purpose of the writing—the process does not even begin. The goal can be a formal statement somewhere in the author’s notes, or a few simple ideas that attracted the author to the subject in the first place. For a longer work, of which the day’s writing exercise may be just a part, the goal will be expressed in an outline or story notes that break the bigger project into simpler, more manageable pieces.

Second, the author uses the brain’s speech center to formulate the words that will be used to put these products of imagination—these ideas, arguments, actions, images, and lines of dialogue—into discrete, defined words and then place the words into orderly sentences and paragraphs. The words come out of the speech center in the cerebral cortex in a raw form, not unlike rambling speech. Before those words can reach a final form on the page, they must pass through a mental filter, a kind of “editing grid,” or a mental spelling-and-grammar-check mechanism. This grid arranges the words into the structure of grammatical clauses and prepositional phrases; tightens up loose ends and adjusts the passage’s overall meaning; challenges the definitions and connotations of the words and finds more appropriate or evocative substitutes as necessary, working against the passage’s intended reading level and/or the character’s dialect and word choices; checks a thousand and one details of grammar and style, such as subject-verb agreement, appropriate use of pronouns and conjunctions, contractions, colloquialisms; and finally inserts appropriate capitalization and punctuation.

This grid doesn’t exist in the mind naturally. It is something the writer has to build over the years through practice and self-correction, driven by a belief in the importance of detail and the necessity of getting things right the first time. Without this grid in place, the act of writing is little better than a kind of verbal brainstorming, creating a pile of slush that must be corrected later through tedious revisions—or left to the ministrations of an editor. In time, the writer learns that it’s simply easier and faster to go through this process of mentally testing words, sentence structures, grammar, and punctuation before finalizing them on paper. It brings you closer to what you want to express the first time around.

Third and finally, the author engages the brain’s motor functions to deposit those words and sentences, complete with punctuation and paragraph spacing, onto some formatted base layer, such as a computer screen or piece of paper, by using a mechanical device such as a keyboard; a pen, pencil, or stylus on paper or tablet; or some form of dictation. So deeply does the writing process rely on this mechanical function that many writers become locked into the physical actions of using it. They can only write easily with a pencil on lined notepaper, or with a fountain pen on good vellum, or with one or another computer keyboard and word processor. They can only write when the mechanics are so ingrained as to become invisible to the working mind.2

With time, an author can bring all these steps—imagination, formulation, and physical translation—into action quickly and easily, surrendering to the process and “getting into the zone.” And then, once the mind is in writing mode, the author’s world disappears. The author’s own self disappears. His or her involvement with worldly cares and problems, other than the textual subject at hand, or with actual people, unless they appear as characters in the story, and even with physical sensations such as heat and cold, distracting noises or smells, and cramps or butt numbness—all of it entirely disappears. The author’s mind is suspended halfway out of his or her head, halfway onto the screen or page, in a trance of imagining, formulating, testing and deciding, creating and adjusting that is a bit like pure bliss.3

But the process is not automatic.4 Two conditions will keep me from entering the zone. They don’t just shut it down; they make it impossible to begin.

The first is not being clear on what to write. This is a problem with the goal-orientation of the prefrontal cortex. It goes deeper than not having a complete outline written down on paper; it comes from not having an idea—or the right idea—to begin with. Sometimes I will have an outline for the next part of the novel that I think is solid, or pages of notes from which to write an article, but when I sit down at the keyboard, nothing comes. I’ve learned from painful experience that this is not the time to force my brain. If I try to pound out the words through sheer willpower, I’ll get something, and it may even look like coherent English sentences, but it won’t be right. It will not go in the right direction, reach the right conclusions, achieve the right tone, or make the right cut—in terms of intellectual level and understanding—at the material or the story. It will be a string of words that even serious revision can’t save. They will eventually need to be deleted and the process started over.

In this case, I know that something in my outline or my notes is undecided. Either I have a plot or logical problem that still needs to be addressed, or I have not reached the deepest level of emotional truth and understanding of the situation or the material.5 Until I fix this problem or find the proper approach, my prefrontal cortex is blocked and the writing process won’t start.

The second inhibiting condition is the lack of a starting point. In order to write a nonfiction article, I need to have an insight to share with the reader, a question that I know will appeal to the reader’s mind or current concerns, or a common public perception or proposal to which the article responds. This is the insertion point, the gateway into the maze, the first incision in the surgery, the cleavage plane in the diamond that the jeweler taps with his hammer. With the right insertion point, the right fault line, the article opens up like a flower. With the wrong insertion point, the wrong incision, it will take paragraphs and pages to find its feet.

In order to write a piece of fiction, I need a thought, a word, an action, an Aha! insight, or a sensory perception—a sound, a smell, a visual image—that introduces the reality of this moment in time for the point-of-view character. With that first stab, so like the surgeon’s first cut, I can release the story’s latent emotional momentum, and the rest of the writing process is just following my built-in sense of direction.

I call this starting point—the article’s entry point, the fiction’s placement in time and space—the “downbeat.” Like the movement of a conductor’s baton, it signals to the rest of my waiting brain “Start here.” I know when I’ve got a downbeat. It comes to me as a word or image in the moments that precede my sitting down at the keyboard and starting to write. If I don’t have it, I don’t sit. I may pace up and down, go get a cup of coffee, take a shower,6 or fiddle with something else. I can usually force a downbeat. Lacking one is not as critical to the process as having a hidden plot problem, a misunderstanding within my notes, or a hole in my thinking. But until that word or action or sense image makes itself known, the writing process is stuck at top dead center, like a piston at the top of its stroke that doesn’t know which way to turn the crankshaft.

But give me a clear objective and a starting point, and the word generator starts putt-putting and my consciousness flows. That’s how I get into the writing zone.

1. The prefrontal cortex is that part of the brain right behind and above the eye sockets. It’s the structure which mental health professionals used to detach through surgery, or damage with a thin metal spike thrust upward through the socket, in order to calm psychotic patients. The effectiveness of this therapy is not hard to understand: without the brain’s executive function, without the ability to order your thoughts and plan your actions according to some goal or belief system, you naturally become passive and calm. You can’t think much about the future, plan your day, or anticipate the consequences of your actions and possible responses to other people’s intelligent conversation.
       The doctors who performed these lobotomies thought they were helping the patient deal with fears, frustrations, and anxieties. What they were doing actually yanked the patient out of the mental time-stream, out of the conveyor belt of ideation and anticipation that looks forward to the future, plans a response in the now, and tests it against past action. The patient entered a Zen-like state of inexpectant, untroubled present experience—one from which he or she could never return.

2. Because I have a pretty good editing grid, built over years of working as a book and technical editor and an internal communicator, my writing no longer proceeds in quite a straight line. I will be testing and editing sentence structures, choosing and replacing words, adjusting subordinate clauses, and improving punctuation as I write. So the paragraph appears on the computer as a rippling wavefront of words and their relationships that are proposed, retracted, edited, and improved as the cursor moves down the screen. This process proceeds so fast that a pen or pencil can hardly keep up, and a paper page would too quickly be filled with cross-outs, corrections, circles, and arrows. So I must work with a compliant word processor to really make progress on a piece of writing. Paper is just too slow and static.

3. I once thought that the hours I spend writing didn’t actually count against my natural life-span. They were a cessation of time and a blurring of the aging process. Now, as I draw closer to some kind of inevitable end of that life, I know better. Still, it’s a soothing thought.

4. A pictorial joke going the rounds on Facebook, at least among members of the writing community, shows a dog at a computer keyboard and screen with the commands “Sit!” and “Stay!” written in above his head—as if having your butt in a chair and your eyes fixed on the blank space of screen or page for some unspecified amount of time would produce anything. These actions may be necessary for entering the writing zone, but they are certainly not sufficient.

5. See Working with the Subconscious from September 30, 2012.

6. For some reason, hot water on the back of my neck stimulates the imaginative processes. See The Author’s Job from January 19, 2014.