Sunday, July 23, 2017

Degrees of Freedom

The subject of freedom is much on my mind these days. As I’m now approaching late middle age—on the cusp of seventy years old—I realize that avenues of potential are continuously closing down for me.

Of all the things I might have become as a young man fifty years ago—doctor, lawyer, soldier, politician—none remains available today. Those are occupations that demand long years of training, special physical qualifications, or a steady track record of participation, and I no longer have the time or the stamina even to try. Of all the exotic places I might have gone—Machu Picchu, the top of Mount Everest, or even diving in the Caribbean—I no longer have the physical energy to attempt any of them. Given that I no longer do well on long airplane flights, with their cramped seating conditions and my big frame, I probably will never see Europe again, unless I’m willing to pay the treble fare to fly first class. And given the amount of political uncertainty and violence that seems to be endemic in the rest of the world, I probably will never get farther east than Greece or farther west than Japan in the travels of my remaining lifetime.

So freedom as a practical issue of choice and possibility, rather than an abstract matter of statute or moral law, is always part of the human condition. It may technically be true that every boy—and now every girl—born in the United States might one day grow up to be President. But that destiny will probably be decided sometime before he or she gets out of high school, based on whether that person has the inclination or the aptitude to put in the time and energy, enter the American cursus honorum,1 and make the sacrifices required. And then, by about the age of forty, he or she will know how high his or her personal career arc is likely to reach—and for a great many it will stop in some local or state office without ever attaining national prominence.

Freedom comes in many forms and at many levels, depending on personal and public constraints, as well as personal interests and desires.

At the most basic level are those freedoms assigned to bodily function: the freedom to decide when and what you will eat; when and where you sleep and for how long; when and how you use the bathroom; and trivial choices such as whether you want coffee, tea, or something stronger to drink. One would think that we are all perfectly free to make these choices, but not everyone is, and not all the time. Some jobs have assigned eating and sleeping times, limit the kinds of foods served or allowed in the cafeteria or mess hall, and limit or prescribe bathroom breaks. We accept these restrictions in favor of a greater good, such as the smooth functioning of the organization or maintaining good relations with our co-workers. Some people agree to give up these freedoms under special circumstances and for a limited time, such as a person joining the army and taking food and rest under a strict regime, and again the reason is for some greater good. Societies also place involuntary restrictions on these freedoms as a form of punishment, as anyone who has served time in prison can attest.

At the next level are freedoms associated with the details of daily living: freedom to decide where you will live and under what conditions; where you will travel and with whom; and how you will spend your time. For most of us, these freedoms are circumscribed only by our economic condition. I would like to spend my time reading or playing games, but in order to earn my daily bread and the mortgage money I must work at a job that is not always of my own choosing and not always easy and fun. I would like to commute to that job in a Ferrari, but that car is too expensive and the freeways are too crowded anyway; so I ride the bus or the subway with dozens or hundreds of strangers. I would like to live in a 5,000-square-foot house in a nice suburb, maybe with a pool and a patio, enjoying a ten-mile view to the mountains, but again that kind of living is beyond my means. Sometimes the state or local authority intrudes on these decisions, such as when downtown zoning doesn’t provide enough parking for even a small Fiat, let alone a Ferrari. Or that big house in the suburbs is precluded by limits on land use, lot size, or utility hookups, so that I am forced to live back in the city.

A special category of freedom is associated with decisions about lifestyle and a person’s level of health or dissipation: freedom to decide whether to eat wholesome foods or processed junk; how much exercise you will take versus how much time you spend in sedentary pursuits; which vices you will adopt and which you will engage your will power to renounce. Aside from people in prison or the military, we all think we are free to eat what we like and exercise as much or as little as we want. But employer-paid health insurance is beginning to provide monetary incentives—more likely disincentives—to promote healthy lifestyle choices. And certain vices such as liquor, cigarettes, and recreational drugs have been subject to heavy taxation if not outright prohibition for most of the twentieth century.

And finally, the ultimate level of freedom involves decisions and opportunities that affect a person’s lifelong contribution to society, the search for meaning in life, or the fulfillment of some personal destiny: freedom to acquire education, skills, and training; freedom to think for yourself and make decisions about your career and the ultimate reach of your ambitions; freedom to guide your children in paths you believe will give them a good life. More than access to money and avoidance of public censure and state controls, the limit on these freedoms is often your own imagination. If you don’t know what the choices are and can’t think up satisfactory goals for yourself, you are as bound as if you wore handcuffs. Yes, in totalitarian societies, the freedom to think and become what you want is often proscribed—ask someone trying to publish the truth as he sees it in the old Soviet Union or in the People’s Republic of China. And yes, being denied access to education and the broadening effects of wide reading and personal inquiry can limit the imagination. But in most cases, the lack of goals and motivation usually comes from a failure of the home environment and lack of access to good teachers, mentors, and wise relatives like a favorite aunt, uncle, or grandparent.

Since all of these levels of freedom—from bodily function to personal destiny—are subject to external limitations, the real question is how we want that limit decided. Do we take it upon ourselves to seek out and do what we want, live where we want, think what we want subject only to the natural limits of time, money, and our own skills, ambition, and energy level? Or are we willing to relinquish these choices to some other person or human agency, such as a prison guard, a platoon sergeant, a factory supervisor, the local zoning and school boards, or the representatives of one or another of the alphabet-soup agencies of the federal government?

Persons with an “institutional mentality,” like a life prisoner or a career soldier—or many of the common citizens of more regulated societies in the European Union and the Middle East—will opt for a guard, officer, or commissar to do their thinking and deciding for them. Most Americans, however—at least those of the older generation—tend to guard their freedoms jealously and would by choice rather live in a cold-water cabin on the edge of the woods than in a marble mansion under the supervision of a nursemaid, prefect, or magistrate.

1. The cursus honorum was the “course of offices”—political, military, and religious—that an ancient Roman of senatorial rank was expected to fulfill on the way to political prominence and power in the Republic and the early stages of the Empire. The American equivalent would be something like getting a law degree and becoming district attorney, or serving in locally elected positions like being on the school board or town council, then running for state assembly or senate, then for Congress or a governorship. Other paths may be possible, and we certainly saw them represented in the Republican presidential candidates for 2016. But for someone who is not already independently wealthy, this course is the only way to attract the attention of publicists, campaign managers, fund raisers, and the funding sources necessary to attain high elected office.

Sunday, July 16, 2017

Could DNA Evolve?

I recently posted about the nature of DNA,1 how it is found in every living thing on Earth, and how every living thing—no matter how far back you go—uses the same DNA-RNA-protein coding system. It’s not just similar in every microbe, plant, and animal. It’s the same system, down to the smallest details of chemistry, arrangement, and function.

To me, this is like discovering that every car on the road has the same motive power: a four-stroke, four-cylinder, inline, fuel-injected, internal-combustion engine, all with the same valve timing and compression ratio, and all burning the same grade of gasoline. With a little imagination, you might be able to conceive of an internal-combustion engine that burned kerosene or diesel fuel. You could invent a block with two, six, eight, or ten cylinders. You could design in your head a configuration with the cylinders arranged in either flat opposed pairs or a V shape. With more imagination, you could imagine the power cycle simplified to two strokes, so that exhaust and intake occurred on the same stroke, and every combustion stroke was followed by a compression stroke. You could think of ways to introduce the fuel into the cylinder other than by injecting it with a nozzle—say, by spritzing it into the air flow through the throttle body and calling it “carburetion.” You could even think of external combustion processes, like a steam engine. Or engines that had no cylinders and pistons at all, like a turbine.

All of these variations are possible to think about. But in the world I’m describing, they don’t exist. Every car on the road is a fuel-injected inline four. More than that, every pickup truck, semitrailer tractor, farm tractor, and motorcycle has this type of engine. So, too, does every weed whacker, lawn mower, water pump, and air compressor. Also every airplane, helicopter, and railroad locomotive. If it moves in this world, it is powered by an inline four-cylinder engine of the same exact specifications. Some engines would have larger or smaller cylinder volumes than others, but all have the same arrangement, operating principle, and fuel needs.

After thinking about this for a bit more, you might reach one of two conclusions. The first is that the fuel-injected inline four is just so perfect an engine that the designers, manufacturers, and users of cars, trucks, airplanes, and farm equipment simply had no reason to try anything different. The second thought is that maybe the engine wasn’t invented around here but brought into this world in its fully developed state from someplace else.

This is where I end up thinking about DNA. Either the DNA-RNA-protein coding system was just so robust and efficient that it outperformed and overcame all other possible chemical coding systems during Earth’s earliest history—so early that no trace of these competing systems remains on the planet. Not as some feeble microbe hiding in a deep cave somewhere. Not as a tiny mite making an inconspicuous living on DNA’s droppings in the sandy desert soil or the ooze at the bottom of the deep ocean. Either that, or the coding system for all the planet’s known life forms went through its development and evolutionary stages somewhere else in the universe and blew into Earth’s early atmosphere as a microbial spore, or arrived as skin cells shed inside a visiting astronaut’s lost glove, or was seeded here with a package launched by galactic gardeners from another star system.2

The obvious answer—once you accept either premise, ultimate efficiency or astronaut’s gift—is that the DNA system itself simply can’t evolve. Once the fragile molecular chain floating in the salt brine of a tide pool stops trying to arrange itself and starts calling for the protein and lipid sequences to build a membrane around the first single-celled, prokaryotic organism, the system is locked in place. That first cell, whether it leaned toward the plant-way or the animal-way, used the DNA coding system to build its internal structures and external membrane, to regulate its operations by a cascade of enzymes, to feed itself through the breakdown of carbon compounds and the buildup of the energy molecule adenosine triphosphate (ATP) at its cell membrane (a prokaryote has no mitochondria; those organelles came later, with the eukaryotes), and to conduct all the other processes to which the cell had become accustomed. Once the living organism was dependent on using this coding system to process the amino acids it needed to build proteins, and then to build those same proteins over and over again as the cell grew and expanded, its fate was sealed.

The DNA code—the sequence of its base pairs—might be changed, or mutated, either by chemical challenges or by radiation effects from the external environment. Change the letters of the code, and it will—sometimes, but not always, depending on the letter’s position in the three-base codon—call for a different amino acid and so create a different protein. The new protein might be slightly different in structure and function from what the code called for before, or it might be very different. That is how evolution works: accidents to the DNA sequence create changes in proteins that either hurt the cell inheriting the new code, or that have no present effect but allow this cell to prosper amongst its sisters when the environment changes—as the environment continually does—or, occasionally, that improve the cell’s functioning right away in the present environment.
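The mutation mechanics just described are easy to sketch in a few lines of Python. The nine-base “gene” below is invented purely for illustration, but the codon table is the real standard genetic code, built here from NCBI’s compact 64-letter translation string:

```python
from itertools import product

# Build the standard genetic code from NCBI's compact translation string.
# Codons are enumerated with the first base varying slowest, bases in
# T, C, A, G order; '*' marks a stop codon.
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

def translate(dna):
    """Translate a DNA coding strand, three bases at a time."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3))

def point_mutation(dna, pos, new_base):
    """Return the sequence with a single base swapped at `pos`."""
    return dna[:pos] + new_base + dna[pos + 1:]

gene = "ATGGCTGAA"                               # Met-Ala-Glu
print(translate(gene))                           # -> MAE
# A third-position change is often silent:
print(translate(point_mutation(gene, 5, "A")))   # GCT -> GCA, still Ala: MAE
# A first-position change usually swaps the amino acid:
print(translate(point_mutation(gene, 3, "T")))   # GCT -> TCT, Ala -> Ser: MSE
```

Run against longer sequences, the same table shows why so many random hits land softly: a fair share of substitutions change the protein not at all.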

The code itself is resilient, because many of the sixty-four possible combinations of four bases in a three-base reading frame call for the same amino acid, and the third base in the codon can usually be changed without effect—which is why it’s called the “wobble.” But also, most proteins are big enough and complex enough—with enough amino acids chained together—that changing out one or two amino acids in their makeup has little effect on structure or function. And then, most protein changes are neither beneficial nor lethal to the organism right away; instead, they hang around and make themselves felt when the environment changes, at which point they either benefit or kill off one set of genetic inheritances over a competing sister line with a different inheritance.
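That redundancy can be verified by brute force. A minimal sketch over the standard codon table counts how many of the sixteen two-base codon families tolerate any third base at all, and confirms that sixty-four codons boil down to just twenty amino acids:

```python
from itertools import product

# Standard genetic code from NCBI's compact translation string
# (bases in T, C, A, G order; '*' marks a stop codon).
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

# How many two-base codon families are fully "wobble"-tolerant, i.e.
# give the same amino acid no matter which third base follows?
fourfold = sum(
    1
    for b1, b2 in product(BASES, repeat=2)
    if len({CODE[b1 + b2 + b3] for b3 in BASES}) == 1
)
print(f"{fourfold} of 16 codon families are fully third-base degenerate")
# -> 8 of 16 (that is, 32 of the 64 codons): TCx, CTx, CCx, CGx,
#    ACx, GTx, GCx, GGx

# And only twenty amino acids (plus stop) come out of sixty-four codons:
print(len(set(CODE.values()) - {"*"}))   # -> 20
```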

The whole system is slippery and wobbly in its effects, in the exact sequence of DNA and RNA bases, the choices among amino acids, and the production of proteins. But this is like saying that a flatbed printing press can produce many different documents, based on how the lines of monotype letters are arranged in its iron frame. To create all those different documents, however, the press always uses a predetermined alphabet of type blocks, sets them up in the same framework, inks them the same way every time, lays the paper on them in the same place, and applies the same amount of pressure with the platen. The coding changes all over the place, but the coding system remains the same.

If the DNA-RNA-protein system could evolve and change, that would create chaos within the cell—wouldn’t it? If a new fifth purine or pyrimidine base were added to the existing four, it would scramble the DNA sequence. First, because it would have no complementary base to pair with, as A always pairs with T, and C pairs with G. A fifth base—say, the purine xanthine (X)—would just sit there filling a hole, like the empty socket in a jaw that’s missing a tooth. Having nothing to pair with, the new base would scramble the code, much as the upper tooth over an empty socket has no way to provide bite pressure. Second, if somehow two bases could be added and paired up at the same time—matching that X with, say, the pyrimidine orotic acid (O)—their popping up together in the sequence would still scramble the code. Even if the new bases could be recognized and transcribed into messenger RNA, the existing ribosome in the cell body would have no way to translate either of them into one of the possible amino-acid choices for the next position in the developing protein strand. And if somehow the new X and O bases were added to the existing code and intended to call for some new amino acid—beyond the twenty that now make up all microbial, animal, plant, and human proteins—that would simply create another toothless gap, because the cell’s internal processes are not yet geared to manufacture, collect, or supply this new amino acid in any quantity.
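The toothless-gap argument can be put in toy form: a code table built only for the four canonical bases simply has no entry for any codon carrying a hypothetical fifth base, here labeled X, so the translation machinery would have nothing to map it to:

```python
from itertools import product

# Standard genetic code, built for the four canonical bases only
# (NCBI's compact translation string; '*' marks a stop codon).
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

# A codon carrying a hypothetical fifth base "X" has no assignment at all:
print(CODE.get("AXG", "no assignment"))   # -> no assignment
print(CODE.get("ATG", "no assignment"))   # -> M (the normal start codon)
```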

And all this is just to consider the evolution of the DNA-RNA-protein system inside a single prokaryotic cell. Such cells reproduce by continually growing all their contents and expanding to the point of rupture, at which time they replicate their DNA strands, divide and haul off the resulting new chromosomes to opposite ends of the cell body, pinch off the cell membrane in the middle, split into two new cells, and trot on. If the existing parent cell had somehow survived the chaos of introducing at least two new bases, transcribing them successfully into messenger RNA, happening to have the right kinds of new amino acids on hand, and then using the new protein in a constructive manner … then no problem. The two daughter cells produced by the split would inherit this newly evolved DNA-RNA-protein coding system and continue to function with it.

But in the eukaryotic domain, whose cells contain their DNA in a separate nucleus, most reproduction is by sexual joining.3 Two organisms come together, usually by one contributing an egg and the other fertilizing it with sperm, in order to create a new and unique individual. That individual differs genetically from either parent, and so sexual reproduction increases the amount of genetic variation—and thus the possibility for new combinations of mutations, more changes, more adaptations—in the species. But sexual reproduction puts a powerful limit on the evolution of the DNA coding system. An individual who might somehow evolve in his or her germline a new set of X-O base pairs, a new corresponding messenger RNA sequence, a modified ribosome to translate the new code, and a new and unusual set of amino acids to be used by it … would then be a genetic freak. To reproduce and pass all this newness along to the next generation, she or he would have to meet up with a breeding partner who had similar equipment. Chromosomes in sexually reproducing species come in pairs, one from the mother aligning with one from the father. Unless the individual with the newly evolved DNA could meet someone with the same evolved system, the breeding line would die out. The evolved system would disappear in the first, nonexistent generation.

Or would it? If the evolved DNA was in a male, it would probably disappear, because the sperm provides nothing but raw coding to the next generation. But if the altered individual was female, and her egg contained the mechanisms for the novel transcription and translation—appropriate RNA, ribosome, and amino-acid processes—then the offspring might survive. It would make the usual proteins from the traditional DNA chromosome pairs supplied by both the mother and the father, and it would make new proteins with the X-O-contaminated chromosomes and adapted cellular machinery supplied by the mother. Over time, and with enough generations—probably passing down the female side at first, like the mitochondria in the mother’s egg, because of all that cellular machinery—the new DNA system might spread through the population of both females and males. In fact, it probably would spread if it conferred advantages of more flexibility, more adaptability, more robustness. Eventually, certain species that had an improved six-base DNA, perhaps in a larger, four-base reading frame, and calling on more than twenty amino acids to create novel proteins, would appear in generations that could be traced back to the evolutionary split. Eventually, the older style of DNA with just four bases in a three-base reading frame might disappear in all the different animals or plants that evolved from that revolutionary ancestor. As a result, we might see two separate populations differing in their fundamental DNA system.

Such a systemic evolution would not be easy. It might first appear as a byproduct: one gene on a fragmentary chromosome, off to one side in the cell body or in the nucleus, making its own special proteins, and not interfering with the regular business of the cell. It would have its own RNA. And the ribosome out in the cell body, being a highly adaptable structure, might quickly evolve to make use of these new messenger RNAs with their strange coding. The new system might start out sex-linked to the female line, as certain genes are now linked to the male line’s Y chromosome. If a six-base DNA—or any other systemic variant—had any greater adaptive power or offered more evolutionary advantage to a cell line, it might certainly develop out of the existing four-base system. And some of its daughter cells might not be so chaotically disrupted that the old system would out-compete them in every environment. A hybridized cell, using both DNA systems at first, but perhaps eventually singling up on the newer model, could survive somewhere, in some environment, someplace on Earth.

With a little imagination, it could happen. But it didn’t. We live in a world without two-stroke engines, without two- or six- or eight-cylinder engines, and with no trace of a steam engine or a carburetor in our developmental history. Everywhere we look it’s just fuel-injected, four-cylinder, inline engines and always has been. And I still wonder why.

1. See The God Molecule from May 28, 2017.

2. The third alternative is that the DNA-RNA-protein coding system was thought up and then cooked up by a genius god with a PhD in molecular biology. But as soon as you start allowing for the supernatural, then all sorts of “just-so” stories become possible and the whole world is simply a giant miracle.

3. Once you get beyond the single-celled eukaryote variants such as the algae, yeasts, and protozoa.

Sunday, July 9, 2017

Causes of Civil War

Anymore, I’m keeping a clock inside my head, like the countdown-to-midnight clocks once published to track certain predictable catastrophes, like the next nuclear war. Mine is weighing the chances of a second civil war in America. I wrote about this in a recent novel, Coming of Age, where—among many other story lines—the national debt makes this country vulnerable to foreign manipulation and initiates a split between the largely urbanized coastal states and the more rural inland states.

Most people consider even thinking about another civil war to be the sign of an unbalanced mental or emotional condition. For me, such a war is just another future hazard. Many countries have had civil wars when their political differences reached the irreconcilable stage. Most recently, these have been countries under attack by Marxist revolutionaries and leftist rebels: Korea, Vietnam, Cuba, Guatemala, Nicaragua, Cambodia, Venezuela, Bolivia, Colombia, and myriad African hotspots like Nigeria, the Congo, and South Sudan. In the Middle East, the wars have more recently been between the secular governments installed after the two world wars and the religious fundamentalists, but the contention is still between those who want an open society based on personal freedom and those who want it closed and based on rigid codes of moral or political conduct.

And even long-established countries that today we think of as enlightened and stable had their periods of civil war. England had its own war against the monarchy in the 17th century. France had its revolution against the aristocracy in the 18th century. And America—not counting the colonial revolt against English rule—had her crisis and convulsion in the middle of the 19th century. Russia fell apart under pressure from leftist revolutionaries and monarchical incompetence in the middle of World War I, went through a period of civil war, and emerged as a Communist regime. Germany fell apart in the 1920s as the result of losing that world war, went through a period of hyperinflation and street thuggery, and emerged as a National-Socialist dictatorship. China—with help from a Japanese invasion before and during World War II—fell apart into feuding, warlord-dominated enclaves and emerged as the People’s Republic in 1949.

You might think armed conflict over political or religious issues can’t happen in this country again, because we have a … a what? A document called the Constitution that has endured for 227 years now and is the model for good government around the world? A huge military armed with nuclear weapons that is, by design and by decree, politically neutral and subservient to civil authority? A built-in mechanism for regime change enshrined in popular elections held every two and four years? All of this makes us special and in some cases unique in the world. It does not, however, render us invulnerable to irreconcilable differences that cannot be healed by the ballot box and will not submit to long-standing social and military traditions.

Documents, traditions, and laws are effective only so long as the majority of people hold them to be inviolable and put them above personal advantage and political opinion. History is full of carved idols, tablets of stone and bronze, and inherited traditions that became honored only by rote and with the lips but were ignored in everyday practice and with the heart. Ancient Rome went from being a democratic republic to an imperial dictatorship in the span of two generations by just such a hollowing out of her traditions. Rome’s period of civil war was a contest between powerful politicians who fielded essentially their own private armies. All through it and the dictatorship that followed, the country still maintained the form of electing its politicians and military leaders, but the process was controlled and the outcome inevitable. Even the Soviet Union had its popular elections, but with the sole candidate nominated by the local soviets with guidance from the Communist Party. Even the Islamic Republic of Iran votes—but only for candidates already approved by the theocracy.

In the vast majority of the more recent civil wars, the dispute was not about some single social or economic issue—like slavery in the American Civil War, or economic collapse in my Coming of Age books—but about the ongoing nature of society itself, the principles under which people should be governed, and—in the case of the revolutionary insurgencies—who should exercise those principles.1 Even the religiously tinged uprisings in the Middle East—and now in parts of Europe and Asia—are not about doctrinal issues and matters of faith so much as about imposing Sharia law and Islamic culture on countries that have recently adopted—or, in Europe, have long practiced—Western-style, secular democratic government, free market economics, and liberal social policies.

In some cases, the war—that is, actual military hostilities—comes only after some defining action and not as a lead-up to it. In the American Civil War and in the wars between North and South Korea or North and South Vietnam, the separation of one part of the country had already occurred, whether by secession or through international agreement. In the Russian Revolution, the Bolsheviks had already taken power in the capital during the October Revolution and forced the royalists and the remaining moderates to retreat into the countryside or to emigrate. Sometimes, however, the war is the deciding factor in regime change, as in the case of the civil wars in Spain in the 1930s and Cambodia in the 1970s.

Which way will the United States go in the early 21st century—if we must go to war at all?

Although my novel Coming of Age portrayed a split between largely contiguous sections of the country—the urban, progressive coasts versus the rural, traditionalist interior—I don’t think that model holds in today’s political situation. We saw from the breakdown of voting patterns in the 2016 national election, by county rather than by state, that the sentiments between left and right are far more distributed. Most of the dense urban counties went Democratic, while the less populated rural counties—but holding an impressive amount of geographic territory—went Republican. California, for example, is staunchly progressive in the urban centers of San Francisco, Los Angeles, and San Diego, where most of the population lives, but also strongly traditionalist in its rural counties, which encompass most of the land area. If California ever decided to secede from the Union—as some are seriously promoting—either as part of a new federation with other progressive-dominated states like Oregon and Washington, or as its own country, it would quickly lose the Central Valley and the Foothills through their own act of secession. Indeed, the far northern counties of California and the southeastern counties in Oregon are already agitating—and have been doing so since 1941—to form a new state called “Jefferson.”

In the hardening controversy between progressives and conservatives—where reasonable discussion and polite disagreement have already given way to marches, occasional riots, and now to political shootings—the solution won’t be anything as simple as a resolution to take one part of the country out of the Union and form a new country with either free-market capitalism or bureaucratic socialism as its economic model. But in any new secessionist country, under either model, the government and its politicians would probably still consider themselves to be a democracy, and they might adopt some form of the U.S. Constitution as their founding document. However, the rules and practices of that democracy would likely change from what we have now. A progressive state would probably adopt a larger, more intrusive federal bureaucracy, give less authority to a smaller popular assembly, and seek more open and contextual adherence to that new constitution—i.e., treating it as a “living document.” A more conservative state would intentionally create a smaller standing government, give more rulemaking power to its congress, and adopt a more strictly “originalist” interpretation of its constitution.

But the geographic lines and the regional sentiment to support such a nicely defined state-by-state or regional split simply don’t exist. No, I believe we have progressives and conservatives living too close together, as in California. Or in Upstate New York versus New York City. Or in any other urban-rural split you could name. We are more like the intermixing of Hindu and Muslim in the British Raj before its partition into the states of India and Pakistan. And that means the next American civil war—if it ever comes, if some reconciliation doesn’t take place soon—will be more like Spain’s or Cambodia’s. More neighbor against neighbor, cities versus the suburbs and rural counties, more like guerrilla and urban warfare.

Whether the U.S. military could keep out of such a conflict is an open question. All of our officers have taken oaths to “support and defend the Constitution of the United States against all enemies, foreign and domestic.” Most soldiers and serving professionals—and since the end of the Selective Service draft, we have built a professional military based on self-selected, volunteer service—are traditionalists who see themselves as upholding the values of the country as a whole rather than the privileges of a politicized bureaucracy in any current government. If the party in power is openly contemptuous of America, its history, its traditions, and Western civilization in general, that is going to be a hard oath to keep.

Of course, a country engaged in urban warfare could not survive long. In short order—no more than a couple of years, if we go by history elsewhere—one side would dominate and the other give up. Otherwise, we would eventually see a flow of forces that moves people of similar loyalties and opinions into geographical refuges and strongholds. Such regions might eventually become the basis for new countries that coexist side by side, like North and South Korea. But the bet is still that one side will quickly dominate, as in Franco’s Spain and Mao’s China. And the risk in today’s world is that, while civil chaos exists, foreign intervention and opportunism might take the country down. With intercontinental ballistic missiles and other weapons of force projection, the two oceans guarding our borders and our friendly neighbors to the north and south will no longer protect us.

I hope we can avoid this. Such a war would mean large numbers of military and civilian dead, ten times as many injured, years of civil disruption, billions of dollars in destroyed infrastructure and property, trillions in lost personal and public wealth and lost productivity. War is the ultimate leveler. But it seems to be the only way two groups of human beings can settle their long-held, irreconcilable differences without possibility of deception. Oaths can be renounced. Treaties can be broken. Laws can be ignored or reinterpreted. Extralegal actors—rioters and assassins, brigands and pirates—can be encouraged. But once you have beaten an enemy to the point at which he cannot lift his arms to hold a weapon, once you have decimated his population, razed his cities, and salted his lands—or once you are put into this form of submission yourself—then you can pretty much call the issue settled and start working on the peace terms.

I don’t know what the future will bring—and I say that as a science-fiction writer whose business is to foresee and interpret the future. But I know that somewhere a clock is ticking.

1. When Lenin came back to Russia, via a sealed train through Germany, he was aghast to find his old revolutionary cadres shouting, “All power to the soviets!” These were the workers’ and soldiers’ councils—the meaning of the word “soviet”—that had sprung up in Moscow and Saint Petersburg during the revolution. “Do not cry ‘all power to the soviets,’ ” he chided them, “until you have control of the soviets.”

Sunday, July 2, 2017

The Science-Fiction Mindset

One of the beta readers for my latest novel, The House at the Crossroads, commented that none of the characters ever seems to get hurt or angry. I thought about this during the editing phase but could see no reason to change anything. The characters’ responses to their life situations, to their frustrations, and even to outright enemy action all seemed appropriate to me. Then I realized that this commenter was not a regular reader of science fiction. And that, for all its historical trappings, is the essence of this novel.

Maybe it’s just me and the way I react to things. When someone challenges me, says something hurtful, or tricks or betrays me—which doesn’t happen often, but sometimes does—I probably do feel hurt and anger. But that’s a reaction occurring as a residual effect, usually in thinking about the situation after the act. In the moment, my conscious mind is busy trying to figure out the basis of the challenge, the reason for the other person’s scorn or belittlement, or a tactical response to the trick or betrayal. In other words, my response is to act first and moan about it later.

Maybe it’s just a lack of personal introspection. The way I was brought up, personal feelings were not all that important. My parents were practical, technically minded people.1 Like most of their generation, they had gone through the Depression and World War II, where making do with what you had and then putting aside your personal preferences in order to do your duty and get the job done was a national characteristic. Like most of my own generation, I regularly heard warnings that began with “If you think you’re hurting now …” and “If that’s the worst thing you ever have to do …”

These are also characteristics I admire and think should be emulated in fiction and in real life: emotional resilience, mental resourcefulness, physical bravery, dependability, and responsibility. I admire people who can face up to their situation, however painful, and work to rectify it—rather than brooding on their hurts and the wrongs done to them. And I believe this is a common characteristic of the fictional people portrayed in most science fiction—at least in the books produced in the decades immediately following the last world war. There the characters don’t waste time feeling hurt, and for them anger is a spur to action. Don’t get mad, get moving—and then get even.

Faced with a crisis, everyone confronts a choice. You can collapse inward or focus outward. You can curl up inside your shell, examine your feelings, and wait for someone else—or perhaps time itself—to make things better. Or you can take a stand, strike out, hit back, and keep fighting, dodging, and weaving until either the situation changes or you are dead. Perhaps, in the bigger picture, your stand and your moving fist will change nothing. Perhaps the initial blow was too great, the fire too hot, the sea too cold, and your hope of survival or the probability of your receiving reinforcement or rescue too small. But the choice is still there. You can die, face God, or enter Valhalla either curled up in a ball and whimpering or standing on your feet and spitting challenges.

This is not to say you—and the characters in science fiction from the 1950s through about the ’80s—don’t have feelings. You do register hurt and anger. But they are secondary and after the fact. The first order of business is to address the problem, get moving, fight back.

Maybe this observed tendency for the characters in my stories not to react with hurt and anger is also an artifact of the way I write. My style, developed over a number of years—and now seventeen completed novels—is what some have called “free indirect discourse.” It’s nothing that I was ever taught, except by observation and emulation of the books I have loved. In this style, the text is in the third person but the point of view is always first person. So I may be writing “He said …” “She thought …” “He observed …” but, change the grammar around, and the story would flow equally well with “I said … thought … observed …” While the grammar may be pretending to observe the character from the outside, as with the traditional “omniscient narrator” of earlier fiction, the sense of the language is observing the world through the character’s eyes.

As a writer, this is a strange mask to wear. It puts me—and, vicariously, the reader—inside the character’s head at all times. It forces certain limitations, and so a structure, on the narrative. Unlike the omniscient narrator, a passage told in this style can’t sample one character’s thoughts and feelings, perceptions and observations in one sentence or paragraph, then turn around and delve into another character’s head in the next paragraph. If I place a character on one side of a closed door, I can only speculate from the knowledge available to him what might be happening on the other side. If the character is engaged in conversation, she can only speculate about the other person’s motives, intentions, or exact feelings. The world is one-sided for the duration of the scene or chapter in which the character is engaged. This means that, if I want to show what’s on the other side of the door or sample the thoughts on the other side of the conversation, I must start a new scene, enter the head of a new character who has access to these events and thoughts, and recreate the story from that second point of view.

Why adopt this technical, clunky style? First, it puts the reader into the center of the action. Rather than observing the story as a theater audience might, watching the characters on a stage, the reader joins me in putting on the mask and seeing the world through the eyes—and the history, perceptions, prejudices, and desires—of the focus character. This is like observing the action through the camera lens in modern cinema technique. And it’s a way to color the world of the story with the character’s sense of self and particular knowledge. That can be very powerful in storytelling.

Second, indirect discourse lets the writer set up situations where one character may be lying, misunderstanding or presuming certain facts, or acting from what seem to him like perfectly reasonable motives—all of which may differ from the perceived reality of the other viewpoint characters in the story. This establishes the possibility for the plot to go in two directions, to cycle back on itself, to force the characters into sudden and perhaps unpleasant realizations—and only the writer and the reader are party to all points of view and so to a greater understanding than any one character. Shakespeare sometimes does this with whispered asides from his characters, or with dialogue conducted in secret, in a scene set apart from the other characters. Indirect discourse is a story told in personal asides. And the possibilities for revelation and resolution are even more powerful.2

In this style of writing, it is entirely possible—one might think it is almost required—to show a character’s reactions of hurt and anger to a distressing situation. Simply write “He felt angry …” or “She was hurt …” And where the direct cause of the feeling might not be obvious, the writer in indirect discourse can use those lines and explain the feeling. But I believe a higher level of storytelling involves trusting that the reader is wearing the mask fully and completely. Then the reader will understand and feel the shock, the anger, the betrayal, and the pain without the writer having to belabor the point with internal stage directions. The unspoken feelings hang in the air, like a sudden realization, revealed only by the character’s subsequent actions: getting moving, solving the problem, taking revenge.

This is a subtle way of writing, to live the story through the eyes and perceptions of one character at a time. But after a while—and with some practice at it—writing in character becomes second nature. And then the old style of the omniscient narrator, pointing out this and explaining that, dancing indiscriminately through everyone’s head at third hand, and making everyone’s feelings visibly manifest, as if they were painted on the top of their skulls and the surface of their skins … that’s what feels clunky, inept, and foolish.

1. See Son of a Mechanical Engineer from March 31, 2013, and Son of a Landscape Architect from April 7, 2013.

2. Of course, it is still possible to surprise the reader along with the other characters: the author simply does not show action through anyone who has a full perception of what is about to happen.

Sunday, June 25, 2017

About Nothing

They say it’s impossible for the human mind to think about nothing at all, but apparently we think about it a lot.1 For example, the Zen kōan, with its impossible question or illogical juxtaposition, is designed to disrupt the continuous buzzing of the active mind and send the practitioner into a relaxed, passive, receptive state. This is why meditation is so refreshing: it is like the darkness of deep sleep before the nightly pageantry of dreamtime begins.

But you don’t have to be a Zen master to contemplate emptiness. Quantum physicists attempt to understand the void of creation all the time. After all, empty space makes up the largest fraction of the universe. For example, it’s a common metaphor that, if the nucleus of an atom—any atom from hydrogen to plutonium—were blown up to the size of a baseball, then the electrons in their various energy shells surrounding it would be like flies buzzing around inside the space of a cathedral. If you could stop their motion, then you could sweep the dead electrons and the nucleus itself up with a brush and dustpan, leaving a cathedral-sized nothing behind. And if a molecule is a group of atoms linked by sharing their electrons, then molecules are simply a concatenation of cathedral-sized empty spaces. And even in the most densely packed material, like that brick of plutonium, the space between the molecules would be even emptier.
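For the technically inclined, the scale of that emptiness is easy to check with rough numbers. The sketch below uses commonly quoted figures for the proton’s charge radius and the hydrogen atom’s Bohr radius; these are illustrative assumptions, not precise measurements:

```python
# Back-of-the-envelope check on the "baseball in a cathedral" metaphor.
# All values are rough, commonly quoted figures, not precise measurements.

proton_radius_m = 0.84e-15    # charge radius of a proton (hydrogen nucleus)
bohr_radius_m = 5.29e-11      # mean radius of the hydrogen electron cloud
baseball_radius_m = 0.037     # a regulation baseball, about 7.4 cm across

scale_up = baseball_radius_m / proton_radius_m
atom_scaled_m = bohr_radius_m * scale_up

print(f"Scale factor: {scale_up:.2e}")
print(f"Atom at baseball scale: {atom_scaled_m / 1000:.1f} km in radius")
```

With these figures the scaled-up hydrogen atom comes out at roughly two kilometers in radius, so if anything the cathedral image understates the emptiness.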

Outside the densely packed substance of the Earth and its atmosphere, in interplanetary space, the most prolific form of matter is particles of the solar wind. Depending on the state of the Sun and its recurring coronal mass ejections, these particles occur at a density of between four and ten per cubic centimeter.2 And most of them are not intact atoms from the Sun’s store of hydrogen and helium but instead their ions—that is, uncoupled atomic fragments like protons and electrons. Thin soup indeed! Interstellar space, beyond the boundary of the Sun’s heliosphere, is even emptier.3

And yet, in the mind of the physicist, the empty space between atoms and particles, even the space between the planets and between the stars, is laced with the fields that are associated with dynamic particles. These fields include the electromagnetic field accompanying the photons4 flying outward from the sun and from any other release of energy, or the Higgs field accompanying the long-sought Higgs boson5 that enables all the other particles in the grand vision of quantum mechanics to have mass. So “empty” space is full of fields—where a field is, let’s say, the “potential” for things to happen if the right amounts of matter and energy are present. Empty space, then, has structure—or at least the possibility of structure—based on the presence and number of those nano-sized baseballs, dead flies, and other bits of matter or energy, on how much mass each one contains, and on how fast it’s moving.

Science fiction writers have taken this idea of the structure of empty space to absurd but imaginatively useful limits. For example, the empty space of the physical universe is envisioned as folded and crumpled in dimensions more numerous than the three—x, y, and z—coordinates we use for defining the space in which we normally move around. The idea goes that, if you could focus enough energy at a particular point in normal space, you could break through that folded structure and instantaneously arrive at another place that might be light-years away in your frame of reference but just around the corner in that multidimensional crumple.

Another useful fiction is that, with the application of enough energy, the structure of space itself can be pulled and pushed around like a lump of taffy. This gives rise to the Star Trek warp drive. Using this hypothetical propulsion system, a starship can move faster than light while not exceeding the speed of light, c, the universal speed limit, because its “warp field” collapses the space in front of the ship and expands the space behind it. This is rather like being able to walk along at a hundred miles an hour, rather than the usual human pace of four miles per hour, because the sidewalk bunches up—in the example here, at the rate of twenty-five feet for every step—before your front foot hits the ground, and then it smooths out as you lift your back foot for the next step. You walk in a bubble of collapsing and expanding space and never exceed your normal walking pace. What the warp field does to the ship itself, the passengers, and the empty spaces inside their molecules and atoms is another question.
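The arithmetic behind that sidewalk analogy can be sketched out. The stride length below is an assumed figure, and the contraction per step depends on it:

```python
# Sketch of the "bunching sidewalk" arithmetic from the walking analogy.
# The stride length is an assumed, illustrative figure.

walking_speed_mph = 4.0    # ordinary walking pace
target_speed_mph = 100.0   # the "warp" pace from the analogy
stride_ft = 2.5            # assumed length of one ordinary step

speedup = target_speed_mph / walking_speed_mph
# Each step must cover 25 strides' worth of ground, so the sidewalk
# must contract by the extra 24 strides in front of your foot:
bunch_per_step_ft = (speedup - 1) * stride_ft

print(f"Speedup: {speedup:.0f}x")
print(f"Sidewalk contraction per step: {bunch_per_step_ft:.0f} feet")
```

With a 2.5-foot stride, the 25-fold speedup works out to about sixty feet of contraction per step; the exact figure shifts with the stride you assume, but the principle is the same either way: you never exceed your own walking pace.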

Some theoretical physicists, taking their ideas from the pixelation of a digital image or an LED television screen, propose that empty space is actually just a field of unfilled holes waiting to be occupied by matter and energy. In this view, space is like a giant honeycomb and, rather than moving through it haphazardly, particles and objects simply transition from one invisible cell to the next, blinking into and out of existence in an orderly fashion. For me, that’s a great mind game, but it doesn’t tell you more about the rules behind matter and energy than simply imagining particles and their associated waves flying through empty space.

Finally, because the movements of stars in the spiral galaxies that we can observe do not seem to match the masses and corresponding gravitational fields of those galaxies,6 physicists believe the universe has an unseen component called “dark matter.” This is not only matter we cannot see, but also matter we cannot detect with any of our instruments because it doesn’t interact with the atoms, energies, and fields—except for gravity—that compose the universe we live in. Based on the stellar movements we can observe,7 physicists think that “normal” or “baryonic” matter—that is, particles with known masses like protons and neutrons, the stuff we’re made of—composes only about five percent of the universe, while this dark matter makes up approximately twenty-seven percent.

It gets worse. The galaxies we can see are moving away from each other—and not just moving but accelerating, moving faster and faster—rather than collapsing inward under the gravity of all the matter we can see and detect, plus any contribution from the mass of all that dark matter. Since the outward fling imparted by the universe’s supposed origin in the Big Bang would be at a steady velocity—or even gradually decelerating, as gravity began to take over—something else must be pushing the galaxies apart. Again, whatever this “something” might be is invisible to our senses and undetectable by our instruments, and so it is called “dark energy.” Based on the observed acceleration of the galaxies, this energy is thought to constitute approximately sixty-eight percent of the matter and energy in the visible universe.

And we haven’t a clue about the nature of either dark matter or dark energy. Physicists attribute the former to objects called WIMPs—weakly interacting massive particles—and MACHOs—massive astrophysical compact halo objects. These are clever names that cloak a bit of an idea but essentially translate as “I don’t know.” And dark energy is sometimes attributed to “vacuum energy,” which gives some structure or property to the empty space between those atomic baseballs and dead flies. Some theories propose that this energy comes from virtual pairs of particles—one of matter, the other antimatter—that randomly pop into existence in empty space and immediately annihilate each other without leaving behind any visible or audible “pop.” So the whole action is invisible to us. The amount of vacuum energy or the number of virtual-pair annihilations can be adjusted to account for the universe’s dark energy requirement. But hey, when you’re summoning pixies or counting angels dancing on pinheads, any number will suffice.8

So, while we can debate whether a glass is half-full or half-empty, we can also fill up that empty place with all sorts of imaginative particles, fields, and structures. For some of us, all this “nothing” seems to be our favorite subject.

1. You knew this one was going to be weird, right?

2. When I write “cubic centimeter,” think of a sugar cube—back in the days when sugar came in little cubes in a box that you poured into a bowl, instead of measured packets of white powder that is usually not real sugar.

3. What a concept is “emptier”! More empty than empty. Perhaps the construction should be “less filled up”—until we get to the something that is really, totally nothing.

4. I don’t count photons among the particles in the solar wind because the photon only has apparent mass—and so physical existence—because it’s traveling at the speed of light. If you stop it in its tracks, it transfers that energy into something else and simply disappears. Physics is complicated stuff.

5. See “What exactly is the Higgs boson? Have physicists proved that it really exists?” from Scientific American.

6. From the vantage point of Earth, all we can see are the stars in other galaxies. We know that they must also contain an amount of nonluminous matter like planets, asteroids, comets, and loose dust and gases. But since those quantities in our own local neighborhood are such a tiny fraction of the mass of the Sun itself, we discount them in computing the mass of any galaxy.

7. Based on the masses we can see, we would expect the stars closer to the center of the galaxy to move faster than those out on the rim, like wood chips circling inside a tornado or whirlpool. Instead, the stars appear to move in a relatively fixed pattern, as if they were painted on a spinning disk. To achieve this effect, you would need more mass in the system than you can account for by the stars we can see.

8. See also Three Things We Don’t Know About Physics (I) from December 30, 2012, and (II) from January 6, 2013.

Sunday, June 18, 2017

Iambic Life and Trochaic Life

Poetry in the English language seems to settle—when it settles down at all, given the modern distaste for rhyme and meter—into a series of mostly two-beat measures, like a continuous handclap: dee-DAH, dee-DAH. Or sometimes DAH-dee, DAH-dee. Kind of like a heartbeat: lub-DUB, lub-DUB.1

Compare the stressed and unstressed syllables in two pieces of poetry. The first is familiar from William Shakespeare’s Hamlet, written, like most of his plays, in iambic pentameter:

To be, or not to be? That is the question—
Whether ’tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles,
And, by opposing, end them? To die, to sleep—
No more—and by a sleep to say we end
The heartache and the thousand natural shocks
That flesh is heir to—’tis a consummation
Devoutly to be wished! To die, to sleep.
To sleep, perchance to dream—ay, there’s the rub,
For in that sleep of death what dreams may come
When we have shuffled off this mortal coil,
Must give us pause. There’s the respect
That makes calamity of so long life.

Five measures to the line, and the second syllable in each measure stressed.2

Now read a piece from Rudyard Kipling’s The Explorer, written in trochaic octameter:

“There’s no sense in going further—it’s the edge of cultivation,”
So they said, and I believed it—broke my land and sowed my crop—
Built my barns and strung my fences in the little border station
Tucked away below the foothills where the trails run out and stop:
Till a voice, as bad as Conscience, rang interminable changes
On one everlasting Whisper day and night repeated—so:
“Something hidden. Go and find it. Go and look behind the Ranges—
“Something lost behind the Ranges. Lost and waiting for you. Go!”

Eight measures to the line, and the first syllable stressed in each measure.

The words are so chosen and placed, as if naturally occurring, that the lines can only be read in one way. Try reading them with the stresses reversed, and your tongue gets tangled up.

In the Shakespeare, you have to place the stress and the importance on the second syllable:

To BE or NOT to BE that IS the QUES-tion

For IN that SLEEP of DEATH what DREAMS may COME

In fact, you could drop out the unstressed words and you would still have the sense of the verse surviving in telegraphic form, almost like a text message.3

In the Kipling, the words and structure force you to pay attention to the first syllable:

BUILT my BARNS and STRUNG my FENC-es IN the LIT-tle BORD-er STA-tion

SOME-thing HID-den. GO and FIND it. GO and LOOK be-HIND the RANG-es—

Here again, the stressed words and syllables carry the sense of the poem. And the stress itself conveys the urgency of the whisper: “Go and find. Go and look.”
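The two stress patterns are regular enough to check mechanically. As a small sketch for the technically minded, a line reduced by hand to a string of unstressed (0) and stressed (1) syllables can be classified by which two-beat foot it repeats; the scansions below are hand-done, since automatic syllable-stress detection is a hard problem:

```python
# Classify a hand-scanned line of verse as iambic or trochaic.
# 0 = unstressed syllable, 1 = stressed; the scansions are done by hand.

def classify_meter(stresses):
    """Return 'iambic', 'trochaic', or 'irregular' for a list of 0/1 stresses."""
    if all(s == i % 2 for i, s in enumerate(stresses)):
        return "iambic"    # repeating unstressed-STRESSED (0,1) feet
    if all(s == (i + 1) % 2 for i, s in enumerate(stresses)):
        return "trochaic"  # repeating STRESSED-unstressed (1,0) feet
    return "irregular"

# "For IN that SLEEP of DEATH what DREAMS may COME" -- five iambs
shakespeare = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
# "SOME-thing HID-den. GO and FIND it." -- four trochees
kipling = [1, 0, 1, 0, 1, 0, 1, 0]

print(classify_meter(shakespeare))  # iambic
print(classify_meter(kipling))      # trochaic
```

The point of the exercise is only that the two meters are mirror images: the same syllables, with the handclap shifted by one beat.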

It makes me think that these two opposite forms of reading—stress first versus stress second—almost define two separate approaches to life.

In the Kipling style, life is full of trochees, with that impetuous initial stress that leaves the second almost unvoiced. The tone is imperative, commanding, insistent, thrusting, and sure of itself. It is the voice of a British serving officer. It is the voice that drives men into battle or sends them overseas to seek their fortunes.

In the Shakespeare style, life is made up of iambs, with that hesitant, unstressed first syllable and the stress on the second firming up the sense of the matter. The tone is reflective, contemplative, associative, conjoined with lots of “ands,” “fors,” and “ifs,” and yet ultimately resolute. It is the voice of a mature person weighing consequences—and not just in young Prince Hamlet considering suicide but in all of Shakespeare’s plays. It is the voice that invites us inside the character’s thinking.

When I think back on various people I have known, both in life and in literature—for yes, we readers have invisible friends—I believe many would line up under one banner or the other, the iambic types and the trochaic types.

The trochees are direct and obvious in their life and attitudes: slam-dunk, there-you-are, and sometimes in-your-face characters. For them, life is simple and unquestioned. Hit the ground running. Take the shot. Make your move. Accept the facts as they are presented. This might mean they sometimes jump to conclusions and precipitate hostilities that might better be avoided. But so be it. They also tend to win gun battles and, through their decisiveness and audacity, get the biggest piece of cake.

The iambs are more subtle and reasonable in their approaches: on-second-thought, but-what-about?, and sometimes oh-let’s-not! characters. For them, life is complex and full of questions. Pick and choose. Consider all the angles. Try to understand. Examine the facts before accepting them. This might mean they sometimes miss out on the best items in a holiday sale and fail to stand up to bullies. But they also win chess games by seeing three or four moves ahead and, through their thoughtful and sensitive natures, savor the piece of cake they do finally get.

Which personality is better? That depends on the circumstances. A trochee makes a good soldier and a competent administrator of complex systems that resolve into obvious patterns, like running a railroad or an electric-power grid. These are activities where the hesitations and second thoughts of an iamb can cause no end of trouble. But you don’t want a trochee for a military strategist or judge in a court of law. Those are activities where critical examination, questions, and playing three or four moves out are more reliable. Which makes the better and more lasting friend? That depends on whether your taste runs to playing football with its rough-and-tumble, block and tackle, or fencing with its subtle weave of parries and ripostes while respecting an opponent’s personal space. One kind is good at playing poker, the other tends to play bridge.

Do these opposites attract? In this case, I think not. The tendency for trochees to pounce and for iambs to react would lead the pair to get on each other’s nerves. The iamb would end up nursing hurts that the trochee might never perceive. Or the iamb would get back at the trochee in ways the latter would never see coming.

Are men trochaic and women iambic? Only in your dreams. I know women who are deadly quick and not at all subtle—and men who need to walk three times around the house before opening a drawer. These are not masculine and feminine characteristics played against type. They are basic approaches to life belonging to the species H. sapiens without gender distinction.

In The Iliad, Achilles and Agamemnon are blunt trochees, while Hector and Odysseus are subtle iambs. Anna Karenina and her impetuous cavalry officer, Count Vronsky, are a pair of trochees, while Stepan Oblonsky and his wife Dolly are, for all their frivolousness, more iambic. Ellen Ripley, in the Alien series, is an iamb despite her tough-gal heroism, because her basic attitude is stop-wait-and-look, and she sees right through Lieutenant Gorman or the Company’s devious Carter Burke. In the Dune series, the Fremen, despite their reputation as fierce fighters, “were supreme in that quality the ancients called ‘spannungsbogen’—which is the self-imposed delay between desire for a thing and the act of reaching out to grasp that thing.” That is an iambic trait: wait and see. Americans are generally considered to be trochaic, while Europeans, the Chinese, and Japanese are thought to be more iambic.

Of course, human beings in specific cases, taken one by one, are far too complex to exist under such a crude dichotomy of characteristics. That is why most of my examples above come from literature, where the author emphasizes one approach, one mindset or trait, to prove a point. And yet, in real life, some people still consistently hit that first syllable hard, while others pause and reflect on that second syllable. Dah-dee, or dee-dah, the beat of life goes on.

1. This may have something to do with the fact that English, as an amalgam language, drew on Celtic, Norse, and Germanic roots that were formalized and spread by bards and poets reciting their verses in the lord’s banquet hall, rather than by written records.

2. There are already exceptions to this formula, of course. For example, the first four lines demand that the final words—“question,” “suffer,” “fortune,” “troubles”—be partially swallowed on the second syllable in order to maintain the beat. Well, nobody’s perfect—and a perfectly restrictive meter would eventually become boring, like riding a rocking horse.

3. And now that I think of it, the actors in a noisy Elizabethan theater—where the patrons and groundlings are calling to one another and chatting among themselves—might have to shout their lines. Only the stressed words would cut through the noise, and they would have to carry the sense of the play.

Sunday, June 11, 2017

Sense and Imagination

All art forms bear a certain similarity to each other. For example, they invite creativity: they allow for the expression of individual and personal tastes and interests; they celebrate the introduction of new constructions or combinations of existing ideas and forms; and they expect the artist to explore new methods, stretch current standards, and try novel perspectives and viewpoints. An artist working in any format is presumed to differ in substance and style from every other artist and to explore new ways of interpreting his or her art.

Almost all art forms appeal directly to the senses. For painters and photographers, it’s the visual sense associated with color, proportion, and perspective. For musicians, it’s the auditory sense associated with timbre, harmony, and tempo. For perfumers, it’s smell and the associated scents of flowers, organic pheromones, and other chemical-based memories. For chefs, it’s taste and texture, associated with flavors, scents, and the visuals of presentation.

Writing is different, however. In reading a written piece, the image of the type on the page or the feel of the book’s binding is a minor sensory note that is not particularly related to the story. Writing appeals not to the senses but directly to the intellect and the imagination. That’s one reason why books as bound paper, electrons on a screen, or a voice reciting from a loudspeaker can equally carry the content of the work.

Other arts might also tell a story. Tchaikovsky’s ballet Sleeping Beauty presents the recreated visuals, with associated melodies and harmonies, of the classic fairytale. But one can watch the dance for just those graceful movements, or listen to the music for just those blended tones and tempos, and enjoy the ballet without knowing the story. Similarly, one doesn’t have to know the story of Peter and the Wolf or Lieutenant Kije to savor Prokofiev’s works. Indeed, a Tchaikovsky or Prokofiev symphony has no story thread at all, and it’s still comprehensible and enjoyable.

Similarly, you can look at a painting by Monet or Bierstadt and learn something about the environs of Paris or the grandeur of the American West. But you can also enjoy these works just for their color and their use of light and shadow. Indeed, you can look at any abstract painting for its blend of shapes and colors, even though it has no recognizable object and may not even have a unifying idea, and still enjoy it.

When a writer tries to emulate an impressionist painter’s approach in telling a story, the reader is often left unsatisfied. That’s because most readers treat what they are encountering in the words on the page as a form of concrete reality that only differs from real life in that it is simply occurring inside their heads.1 Even a work of fiction draws on images, ideas, emotions, and dialogue that the reader can treat as if they were a form of reality.2 Vague imagery and surreal dialogue—meant to convey foggy or drug-induced impressions and half-remembered memories, without that hard-edged sense of concrete reality—usually create only uncertainty and confusion in the reader’s mind. And when a writer tries to emulate an abstract painter’s disconnected shapes and colors, abandoning story and sense for the sake of pretty words, like a Dadaist poet, the work becomes virtually unreadable. Either that, or it can only be appreciated by readers who care more for innovative and daring stylistics than they do for immersing themselves in the story.

And there, I believe, arises the power of writing over other art forms. More than painting or music, the written word requires the active participation of the reader. A gallery patron can wander from room to room, appreciating this painting, ignoring that one. A concertgoer can listen intently to the music or ignore it, letting the blend of sounds wash past his or her ears while thinking of something else. A diner can wolf down an exquisite meal without savoring its flavors or appreciating its presentation. But a reader cannot follow the thread of an article, argument, or story without focusing on the words, absorbing them, interpreting them in terms of his or her own vocabulary, knowledge, and experience, and helping the author create the logical or imaginative structure—the relationship of ideas, or the embodiment of character and plot line—inside his or her own mind.

Unlike the sensual arts, which can stay outside at the limits of our ears and eyeballs, or pass quickly over our tongues, the rhetorical and literary arts must pass through to the brain and work their magic directly on the reader’s insight and imagination. This is where the conscious mind builds its perceptions of the world. Unless this active collaboration proceeds, the words remain inert marks upon the page or sounds spoken into empty air. This need for reader collaboration creates a particular challenge for the writer.

Any artist faces a certain amount of audience resistance. Gallery patrons tend to focus on and gather around paintings that have some familiarity for them, something they can approach as they have approached it before. This is why artist retrospectives and museum exhibits of famous paintings from another era are so successful: the public already knows that it will like and understand what it sees. But the new painter, striving to present some of that individual taste or explore those stretched standards, presents even the most active and receptive viewer with a question mark. “Do I like this?” “Do I understand what the artist is doing?” And ultimately, “Do I care about this?”

Similarly, a musician trying out new rhythms and new blends of harmonics risks having the audience react at first as if they were hearing mere noise. Two hundred years ago, the public and the music critics both reacted to Beethoven’s now-beloved symphonies as discordant and a caricature of other, more familiar composers.3 This may be one reason why many 19th-century composers like Dvorak and Holst took their themes from folk songs and country dances. In many ways, because a piece of music flows across time and at first hearing cannot be stopped, studied, and analyzed the way a painting can, the audience for a new musical work has less chance of asking those probing questions about liking and understanding.

The writer’s challenge is that readers are even more selective. While a person in a museum might glance at a Dali painting, even though he or she cares nothing for whimsically impressionist art, or a radio listener might catch part of a song from a heavy-metal rock band, even though his or her tastes run to country music, a reader is much less likely to pick up a book or a magazine full of stories devoted to an unfamiliar or disliked genre. A person who avidly reads science fiction might never encounter a romance story, and vice versa. And unless the reader opens the book, focuses on the words, starts giving them attention, and follows the thread … the magic does not happen.

Even when the tastes and taints of genre fiction are not involved, as in a straightforward think piece on some popular scientific, political, or economic question, the reader’s mind may have already erected barriers based on his or her previous thinking about the subject. So, to be read at all, to even start the reader’s mind along the thread of the article’s logic or the story’s plot, the writer must create a breakthrough moment. The article must start with a claim or a question that the reader has not thought about before or that ignites new impressions jarring his or her ordered sense of the world. The story must begin with a piece of action or a mystery that draws the reader deeper into the plot and characters. And even before that, the book or magazine needs a dynamic piece of cover art or a gripping blurb to draw the reader inside to the words on the page.

Writing, in its appeal to the imagination and understanding rather than the senses, differs from the other art forms in another way as well. It’s the only form that has no raw materials and uses no instrument in its expression. The painter buys canvas by the yard and pigments by the tube. He or she prepares one canvas at a time and sells it to one buyer only. The photographer and the digital artist might do a little better, in that a pixelated image can be copied, reproduced, and sold many times to many different buyers. The musician plays an instrument or sings inside a venue once for a paying audience whose size is limited by the capacity of the club or concert hall. He or she may have the performance captured as sound waves on tape or in digital format and sold again and again. The chef creates a meal out of selected raw ingredients, working in a single kitchen space, and then sells the product at the rate of one plate to a customer.

The writer, in contrast, has no physical raw materials. Well, in the most basic form, a pen spreads ink lines across a piece of paper, and for a novel that’s a lot of ink and paper. Most writers these days use a computer, where the ink lines become typed characters that flash briefly on the screen, become stored as ASCII codes in dynamic memory or on a hard disk, and get translated into electrons traveling through wires and across the air to the reader’s screen, or become imposed in patterns of ink or toner powder on a roller and spewed out in multiple copies of printed pages. The physical form is irrelevant. Some writers even compose most of the story and dialogue in their heads before ever setting pen to paper or fingers to keyboard.4 The writer’s stock in trade is invisible, not even as tangible as the sound waves the musician or the singer produces. The “stuff” of an article or story is built wholly out of the writer’s vocabulary, his or her sense of grammar, syntax, and structure, and an act of pure imagination.

As an idea, the writer’s art form is conceived and produced, and as an idea it is received in the reader’s head. All the rest is energy and electrons. And that is the mystery of being a writer.

1. Actually, all reality occurs solely inside our heads. Our brains make up what we think of as objective reality from visual, auditory, tactile, and other cues brought in through nerves connected with our various sense organs. Yes, the “real world” does exist outside of us, but our perception and understanding of it are a construct as ephemeral—existing only in our short- and long-term memories—as any fairytale.

2. And when that seeming reality tells a story with fantastic, imaginative, or magical imagery, elements, and insights—as if the story constituted a part of the reader’s everyday world—then the pleasurable effect is heightened. At least, it is for some readers.

3. A view that I personally maintain—minus the aspects of caricature—for most of the works of Dmitri Shostakovich. But, ah! I do love his Symphony No. 10 in E minor.

4. I can’t do that, of course, but I still must have some pieces of the story, fragments of sentences and paragraphs, and the voices and partial exchanges of my characters swirling around in my head before I can sit down to write my fiction.