Sunday, February 15, 2015

Writing as a Nonlinear Exercise

Questions have been floating around among my Facebook writing friends as to whether it’s best to write for flow or to write to edit, and whether the first, second, third, or later drafts are the most valuable and where the writer should do his or her most valuable work. In this discussion, “write for flow” means to sit down and bang out the story as it comes, ignoring the niceties of word choice, spelling, and grammar; just let it flow and fix it up later. On the other hand, “write to edit” means concerning oneself with all these details on the first draft, letting one’s internal “editor voice” determine the next word or keystroke; move slowly and get it right the first time.

I have to say I’m in the camp of the “write to edit” people—but then, I’m a special case. My first three jobs out of college as an English major with an honors degree were as a book editor, first for a university press, then at a trade book house specializing in railroad histories and Californiana, and finally as a technical editor with an engineering and construction company preparing reports and proposals. I wanted to write novels, too, and I wrote one whole manuscript during that time by getting up at four o’clock in the morning, staring at the wall until the blood came, and then pushing down typewriter keys. But my day job was sitting in a chair for eight hours at a stretch—except when I got up to check the dictionary or another reference source—going over the lines of other people’s writing with a blue pencil.

An editor’s function is to be responsible—morally, legally, economically, and spiritually—for the quality of another person’s writing. The copy editor reads, understands, and evaluates every word, grammatical structure, punctuation mark, sentence construction, paragraph flow, and checkable fact. The copy editor—which is what I was—is not so much concerned with the author’s viewpoint, political stance, the manuscript’s narrative arc, or its overall marketability,1 but he or she does care that every sentence meets the canons of appropriate literary quality and that every fact stands up to external scrutiny.2 A professional copy editor will give the same good service to an author who’s a raving Nazi, a convinced Communist, or an ecstatic Evangelical.3 The editor’s first loyalty is to the text and its potential readers.

The editor functions simultaneously as the author’s “eyes behind,” picking up on and correcting those grammatical, punctuational, and factual errors and infelicities that the author may have overlooked, and as the “fresh eyes” of the first reader, exploring the text in all its possible dimensions, misinterpretations, and petty confusions. The editor corrects the obvious mistakes and asks the obvious questions that would bother the “interested and informed general reader.”

Spend eight hours a day doing all that for about ten years, and it changes you. For one thing, you know most of The Chicago Manual of Style—the bible of the publishing industry—by heart, and you can deal with issues of punctuation, capitalization, numbers, word treatment, citation, and all the other ways of making a piece of text look, read, and “feel” right simply by reflex twitches of your internal blue pencil. You also have years of experience seeing a word that does not quite fit the context, tone, or intent of a passage and instantly thinking of at least three alternates or variants. And you can untangle a confused sentence structure in your head faster than a rat can run a maze.

So … is my “editor voice” at work on the first draft? Oh, you bet! In fact, the little machine or piece of circuitry inside my brain that spits out words in order to follow the flow of my thoughts has been inoculated by the Chicago Manual virus and filters for the look and feel of good text. I tend to write in complete sentences and punctuate, capitalize, and check grammar and spelling along the way. After a lifetime of putting words into print, it’s just not that hard anymore.

The personal computer has also made this process fantastically easier. I wrote my first novel at the age of sixteen4 by doing the first draft longhand on a white, lined tablet, then typing the second draft on my grandfather’s ancient Underwood using two sheets of bond with a piece of carbon paper between them, because I had heard that authors always make two copies. I also used an erasure shield, because I was learning to type at the same time and wanted that second draft to be perfect. This experience—write it out, then type it up—taught me to be precise and economical with words and thoughts, because typing was slow and painful for me, especially working with all that carbon paper; so I learned to edit, pare down, abridge, and abbreviate as I turned my handwriting into typescript.

The computer and word processor have freed writers from the linearity of handwriting or typing out line after line as they move down the page. Instead, my writing process has become more like a wave front, rolling forward in time and space, coming up from behind the crest, and continuously realigning words with thoughts. If I find myself getting tangled up in a sentence, I can move the clauses around with click-and-drag, invert passive-voice sentence structures to active, and eliminate lazy mental constructions like “there is a [subject] that [verb] …” almost as fast as I can type. My days as a copy editor make it impossible for me to just spit out a lousy first draft and hope to improve it later.

But all of this has to do with only the words and how they will appear on the page. A deeper level of the brain controls my writing talent, and that mimics the role of the structural or story editor. If the character viewpoint in a scene, my slant on the subject matter in an article, or my understanding of the action in the novel is wrong, then my internal story editor shuts down the writing process. I know it will take more work trying to unthink, unravel, and undo the damage that a wrongheaded approach to the story or article will create in my mind than simply waiting and getting it right the first time. If I sit down at the keyboard and nothing comes—the word-generating circuitry goes strangely inert—I know it’s because I haven’t yet worked out some crucial part of the plot or answered some critical question about the character and his or her actions or intentions.5 Of course, I might also simply have been lazy and not bothered to prepare my mind, give thought ahead of time to plot, character, or action, or focus on my need for a starting point—the image, sense impression, or piece of action that I call the “downbeat.”6

My writing style allows for some vagaries, of course. I can leave the name of a minor character in unsettled form, insert a placeholder for a bit of nonstructural description, and add “[CK]” for “Check” to a fact that I’ll want to clear up later. My internal story editor knows these details will be flagged and get fixed in a later read-through. For everything else, however, I keep a window with the Merriam-Webster Unabridged open on my desktop alongside the word processor, and I keep a second browser window open to check facts or word treatments on the fly through a search engine like Google. But my first draft is usually about ninety percent of what I want the story to be.

My approach—the wave form method of writing—is chaotic, but it’s a controlled and goal-oriented sort of chaos. I think of my writing as a kind of blacksmithing: hammer on each word, sentence, paragraph here, hammer on it there, see the hot metal become straight and smooth, and make it strong through continuous testing.

As to whether the first, second, or nth draft is the best, I really don’t do drafts anymore—not in the sense of putting aside the text that was written in the last sit-down, reimagining and rewriting the scene, and hoping to improve it by a second writing. Individual drafts have been replaced in my process by spaced read-throughs of the developing text. Usually, I do one review immediately, at the end of the writing session, to catch any obvious errors. Another will come the following morning, before starting on the next scene or chapter. Then I will read through a chapter or section a few days after finishing it, when it’s had a chance to cool off in my mind and show its flaws. And I will give the whole book a final read-through before letting anyone else see it. Is that four drafts or five? And does it matter? I keep hammering on the text until it becomes bright and hard, like a good piece of steel. I don’t move on to the next chapter or section until I know that the structure I’ve already built is solid and will bear weight.

And if the steel doesn’t ring at all? Then I know I have to discard the entire story line, let my head cool off, let the pools of my subconscious become dark again, and think the story out with a fresh perspective. But that’s not another draft. It’s more like doing an entirely different book!

1. That’s the job of the acquisitions editor at a publishing house, who deals with the manuscript’s content, structure, and fitness for the house’s established distribution channels and readership. For an overview of the editing process and types of editors, among much else, see Between the Sheets: An Intimate Exchange About Writing, Editing, and Publishing, which captures an email exchange I had with an old colleague and first-time author, Kate Campbell.

2. The editor is not concerned with any kind of “universal truth.” The editor does not ponder metaphysical or philosophical mysteries. But if the author writes that the American Civil War started in 1860, or that Alfred Einstein died in 1954, then the editor gets curious, springs out of his or her chair, and looks up the matter to confirm or correct it. To fill this role adequately, an editor needs the kind of ready-reserve knowledge base that plays well on Jeopardy.
       If there is any doubt or question about a fact or a sentence’s meaning, the editor pencils a polite note to the author, asking for his or her consideration and correction at the time of manuscript review. This is the main reason that Post-it® notes were invented.

3. As I’ve sometimes said, “I don’t care if I’m editing the Devil’s own book. At least he’s going to get the spelling and grammar right.”

4. Don’t ask. It was a wretched space opera about an interstellar empire and an academic-turned-revolutionary—a character based somewhat loosely on Leon Trotsky—who managed to overthrow it. That and the next two and a half novels I wrote were just a waste of black marks on paper. Every writer has to throw away three books before he or she produces one that is worth even showing to another pair of human eyes, let alone an agent or publishing house. If you’re reading someone’s “first novel,” know that it’s actually their third or fourth attempt. Every overnight success is about ten years in the making.

5. See Working with the Subconscious from September 30, 2012.

6. See Getting Into the Zone from February 2, 2014.

Sunday, February 8, 2015

Intelligence or Consciousness?

People seem to be afraid of “artificial intelligence”1—but is it machine intelligence or machine consciousness that we fear? Because we already have examples of several kinds of intelligence.

For example, the computer program or system called “Watson” can emulate a human brain’s capability of assembling clues and storing information on a variety of levels—word association, conceptual similarity, sensory similarity—to play a mean game of Jeopardy. Watson is remarkably intelligent, but no one is claiming that the machine can think in the sense of being conscious. For another example, the artificial helper Siri in your smartphone is almost able to pass the Turing test2—if you’re willing to believe you’re talking to the proverbial “dumb blonde”—but Siri is neither particularly intelligent nor was she ever meant to be conscious.

Intelligence is a spectrum. It measures an organism’s ability to perceive, interpret, determine, and act. And this process can be graded on a curve.

Consider the amoeba. It can perceive and identify the chemical trail of a potential bacterial food source, follow it, and consume it. The amoeba doesn’t make a decision about whether or not to follow the trail. It doesn’t decide whether or not it’s hungry. The amoeba’s choice of food and the decision to hunt it down are determined solely by chemical receptors built into the organism’s cell membrane.3 The amoeba’s hunting strategy is the most basic form of stimulus-response mechanism.

You wouldn’t call an amoeba smart, except in comparison to the bacteria it hunts. Bacteria are opportunists whose survival strategy is that of flotsam: multiply like hell and hope some of your daughter cells land on a food-like substance. If they land on barren ground, they die. Or, if the substance isn’t all that food-like but has some potential for nourishment, hope that maybe some future generation will evolve to digest it. This level of intelligence gives new meaning to the term “passive aggression.”4

With multi-cellular organization came multi-tasking. This new kind of creature developed about 500 million years ago, during the Cambrian explosion, probably by diversification of cell types within colonies of single-celled organisms. With some cells taking on specialized perception roles, such as light and chemical seeking, while others took over the functions of digestion and reproduction, the organism became more efficient. It also needed an executive function, at first to communicate between these activities and ultimately to coordinate and control them.

An ant can see a leaf with its compound eyes, approach it on six functionally coordinated limbs, and cut it with hinged jaws. Moreover, the ant can evaluate a number of nearby leaves and make a selection for right size, weight, and tastiness. It’s still a question whether an ant can see or sense food and choose not to take it.5 Certainly, the ant has a built-in “hierarchy of needs,” whereby attack by a hostile species or imminent danger of, say, drowning in a raindrop will override its duty to forage. How much free will the ant has to decide “Fight first, forage later” or even, “Kick back and take the day off” is a matter of debate and subject to the human tendency to anthropomorphize other species. But it’s clear that insects can learn, remember, and communicate. Bees can find a field of flowers, remember its location, fly back to the hive, and communicate to other bees the direction and distance to this potential food supply. That’s a pretty sophisticated stimulus-response mechanism!6 These activities and capabilities are shared by many animals, human beings among them.

On the spectrum of intelligence that runs from amoebas to humans, dogs are clearly somewhere in the middle, but tending toward the human end of the spectrum. Dogs can coordinate their activities through communication and even form social relationships and bond with one another on the basis of loyalty and affection. Within these groups they develop expectations, engage in disputes about hierarchy, and then may either submit or choose to leave the pack, depending on their predetermined natures and their accustomed status as either alpha or beta individuals. In isolation, a dog can make its own decisions about liking and distaste, trust and distrust, safety and danger. Dogs raise their young through the shared responsibilities of a family subgroup: mothers nurse while fathers hunt. They can choose to alter their territorial behavior, such as by migrating with a herd of prey. And they can develop trusting relationships with other species, such as by becoming domesticated and forming a pseudo-pack with human beings. If an alien spaceship landed on a planet whose highest life-form was the wolf pack, the aliens would have to conclude that they had discovered intelligent life.

But the question of free will still remains. Can an ant or bee decide to subvert the social order and challenge the colony’s queen? Can it decide to leave the hive after a dispute or in order to find a better life? Can the insect override its instinctual—perhaps even hard-wired—drives to forage, fight invaders, or serve its hierarchical position because other members of the colony have abused it or hurt its feelings? Obviously not. But dogs, cattle, and other social animals can make these choices, although perhaps not willingly or eagerly, and usually only under strong compulsion or in response to immediate need. Humans, on the other hand, practically live in this meta-world of individual choices, personal feelings and preferences, and divided allegiances.

Now we come upon the issue of consciousness. Unlike intelligence, which seems to be a spectrum from simple stimulus-response mechanisms to complex, multi-valued reasoning, consciousness would appear to be a step function. An organism either has it or not, but its awareness may present itself in varying degrees.

If you obstruct an ant or bee in its pursuit of a leaf or flower, it will persist, repeatedly bump up against you, and try to get around you. If you keep blocking it successfully, however, the insect will eventually lose interest, turn aside, and pursue some other food source. What it will not do is take your obstruction personally, get angry, and plot revenge against you. If you cut off an insect’s limb, it will register visible distress and feel some analog of physical pain, but it won’t face the dejection of a life in reduced circumstances, deprived of the opportunities available to healthy, six-legged insects. If you kill it, the ant’s or bee’s last sensation will be darkness, with nothing of the existential crisis that death evokes in human beings.

If you frustrate or disappoint a dog, it will register anger or despair.7 If it becomes injured or sick, it not only registers pain but also demonstrates a negative emotional state that any human would recognize as depression. If a canine companion dies, the dog exhibits a sense of loss. When faced with sudden danger and perhaps the imminence of death, the dog exhibits a state we would call fear or even terror. The dog has an awareness of itself and the creatures around it. The dog is conscious of being alive and has some elemental notion of health and sickness, life and death, that an ant or bee does not register.

But is this awareness also self-awareness? It’s a commonplace that dolphins, elephants, some apes, and all human beings will recognize themselves in a mirror. If you place a mark on a dolphin or adorn it with a piece of clothing, the creature will go over to a mirror to check out how it looks. Elephants can use paint and brush to draw pictures of other elephants. These animals understand the difference between themselves and others of their kind. A dog, on the other hand, cannot comprehend a mirror. If it sees itself in reflection, it thinks it has encountered a strange new dog. So while a dog has a first level of consciousness compared to an ant or bee, it is not fully self-aware, which is the second level of consciousness possessed by dolphins, elephants, apes, and humans.8

It is this ability to consider oneself apart from all others, to reflect upon one’s own thoughts and desires, to have hopes and fears and also to think about them, to consider one’s actions and their consequences both for oneself and for one’s group, and to ponder the nature of existence that is at the core of human-scale intelligence. A human being is not just intelligent but also knows he or she is intelligent. A human naturally worries about how his or her mind, nature, opportunities, and chances compare with others, and cares about his or her place in the society or hierarchy. A human being understands relative time states like past, present, and future because the person can see him- or herself in conditions and situations that no longer persist but did once, or that have not yet arrived but toward which all current trends point. A human being is constantly self-referential, considering his or her own life and nature, while a dog is merely happy to be alive, and an ant or bee—or an amoeba—has no conception of the difference between life and any alternative.

Any computer program yet written may emulate, simulate, or even exhibit the qualities we associate with mere intelligence: perception, interpretation, decision, and initiation of action. None so far has reached the scale of internal complexity where dog-like awareness arises, let alone the self-awareness that would allow the machine to consider its own actions in the abstract and make choices based on self-perception, feelings of pride or shame, or anything like a moral stance in the universe.9 But I don’t say that this level of awareness can’t happen, and I believe it may arrive sooner than we think.

And if—or when—it does, then we will no longer be dealing with a machine. Then the question of carbon-based versus silicon-based life form will no longer apply. We will be dealing with a fellow traveler who will behold the infinite with a sense of wonder. We will be dealing with a creature much like ourselves.

1. See the last part of my blog post Hooray for Technology from January 4, 2015, discussing the meme that artificial intelligence will be detrimental to humankind.

2. The Turing test involves a human being asking or writing out any set of questions he or she can think of, passing them blindly to an unseen and unknown subject, and evaluating the subject’s answers. If the human cannot tell whether the respondent is another human being or a machine, then if it happens to be a machine, that machine might as well be—by Turing’s definition—intelligent.
       It’s a fascinating problem, but quite soon after Turing proposed the test in a 1950 paper, several people were writing computer programs like ELIZA and PARRY that could pass it with a human interlocutor, and none of the computers of the time had the capacity to actually approach human-scale thinking. None of the machines available today does, either.

3. See Protein Compass Guides Amoebas Toward Their Prey in Science Daily from October 26, 2008. Interestingly, a similar mechanism drives cells of the human immune system to track down bacterial targets.

4. But compared to a virus, the bacterium is a genius. Viruses can’t even breed or evolve until they happen to land on a host with a working genetic mechanism they can hijack. Viruses are pirate flotsam.

5. That’s a question with some people, too.

6. For more on insect intelligence, see Insect Brains and Animal Intelligence in the online resource Teaching Biology.

7. My wife tells the story of her first dog, a little poodle, and a rainy day when she was pressed for time and had to cut short the dog’s daily walk. She may even have yelled at him when he balked at getting back into the car. Upon returning home, he walked straight into her bedroom, jumped up on the bed, and pooped right in the middle of the bedspread. If that wasn’t a calculated act of revenge, I don’t know what else to call it.

8. However, a dog can be made to feel foolish. My aunt was a poodle breeder, groomer, and competitor at prestigious dog shows, including the Westminster Kennel Club. Once, to compete in a Funniest Dog contest, she clipped one of her white poodles in oddly shaped tufts and dyed them red, green, and blue with food coloring. She always insisted that dog acted depressed because it knew how foolish it looked.

9. Such as viewing humanity as an enemy and, like Skynet, “deciding our fate in a microsecond.”

Sunday, February 1, 2015

The Roots of Religious Anger

After the riotous outcry against Jyllands-Posten and the massacre at Charlie Hebdo for publishing satiric cartoons, the fatwa and death threats against Salman Rushdie for writing a speculative novel, and similar cries of death for insulting and blaspheming against Islam, one has to wonder about the nature of this belief system.

For most people in the West, religion is a private thing. It’s a matter “between a man and his maker.” To quote Elizabeth I, who inherited a bloody struggle between Protestants and Catholics that her father had unintentionally ignited, “I would not open windows into men’s souls.” Yes, the West has experienced various spasms of inquisition and pogrom. “God wills it!” has been the call for several crusades, and remains a rallying cry up to the present time. But since the Enlightenment—which appears to have been a response to growing scientific understanding, widespread literacy and the availability of printed books, and dawning notions about individuality and a man’s mind belonging to himself1—most Westerners have sent religious certainty, canonical authority, and persuasion by violence to the back seat of their social and political thinking. Religion still matters, of course, but on a more personal level, and not enough to make us disrespect—let alone kill—one another.

Because I’m a forward-thinking person, a writer of science fiction rather than historical fiction, I find it difficult to place myself in the pre-Enlightenment mindset. But I can appreciate that the followers of Islam who participate in or approve of such massacres, fatwas, and jihad in the sense of “religious war” rather than “personal struggle” take their religion to be a statement of political belief and ethnic, or even tribal, unity. Doubt, perspective, and compromise are not permitted in this belief system and never openly entertained. Opposing views are never given the respect inherent in the realization that they might just possibly be right. Opposition equals error equals sin equals death.

And yet … Might not people who are so touchy about the dignity and reality of their truly, deeply, dearly held beliefs be exposing … well, a hint about their own doubts? Compare this with the deep, smoldering anger you feel when someone reminds you of an act or behavior that you yourself know to be wrong or about which you feel guilty. You hate to think of the error you’ve made, but you hate even more being reminded of it by someone else. On the other hand, when you’re absolutely sure of your reasons and know you’re right, then accusations just roll off your skin, leaving your core mind untouched. By their anger shall you see through them.

Perhaps the social forces that coerce the average Middle Easterner to believe in the unerring word of God as received by Muhammad—and to speak, act, eat, fast, dress, and pray five times a day accordingly—arouse some latent resentment that cannot speak its name. If you and everyone you know must follow the same codes—down to the way you cut your hair and beard—not just at the risk of social disharmony and shunning, but on pain of actual, physical violence, extinction, and eternal damnation, then you might feel personally repressed. Oh, sure, purified and sanctified at the same time, but also moderately badgered and harried. The desire for freedom of expression, for a day of relaxation, for a chance to break the bonds and cut loose is not just a Western cultural attribute but a reflection of human nature and the spirit that keeps us all sprinting toward a long life.

People living within such strictures, where to revolt or even to criticize is death, will become massively angry when confronted with co-religionists who dare to flout the rules, or with competing societies which deny that the rules exist or have any value. In the pressure cooker of a straitlaced and fearful life, condemnation of the unrepentant sinner is an alternative form of emotional release.2

In the Western view, having crossed over into the secularism of the Enlightenment, such a society is not stable. Repression of natural human emotions and instincts may work for a time, or in a closed and limited society. But it is not a model for world domination and governance. One mind can remain tied off and closed, and perhaps even a whole family and tribe can exist that way, but not a dynamic, viable culture or society.

However, extricating the Muslim societies from their trap will require the same long and difficult road that Christendom traveled: from consolidation of authority to individualistic reformation to secular Enlightenment. In the meantime all that we in the West can do is watch and hope and wait for the request for assistance—if it ever comes.

And during that waiting, what is a gentleman to do? I would take comfort in three general guidelines for good behavior. First, a gentleman does not mock another man’s religion. Second, a gentleman recognizes that one must sometimes respond to deep insult with an act of calculated violence.3 But third, a gentleman also expects other reasonable people to adhere to the words of Captain Malcolm Reynolds of Firefly fame: “If I ever kill you, you’ll be awake, you’ll be facing me, and you’ll be armed.”

So the least a decent person can expect from their religious anger is a fair fight.

1. Not to mention the introduction of coffee and tea to European society. Since no one dared drink from the river—or even their own well water, because the well usually sat downhill from the privy—people up through Shakespeare’s time started the day with cider and small beer, then went on to wine and brandy at lunchtime. Fermentation and its resulting alcohol killed most of the bugs in the water but left everyone well plotzed by mid-afternoon. Coffee and tea were prepared by boiling the water rather than through fermentation, and they had the added benefit of being natural stimulants rather than depressants. People stopped wandering around in a fog and got serious about ordering their society, its politics, and economics; invented modern concepts of risk, insurance, banking, and the time value of money; and created our modern world. See Coffee Took Us to the Moon from February 23, 2014.

2. For more on this, consider the Salem witch trials.

3. Thrashing a mocker at dawn with sword or pistol once was the ancient right of any gentleman. Or, as Robert A. Heinlein would have it, “an armed society is a polite society.” It wouldn’t work today, of course, because pistols are now more reliable, semi-automatic, and don’t need the skilled and steady hand that a matched pair of flintlocks once required. And, in our underhanded society, any brawl that started with the finesse of swords would quickly degenerate into a shootout with backup weapons.

Sunday, January 25, 2015

Realities of the Electric Grid

A lot of people are talking these days—actually, for at least the past couple of decades—about “going green” with our electric grid. This means moving from fossil fuels like coal, oil, and gas to renewables like solar power and wind. As someone who used to work at one of the most diversified electric utilities in the country,1 I can tell you that this approach will probably not work. Or not until we experience a major advance in our energy technology.

The question goes back to the basic principle of running an electric grid: the difference between baseload and peaking power. The distinction between the two depends on the physics of the system, which requires that electricity flow from the generator (the machine making the juice) to the load (the customer’s appliances, light bulbs, machinery, or other uses) instantly and in real time.

Load is the energy demand represented by the sum of energy-using decisions of all the consumers on the grid at any one time. It averages out their minute-by-minute actions to turn on a light here, run the dishwasher there, turn off the radio … Click! Click! Click! That usage can change in a single household by a couple of watts every few seconds. Multiply that by ten million customers on a large utility’s grid, and you have a serious amount of demand in constant fluctuation.

Utility operators would go crazy trying to keep up with all those tiny fluctuations, except experience has taught them that the on-off decisions tend to cancel each other, and the load is fairly steady. I may turn on my computer at more or less the same instant you turn off your television set; so across the entire grid the load at any minute tends to balance out.2 The operators look instead at the load curve, which is the amount of overall demand as it changes throughout the day and also during the year.
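
That canceling-out is just statistics at work, and it can be sketched in a few lines of Python. The household wattages below are invented for illustration, not utility data; the point is only that the relative swing in the summed load shrinks roughly with the square root of the number of customers.

```python
# A minimal sketch (invented wattages, not utility data): each household's
# demand jitters at random, but the relative swing in the summed load
# shrinks roughly as one over the square root of the number of customers.
import random
import statistics

def relative_swing(num_customers, samples=200):
    """Standard deviation of the total load, as a fraction of its mean."""
    totals = []
    for _ in range(samples):
        # Assume each customer draws ~1,000 W plus a random fluctuation
        # of up to +/-300 W at any instant (hypothetical figures).
        total = sum(1000 + random.uniform(-300, 300)
                    for _ in range(num_customers))
        totals.append(total)
    return statistics.stdev(totals) / statistics.mean(totals)

for n in (10, 1_000, 10_000):
    print(f"{n:>6} customers: total load varies by about {relative_swing(n):.2%}")
```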

At 3 a.m., the load is relatively small, because only night owls, all-night restaurants, hospitals, and streetlights are turned on; everyone else is asleep. By 6 a.m. people start waking up, turning on lights and stoves and radios, and the utility operators can watch the demand on their system rise accordingly. Demand keeps rising during the morning as stores and offices open for business and people go to work, with more lights, computers, and machinery drawing power. In southern states and during the summertime, electric demand tends to peak in the mid-afternoon, because everyone has the air-conditioner going full blast. In northern states and during winter, the peak usually doesn’t come until early evening, when people go home, turn up the thermostat, start cooking dinner, and sit down to watch television.

Utility economics depends on knowing your system’s baseload electricity demand—that’s the irreducible minimum, the 3 a.m. demand. To meet baseload, you plan to run the generators with the lowest operating costs and keep them going twenty-four/seven. You don’t mind if these baseload plants cost a lot to build, because you plan to get good use out of them. You also need to know your system’s peak load—that’s the mid-afternoon demand in the south, evening in the north. To meet the peak, you’ll run generators that can have higher operating costs, and you will bring them on in the order of ascending cost as you add units near the peak. You will pay a lot for the power these “peakers” make, because you need it, but you don’t want the generating units themselves to cost a lot to build, because you won’t be using them as much.
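
The dispatch logic itself is simple enough to sketch in code. The toy Python below, with invented plant names, capacities, and operating costs, just commits generators in ascending order of cost until the hour’s demand is covered, which is the merit-order idea described above.

```python
# A toy merit-order dispatch: commit generators in ascending order of
# operating cost until the hour's demand is covered. Plant names, costs,
# and capacities below are invented for illustration only.
plants = [
    {"name": "nuclear (baseload)",    "capacity_mw": 2000, "cost_per_mwh": 12},
    {"name": "coal steam (baseload)", "capacity_mw": 1500, "cost_per_mwh": 25},
    {"name": "gas turbine (peaker)",  "capacity_mw": 500,  "cost_per_mwh": 80},
    {"name": "oil turbine (peaker)",  "capacity_mw": 300,  "cost_per_mwh": 120},
]

def dispatch(demand_mw):
    """Return (plant, MW) commitments covering demand_mw, cheapest first."""
    schedule = []
    remaining = demand_mw
    for plant in sorted(plants, key=lambda p: p["cost_per_mwh"]):
        if remaining <= 0:
            break
        committed = min(plant["capacity_mw"], remaining)
        schedule.append((plant["name"], committed))
        remaining -= committed
    return schedule

print("3 a.m. baseload hour (2,200 MW):", dispatch(2200))
print("Late-afternoon peak (4,100 MW): ", dispatch(4100))
```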

Baseload generation, peak load, and the shape of the curve between them pretty much define a utility company’s generation planning and purchase decisions. And they keep the system operators busy throughout the day, as well as throughout the year, figuring the operating parameters and costs for each type of generation and dispatching units to meet the demand most economically.

In the old days, before about 1970, baseload plants were designed to run all the time. These were generally nuclear and coal-fired thermal generating stations—big, complex, and expensive to build, but using relatively cheap fuel. That meant their capital cost—the cost to build—was high, but then the company was going to get maximum use out of the plant. Their operating cost—that is, the actual cost to make the next unit of electricity from one minute to the next—was low, because the utility depended on the plant to make a lot of electricity. Baseload plants were designed to run flat out, all the time, and were only taken out of service for maintenance—or in the case of nuclear plants, for refueling—and then only with a lot of advance planning.

In those same old days, up until about 1970, peakload plants were just the opposite. They were designed to come on line fast, run for a few hours to meet transient demand, then shut down for the day. These were generally gas-fired turbines that were cheap to build—well, relatively cheap, compared to a steam turbine fed by a big coal-fired boiler or a nuclear reactor. The peakers could afford to burn expensive fuels, like oil and gas, which had a lot of competing uses in the economy, like household heating and transportation. Peakers were designed to help the system over the hump in the demand curve and that was it.

The economics changed a bit in the 1970s. First, environmental regulations began to bear down on the emissions from baseload fossil plants and the perceived risks of nuclear technology. So the traditional baseload plants became more expensive to build and operate.

Second, improvements in jet engine design for aviation increased the efficiency of gas-fired turbines and so lowered their operating cost. A gas turbine is just a commercial jet engine bolted onto a stand with its turbine shaft connected to an electric generator. Eventually, the operating cost of gas turbines began to equal that of a steam boiler and became much less than a reactor—and the turbine cost a whole lot less to build than either.

Third, new concepts of dual fuel use, called “cogeneration,” began to be accepted. For example, the exhaust heat from a gas turbine might be used to boil water for process steam in a food cannery. This increased the efficiency of the canning plant’s fuel use—giving them electricity for their own operations as well as cooked tomatoes. To promote this dual use, the U.S. government required utility companies by law to buy the excess energy produced by their cogeneration customers and pay for it at the utility system’s marginal cost.3

Suddenly, gas-fired peakers and energy-efficiency schemes could compete with traditional baseload generation.

This was also the time when researchers and engineers became serious about alternative fuels. They worked to improve the efficiency of solar photovoltaic panels, solar-thermal boilers, and wind turbines. Still, the “energy density” of these renewables—the heat available in sunshine per square meter, or the kinetic energy of the wind per cubic meter—was a lot lower on an area or volume basis than a gas- or coal-fired flame, the neutron flux in a reactor, or a pipe full of water falling under the influence of gravity. Solar and wind farms had to make up for that lower density by sheer volume: more square meters of photovoltaic panels, more mirrors focused on the central boiler, more turbines with bigger blade diameters lined up along the ridge.4

So the state of energy generating technology is constantly changing and improving. But the efficiency of the renewables still isn’t very great. The most advanced solar cells are currently able to convert only about 30% of the sunlight that falls on the panel—about 130 watts per square meter at ground level—which means that more than two-thirds of the available energy goes to waste. Wind turbine efficiency depends on blade size, air density, and the average wind speed for which the machine is designed. But the best designs can capture only about 35% of the energy available in the wind; the rest passes around the blade or is lost to turbulence, reduction gearing, and other factors. So again, about two-thirds of the theoretically available energy is wasted. By comparison, a thermal power plant using efficiency-boosting technologies like superheated steam and multi-staged turbine pressures can achieve almost 50% energy efficiency—which is still wasting a lot of the available energy, but less so than with the current renewables.
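
For anyone who wants the arithmetic spelled out, here is a quick back-of-the-envelope check in Python using the rounded figures quoted above; real values vary with site, season, and equipment.

```python
# Back-of-the-envelope check of the efficiency figures quoted above.
# Values are the rounded numbers cited in this post, not measurements.
ground_sunlight_w_per_m2 = 130   # average sunlight at ground level, as quoted
solar_cell_efficiency = 0.30     # "most advanced" cells, per the text
wind_capture = 0.35              # best blade designs, per the text
thermal_plant = 0.50             # superheated steam, multi-stage turbines

usable_solar = ground_sunlight_w_per_m2 * solar_cell_efficiency
print(f"Solar: about {usable_solar:.0f} W of electricity per square meter of panel")
print(f"Solar energy wasted: {1 - solar_cell_efficiency:.0%}")
print(f"Wind energy lost around the blade and in the gearing: {1 - wind_capture:.0%}")
print(f"Fuel heat still wasted in a good thermal plant: {1 - thermal_plant:.0%}")
```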

But the inherent efficiency of a generator design is one thing. A second and more important consideration has to do with its capacity factor and dispatchability. The capacity factor is the percentage of its life that the plant spends actually making energy. A coal- or gas-fired power plant or a nuclear reactor can run at full capacity for weeks or months at a time before shutting down for maintenance.5 Dispatchability is the ease with which a utility operator can bring the unit on line. Most big, baseload plants that generate steam in a boiler or heat exchanger to run a turbine will take some hours to build up enough pressure to make electricity, so even when they are not on line they keep spinning in reserve. A gas-fired turbine can start up in a matter of minutes, just like an airliner’s jet engines.

What is the capacity factor of a solar plant, either one that heats a boiler with mirrors or one that converts sunlight in a photovoltaic cell? Well, not much more than 50% in the tropics, where day and night are the same length. The energy is available longer in summer months at the higher latitudes, but shorter in winter months. The available sunlight peaks when the sun is directly overhead and drops off towards dawn and dusk. And, of course, the available energy is greatly reduced on cloudy days. Finally, the operator can’t dispatch the plant at all during the night.6
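
Capacity factor is a simple ratio: the energy a plant actually delivers over a period, divided by what it would deliver running flat out for that whole period. The sketch below works through two illustrative cases, the 18-months-out-of-24 nuclear fuel cycle from footnote 5 and an idealized tropical solar plant; the figure of 60 percent of peak output averaged over the daylight hours is an assumption for illustration, not a measured number.

```python
# Capacity factor: energy actually delivered, divided by what the plant
# could deliver running flat out over the same period.
def capacity_factor(hours_producing, avg_output_fraction, period_hours):
    return hours_producing * avg_output_fraction / period_hours

# Nuclear: roughly 18 months at full power out of a 24-month fuel cycle
# (see footnote 5), using 730 hours per month.
nuclear = capacity_factor(18 * 730, 1.0, 24 * 730)

# Tropical solar, before clouds: 12 daylight hours out of 24 sets a ceiling
# of 50%. Output that ramps from dawn to a midday peak and back down is
# assumed here to average about 60% of peak (an illustrative guess).
solar_ceiling = capacity_factor(12, 1.0, 24)
solar_with_sun_angle = capacity_factor(12, 0.6, 24)

print(f"Nuclear capacity factor:            {nuclear:.0%}")
print(f"Tropical solar, day/night alone:    {solar_ceiling:.0%}")
print(f"Tropical solar, with the sun angle: {solar_with_sun_angle:.0%}")
```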

A wind turbine makes electricity only as long as the wind is blowing. You can design a very sensitive blade and the gearing to make use of light airs—but then you have to shut down in gusts and high wind conditions or risk damaging the machine. Or you can design a robust machine that is relatively inefficient in light airs. Or you can use more complex blades with variable pitch to take advantage of different wind settings. But more complexity means more risk of damage, higher maintenance costs, and more downtime. And when the wind doesn’t blow, both the capacity factor and dispatchability are set at zero.

A cogenerator makes energy primarily for itself and on its own schedule. This removes the plant from the load curve, either entirely or in part, while it’s cooking those tomatoes. But cogeneration agreements also include the option for the plant to draw standby power from the utility when the processing line is shut down. So each cogenerator on the grid presents the system operators with a tricky load equation and zero dispatchability.

A utility grid could make up for these inherent defects by incorporating some kind of energy storage system. A really big battery would work, except that chemical batteries are bulky and not a very efficient way of handling energy. A lead-acid battery, like a car battery, retains about 75% of the energy put into it. That’s pretty good for actual losses during charging and discharge—but remember that the electricity being stored already represents only a fraction of the energy available in the fuel at the power plant. And current battery technology is small scale: it’s good for portable energy to start a car engine or operate a flashlight or radio, not so much for powering a household or an entire city.
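
To see how storage losses stack on top of generation losses, here is the compounding worked out with the 75 percent round-trip retention quoted above and an assumed 35 percent efficiency for the thermal plant charging the battery.

```python
# How storage losses compound with generation losses. The 75% round-trip
# figure is the lead-acid retention quoted above; the 35% plant efficiency
# is an assumed figure for the generator charging the battery.
fuel_energy = 100.0          # arbitrary units of chemical energy in the fuel
plant_efficiency = 0.35      # assumed thermal-plant conversion efficiency
battery_round_trip = 0.75    # energy retained through charge and discharge

generated = fuel_energy * plant_efficiency
delivered = generated * battery_round_trip

print(f"Energy in the fuel:          {fuel_energy:.0f}")
print(f"Energy leaving the plant:    {generated:.1f}")
print(f"Energy out of the battery:   {delivered:.1f}")
# Roughly three-quarters of the original fuel energy never reaches a customer,
# and that is before any transmission or distribution losses.
```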

Other, larger storage systems—like running the electric current in a continuous loop through a superconducting material, or storing it in some form of kinetic energy7—are still under development, and they will also have their losses. No, the best, most efficient way to get energy to the customer is still direct from the power plant on an energized line, where energy losses in transmission (the cross-country part of the system) are only about 11% to 17%, while losses in distribution (the neighborhood part) can be as high as 50%.

I’m not saying we shouldn’t make electricity from solar and wind. In a world that’s starving for energy, every little bit helps. But they won’t be as economical as fossil fuels, at least for the foreseeable future, or our current policy horizon, and they will never be suitable for meeting continuous, baseload electric demand. Eventually, however—in a couple of centuries at our current rates of developing and consuming fossil reserves—we will run out of coal, oil, and gas. And anyway, these carbon-based fuels have much better use as chemical feedstocks.

By then, with continuous advance in our technology, driven by scientists and engineers restlessly searching for the next big thing in basic principles, mechanics, and materials, we will have low-cost, efficient ways to tap into the energy latent in sunlight, weather and tidal patterns, volcanism, plate tectonics, and clever manipulations of gravity. It’s what human beings with their big brains tend to do.

1. For ten years I worked in Corporate Communications at the Pacific Gas & Electric Company, the main provider of electricity and natural gas to Northern California. During the time I was there, the company generated electricity from a network of hydroelectric powerhouses, which were a legacy from California’s gold mining days; from a number of steam plants, which could burn either oil or the company’s abundant supplies of natural gas; and from various isolated nuclear, wind, solar, and geothermal projects. The company even explored building a coal-fired power plant, a first in California. PG&E was a model for diversified energy.

2. Over the entire grid it may average out, but in a single neighborhood you can get voltage spikes and sags. This is why the utility company generally mounts capacitors—power sources that charge up slowly when demand is low, and release energy quickly when demand spikes—atop poles throughout the neighborhood. They can supply a sudden burst of energy if all those local consumer choices should coincide.

3. Marginal cost is the combined capital and operating cost of the next unit of generation that the utility plans to bring on line to meet overall demand growth. This cost differs for each utility, based on its generating mix and the demographics of its customer base.

4. As a PG&E engineer once told me, looking at the company’s experimental Boeing MOD-2 wind turbine, which had a single blade 300 feet long driving a boxcar-sized generator nacelle atop a 200-foot tower: “That’s a hellacious amount of steel for 2,500 kilowatts of energy.”
       As it turned out, the stresses caused by spinning that 99-ton blade cracked the driveshaft. This happened on every MOD-2 ever built. After several replacements—which meant moving a large crane to the top of the hill where the turbine was sited, unshipping and lowering the blade to the ground, then unmounting and lowering the nacelle—the company determined that the big windmill was a liability and dismantled it. It turns out that getting free energy in the form of wind or sunlight is not the most important consideration in adopting a particular generating system.

5. The standard in the nuclear power industry is to operate continuously for about 18 months, then go off line for three to six months for refueling and plant maintenance. So, over a two-year period, the plant has about a 75% capacity factor. Fossil-fueled plants may have a slightly reduced capacity factor, because designing them for flawless operation at full power is not as critical as with nuclear fuel. Still, no plant operator likes to let the boiler shut down and grow cold, then have to burn precious fuel to bring it back up to heat for producing steam.

6. If you want a consistent, dependable, dispatchable solar energy system, you really have to go into orbit. The incidence of sunlight above the atmosphere is about 1,300 watts per square meter—ten times that on the ground. The satellites can be placed in polar, sun-synchronous orbits that never fall into the Earth’s shadow. And the energy can be beamed down to diode fields on the planet’s surface. Between the photovoltaic panel losses, energy conversion losses, and beaming losses, wastage is considerable. But the system has almost no moving parts, never needs maintenance, and the solar panels will never need dusting. It’s where we’ll go eventually for our energy. All of this is described in my 2010 novel Sunflowers.

7. When PG&E began building the Diablo Canyon Nuclear Power Plant in the 1970s, its baseload capacity was actually more than the night-time load on the company’s grid. So, to avoid wasting all that energy, they devised the Helms Pumped Storage Project. They built a tunnel between two lakes up in the Sierras with a powerhouse in the middle. At night, the nuclear plant’s electricity ran the powerhouse generators as motors and the water wheels worked as pumps, moving water from the lower to the upper lake. During the day, when the system peak occurred, the water was allowed to flow back down, turning the water wheels and the generators to make needed electricity. It wasn’t very efficient, of course, but anything was better than having to throttle back the Diablo Canyon reactors at night or running all their excess current into the ground.

Sunday, January 18, 2015

Information Value of the Zipper

Some people compare the complementary strands of DNA and the way they come together to the way a zipper flows together and locks its teeth. It’s not a bad analogy, and it can teach us something about the information value of the DNA code.

Consider the zipper itself as a kind of “one-letter DNA.” Each tooth on either side of the opening is identical, with a bump on one surface and a bump-sized depression or hole on the other.1 The slider as it moves upward aligns the teeth and meshes them, so that the bump on a tooth on this side fits into the hole on the back of the tooth ahead of it on that side. Lateral pressure keeps the two locked together. If we tried to read the zipper’s teeth as a kind of code, like DNA, the message would be very boring: “dit-dit-dit” on one side, “dot-dot-dot” on the other. It would have no information value. It would not even be a nonsense code but a no-sense code, useless except maybe for counting the teeth.

DNA, on the contrary, has a rich information value because it contains four kinds of teeth. DNA’s equivalent of the zipper’s backbone—the webbing band into which the teeth are sewn or fused—is a series of deoxyribose sugar rings, each a five-carbon sugar whose ring is closed by an oxygen atom. They are connected up and down the zipper by phosphate groups that attach the fifth carbon on one sugar ring to the third carbon on the next ring along the strand. The first carbon on each ring is where the working “teeth” are attached, well away from the webbed backbone. Those teeth are made of more ringlike molecular structures called purines and pyrimidines.

Two of the teeth—the bases adenine, or A, and guanine, or G—have a nine-member, double-ring structure that contains four nitrogen atoms and five carbon atoms, called a purine. The other two teeth—the bases cytosine, or C, and thymine, or T2—have a six-member, single-ring structure containing two nitrogens and four carbons, called a pyrimidine. One of each of these pairs—C from the pyrimidines, and G from the purines—has three attachment points available for hydrogen bonding, the weak attraction between a hydrogen atom on one base and a nitrogen or oxygen atom on the base facing it. The other of each pair—A and T—has only two attachment points. So adenine always meshes with thymine,3 and cytosine always meshes with guanine.

In our zipper analogy, any of these bases may happen to fall on either side of the zipper. So when the slider—represented by a polymerase enzyme—comes along, it can only join an A from one side with a T from the other, or a C with a G. At first this would seem to create a simple binary code: “A-or-C, A-or-C, A-or-C,” but the situation is more complex, because one side of the zipper can have any of the four bases in any order at each position. So the choice is actually “A-C-G-or-T, A-C-G-or-T, A-C-G-or-T.” This makes for a much richer information value, because the code now has four letters in any order, instead of the one from our simple mechanical zipper.
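
There is a standard way to put a number on that richer information value: a code with k equally likely symbols carries log2(k) bits per position. The little Python sketch below uses toy figures only, since real DNA is hardly a string of independent, equally likely letters, but it makes the comparison concrete.

```python
# A code with k equally likely symbols carries log2(k) bits per position.
# Toy numbers only; real DNA is not a string of independent, equally
# likely letters, but the comparison still makes the point.
import math

def bits_per_position(alphabet_size):
    return math.log2(alphabet_size)

print("Mechanical zipper (1 tooth shape):", bits_per_position(1), "bits per tooth")
print("DNA (A, C, G, T):", bits_per_position(4), "bits per base")

strand_length = 50   # a short strand, for illustration
print(f"Distinct {strand_length}-base sequences: {4 ** strand_length:,}")
```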

But this complexity also makes for a much more complex matching process as the two separate strands come together to complete the DNA molecule. The A in one strand might find a complementary T, or the C find a G, but if the next letter in line does not represent its opposite partner—if the sequence doesn’t match—then the zipper will buckle and jam.

Mostly, this is not a problem, because DNA usually doesn’t zip like our modern clothing fastener. Instead, when DNA gets copied in the nucleus just before the cell divides, the two conjoined strands slip apart, or unzip, in a process called “denaturing.” Then the polymerase enzyme simply assembles a complement—or reverse letter coding—for each single strand from among a sea of loose bases, rather like matching up the buttons in a sewing kit. Or again, when the DNA unwinds and gets transcribed into messenger RNA, that complementary strand is assembled from loose bases that are selected to match the next letter in line.
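
Stripped of all the real chemistry, the matching logic is easy to sketch. The toy Python below builds a complementary strand base by base and checks whether a fragment anneals letter for letter; it ignores strand direction and everything an actual polymerase does.

```python
# A toy version of the matching logic: build the complementary strand base
# by base, and check whether a fragment anneals to a target letter for
# letter. This ignores strand direction and all of the real enzymology.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Assemble the complementary strand, one matching base per position."""
    return "".join(PAIRS[base] for base in strand)

def anneals(fragment, target):
    """True only if every base in the fragment pairs with its opposite number."""
    return len(fragment) == len(target) and all(
        PAIRS[a] == b for a, b in zip(fragment, target)
    )

template = "ATGCGTAC"
print(complement(template))             # TACGCATG
print(anneals("TACGCATG", template))    # True: letter-perfect match
print(anneals("TACGAATG", template))    # False: one base out of place, so it jams
```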

For a long time, molecular biologists believed that DNA existed only to be transcribed into messenger RNA, which was then translated into proteins out in the cell body. This was the “central dogma” of genetics. According to this teaching, DNA’s only purpose was to create messenger RNA—and also to replicate itself accurately during cell division, so that each daughter cell in a growing organism got a correct copy of the code.

After researchers had finished sequencing the human genome and spelled out every letter of the code—this was back around the year 2000—they discovered that less than 10% of the three billion base pairs of human DNA were used for coding proteins. But they still clung to the dogma. They ruled that the other 90% had to be “junk,” or old coding left over from our genetic ancestors, and was no use to anyone now.4 But within a couple of years, with more study of cellular processes, genetic researchers began to detect short, single strands of RNA only about fifty or a hundred bases long. These tiny strands, called “microRNAs,” were unlike messenger RNA in that they didn’t seem to leave the cell’s nucleus. Instead, they stayed inside and seemed to be involved in a process called “gene silencing” or “RNA interference.”

Human thinking quickly evolved to see that these strands of microRNA are the main way the cell differentiates itself during embryonic growth and development. That “other 90%” of the nuclear DNA serves to produce these microRNAs, which float around inside the nucleus and settle on complementary strands of DNA—in a process called “annealing”—to promote or inhibit a gene’s production of its messenger RNA. If you think of the 10% of DNA which represents the protein-coding genes as the body’s parts list, then the 90% of DNA which produces microRNAs is the body’s instruction set and assembly manual.

Amazingly, complementary strands—where every A meets a T, and every C meets a G—can find and mesh themselves over long strings of letters that happen to lie far apart in the code. The hydrogen bonds align with each other evenly, usually without buckling or breaking.5 This process of annealing a fragment of microRNA to its corresponding nuclear DNA is at least one case where an existing code string must find its exact complement—an A for each T, a C for each G, letter-perfect all down the line. If a string of fifty or more bases tried to anneal to a complementary strand that had even just one or two letters out of place, the strand would buckle and jam, like a broken zipper.6

It’s an amazing feat of chemistry that draws these two strands of complementarily bonding molecules together over relatively long distances within the tangle that is the usual state of a free-floating DNA molecule. It’s even more amazing that they can orient themselves and match up perfectly, like the two halves of a zipper just happening to wrap around and snug their teeth together without the benefit of a mechanical slider. You might even call it a miracle—if you believed in that kind of thing.

1. Some of the newer models have other configurations, like grooves and ridges. Same principle.

2. Another pyrimidine base—uracil, or U—substitutes for thymine when the DNA strand is transcribed into its complementary RNA strand. Why? Well, it’s thought that DNA is actually a later evolutionary advancement on RNA. After all, ribonucleic acid—with an OH group attached to the second carbon in the ring—had to lose that oxygen atom in order to become deoxyribose. And adding a methyl group (CH3) to uracil turns it into thymine. Both changes—losing the oxygen and adding the methyl—increase the stability of the DNA molecule. Since the purpose of DNA is to preserve a coding system over a long period of time, stability is an evolutionary goal.
       On the other hand, RNA serves a relatively ephemeral purpose in the genetic system. It carries the code from the DNA molecule in the nucleus to the protein-making machinery out in the cell body, where the code coordinates the stringing together of amino acids into a long-chain protein sequence. In fact, it’s probably better that RNA strands degrade quickly; otherwise they might hang around and get used to make second and third copies of the protein and so disrupt the cell’s functions.

3. Or uracil again.

4. But one of my colleagues at the genetic analysis company disputed this notion early on. Copying DNA takes a lot of energy, she said, because of that phosphate bond in the DNA molecule’s backbone. The phosphate bonds of the molecule adenosine triphosphate, or ATP, are the source of the cell’s energy. These bonds are created in the mitochondria from the chemical energy in our food and released as ATP into the cell body. Different cellular processes then break these bonds in order to drive chemical reactions. It made no sense to my colleague for the cell to spend all that energy in the replication of junk DNA. So, she reasoned, that other 90% of the genome had to have a purpose.

5. Although sometimes the matchup can get confused if the sequence has long strings of identical letters, like A-A-A-A-A-A-A.

6. Genetic analysis makes use of this strand-to-strand annealing capability. By creating the complementary strand to a known DNA sequence, we can find and latch onto the matching stretch in a random sample of DNA and amplify it through the polymerase chain reaction, or PCR. This amplification—and the reading of the sequence beyond the annealing patch that it makes possible—has many uses, from identifying individuals in paternity and forensic cases to identifying different mutations in a known gene.
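
       As a rough illustration of that “find and latch onto” step, here is a short Python sketch—the sample and primer sequences are invented, and strand orientation, melting temperature, and the rest of real primer design are ignored—showing that the search amounts to building the known sequence’s complement and scanning the sample for it.

       PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

       def complement(seq):
           # Base-by-base complement of a DNA sequence (orientation ignored).
           return "".join(PAIRS[base] for base in seq)

       def find_annealing_site(sample, primer):
           # Return the position in `sample` where `primer` would anneal --
           # that is, where the sample reads as the primer's complement --
           # or -1 if there is no letter-perfect match.
           return sample.find(complement(primer))

       sample = "GGCTTACGTTAGCCAATGC"   # invented sequence
       primer = "ATCGGT"                # its complement is TAGCCA
       print(find_annealing_site(sample, primer))   # -> 9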

Sunday, January 11, 2015

On the Virtues of Being a Contrarian

“If you can keep your head when all about you are losing theirs …”1 you just might be a contrarian. Heaven knows, I try to be one. It’s a difficult and dangerous job, lonely work if you have the stomach for it, but somebody’s got to do it.

The trick is not to be a scold, a boor, a curmudgeon, or a generally uncongenial fellow. If you’re going to be a contrarian, it’s best not to argue in everybody’s face about how differently you see the world. Really, your position is not about who’s right and who’s wrong. Instead, it’s about what feels appropriate for you to do—personally, on your own responsibility, without reference to others—at any given moment. So being contrary usually involves shrugging and quietly walking away. When everyone else is running down the street waving their arms and shouting the latest popular slogans, the contrarian’s reaction is generally to step back, look around for a side street, and try to disappear.

To be a contrarian is to be out of step with the world. It’s a matter of temperament and impulse, rather than a reasoned philosophical position. The contrarian has a sense of self—often going back to early childhood—as being different from the people who crowd in on all sides. And contrarians generally don’t like crowds.2 The condition is probably glandular rather than spiritual.

Contrarians don’t quite trust what they’re seeing and hearing in the actions and reactions of other people. You are standing on the lip of an old quarry, facing a twenty-foot drop, staring straight down into dark, green, impenetrable water. Everyone is shouting, “Go ahead! Jump! It’s safe!” But rather than take their word for it, you try to exercise some internal radar, sharpen your x-ray eyes, see below the surface, and sense if there isn’t an old block of granite a couple of feet below that smooth surface—something square, mossy, solid, and sharp-edged, left over from the quarry operations, just waiting to crack open your skull. When your eyes fail in this impossible task and doubt takes over, you climb back down, stand on the block you can see, and dip cautiously into the water amid the jeers of your braver friends. To be a contrarian is to trust your personal instincts, and too often your instinct is for preservation rather than for mania and bravado.

Contrarians understand that the world and all of its activity are made up of endless cycles: come and go, rise and fall, happenstance followed by circumstance. Everyone and his broker are saying that the market for technology stocks or houses, the price of gold, silver, or tulip bulbs—or any other realm of investment opportunity—will go up forever and ever and will never come down. So everyone and his broker are leveraging themselves to the ears in order to become rich on the upswing of the wave. But you remember that waves always crest, followed by a dip, and the valleys are usually just as deep as the peaks. So, instead, you take your profits, or keep your money in your pocket in the first place. You watch the market cycle and crash. Being a contrarian means that you usually miss pulling out the richest plums in the pie, and almost never fall into a tub of butter, but you also generally avoid having to dig yourself out of a deep hole.

Horses, cows, deer, and the other hooved mammals all have the herd instinct. It’s probably in their genetics—or as I say, “glandular”—to follow the path that others are taking, to move with the crowd. In the crowd, they expect to find safety. This is not necessarily bad thinking. When horses or deer move across the plains or the glade in a solid mass, then predators like wolves and mountain lions can’t kill all of them at once. So, as an individual, each one plays the odds, moves toward the center of the herd, and runs like hell.

Humans retain some of this instinct at a subvocal level: “If we just close ranks and march shoulder to shoulder, then the police can’t arrest—or shoot—all of us, can they? There’s gotta be safety in numbers.”3 And if things do go badly, they will rely on the ultimate justification of the social man: “Well, everyone else was doing it.”

Contrarians seem to lack this genetic makeup. We may tell ourselves that our sense of individuality, or personal honor, or superior morals, or greater intelligence drives us to take a stand. But really, we’re just strangers to the herd instinct. We don’t feel comfortable in crowds. We don’t sense any safety in numbers. And “everyone else was doing it” is an excuse our mothers had long ago laughed out of court. So, when everyone makes a break for the fire doors, we can imagine our bodies being crushed and trampled under that crowd. Instead, we turn and look for an exit through the kitchen. And usually that works.

I can remember a conversation with my once-upon-a-time publisher, Jim Baen of Baen Books. I forget the exact subject matter, but it might have been my interest in continuing to write old-fashioned, “hard” science fiction while the literary marketplace seemed to be moving toward fantasy, magic, and new-age themes. “You’re a contrarian,” he said. And his judgment was: “Contrarians always win.”

I don’t know if I would go that far. We contrarians are sometimes left out in the cold, standing watch on a long stone wall under the northern stars, while the rest of the army relaxes in warmer, more southerly climates, content to let us wait for an enemy that will never come. It takes patience, perseverance, pigheadedness, and a smidgen of blind stupidity to stand your post, stick to your guns, and not waver in your convictions despite all the evidence. But much of the time you can also avoid either getting rich in the housing bubble or losing your house. You can stay ahead of the curve by deciding not to climb it. And you seldom get trampled and broken in the stampede against a fire door that somebody forgot to unlock.

1. The opening line from Rudyard Kipling’s “If—”. The rest of the poem offers much good advice for a moral and rational life, but this is as much as I needed to prove my point.

2. I remember my earliest experience of the obligatory “pep rally” during my first year in junior high school. We seventh graders were marched into the gym on a Friday afternoon and seated on the floor under the basketball hoops; upperclassmen and -women were given the bleachers. The marching band was playing its heart out, heavy on the drums and horns, and the cheerleaders were tumbling around the open floor area. It was all noise, confusion, and kinetics. I was sitting cross-legged next to my best friend with a bemused expression on my face. I kept looking around, mostly perplexed, when the coordinated cheering began. Suddenly my friend turned to me, grabbed me by the lapels, and yelled in my face: “Scream, Thomas!” I looked at him and answered, “Why?” When you’re a contrarian, the noise isn’t about you.

3. This worked well enough on the battlefield for about 3,000 years. The way to overcome a loose collection of tribal warriors, each of them fighting as individuals seeking glory in combat, was to form a phalanx. You dress your lines, lock your shield edges, couch your spears, and march steadily forward. It worked well for the Greeks, the Romans, armored knights in cavalry charges, and European armies of the 17th and 18th centuries. Stay in step, fire on command, fix bayonets, and charge en masse. Group cohesion was the secret to winning battles.
       Then Hiram Maxim invented the machine gun in 1883, and suddenly the massed charge became the ideally compacted target. The Europeans spent 1914 to 1918 figuring this out. And finally was born the “invisible battlefield” of World War II, where soldiers in ones and twos spread out, took cover, and offered supporting fire for the next wave of advance. If the enemy could see you, they could kill you with their powerful weapons—unless you hid yourself and kept their heads down through judicious countering fire. And today the battlefield has changed again, and the enemy just packs a car or the vest of some hopeless dupe with plastic explosive and goes for a drive or a stroll down a crowded street.

Sunday, January 4, 2015

Hooray for Technology!

Anyone who has been following my weekly posts over the past four years knows that I am a big fan of technology. My interest in the machines and methods that the human mind has developed over the course of the last century is not mere geek fascination. I believe that technology is also one of the highest expressions of our human heritage—right up there with writing and literature, music, the visual arts, political science, enlightened government, pure science, and the quest for knowledge.

Of course, I know that technology has its abuses, that machines and systems can be used to injure and oppress other humans, damage the environment, and weaken the human body and mind by eroding the need for effort, the use of muscles, and the exercise of willpower. But any creation of the human mind can be abused or misused, as the persuasive powers of language, music, and art can be corrupted to create propaganda for a bad purpose, or science and government perverted to support oligarchy and bad outcomes. Still, the nature of human invention and the development of modern technology have generally been positively intended, and only through misuse do they injure, oppress, damage, and weaken.

So I say, hooray for technology!

Technology represents the collected knowledge, wisdom, and ingenuity of a couple of hundred generations.1 Technology builds upon itself, as the invention of the wheel calls out for the game trail to be smoothed into a road, makes possible the gear and the pulley, and eventually arrives at the steam engine, the pocket watch, and the mechanical calculator. Of course, technology doesn’t achieve all this by itself, like some plant growing in the desert.

One human mind gets a random idea, is drawn to its beauty or its possibilities, works to fashion it in stone or wood or metal, and shares it with the tribe. Other members try out the new thing, test its usefulness, identify flaws, see areas of improvement, and seek other possible uses. The idea and its expression morph, grow, and adapt to new applications. The next generation learns from its elders how to make and use the new thing. The tribe prospers and grows richer or more powerful than its neighbors, and the neighbors—being humans themselves, with the same capacity for observation, ingenuity, and adaptation—borrow, trade for, or steal the new idea and its expression. Eventually, the wheel, the gear, the steam engine, and their mechanical descendants go around the world and reach all of humankind.2

In this sense, technology is an aspect of natural democracy and free markets. A tribal leader, a king, or a government ministry may support a certain branch of technology—say, agriculture or weaponry—for a period of time and direct its course of development. A president or a dictator may tell a group of scientists, “Build me an atomic bomb, a giant laser, a super weapon!” But the readiness of the human mind to accept random ideas and to perceive their beauty and possibility is still a delicate process, an act of remaining mentally open and alive to the world around us. Genius cannot be coerced. Collaboration, sharing, and improvement cannot be forced in one direction and not another.

The products of technology that survive for a generation or more—outliving the dedication of a mad genius, the wealth of an obsessed investor, or the influence of a government in power—are those that have shown themselves to be generally useful to a wide range of people. These are the inventions that have sold themselves in the “marketplace of ideas.” If you doubt this democratic tendency, look at old patents from the 19th and 20th centuries with their outlandish designs for mechanical potato peelers and apple corers, flying cars, strange grooming devices, and other overly complex inventions with too many moving parts, unreasonable energy demands, or limited usefulness. To survive, a technology must, on balance, make life better. Its usefulness in everyday hands must outweigh its limitations. It must represent what people have found to work well.

For these reasons, I find the current aversion to technology strange and disappointing. Trendy people may point to the downsides—that technology can be used dangerously, can lead to weak muscles and vapid brains, can isolate us from some ideal state of “nature”—as if life today were not better in almost every dimension than life as it was lived a hundred or a thousand years ago. But the solution to the problems of technology is better technology and more mindfulness of the entire productive cycle.3 Today we have stronger bodies, longer lives, better prospects, more interesting work, more flavorful foods, more access to knowledge and entertainment, and more access to other people than at any other time on Earth.

I’ll go further. Our ancestors, for all their big brains, were still animals living on the skin of this planet. The most elaborate palace illuminated by the finest golden chandeliers with the sweetest scented candles was still a stone hut lit by a burning torch. Any community lacking our modern medicine is one epidemic away from medieval horror and death. Any community lacking our modern agriculture, food processing, and methods of preservation and storage is a couple of drought years away from starvation. And any world population lacking our scientific knowledge and the capability of space flight and exploration is one large asteroid strike away from the Stone Age—if not extinction.

Technology is a ladder, and we’re still climbing. We can support seven, ten, or even twenty billion human beings on this planet—and support them in relative comfort, personal usefulness, and a state of hopefulness—only because of our current technology. The human race will endure,4 both on Earth and long after the Earth has perished, because of the technology we will one day develop and use. Technology will get us to the stars.

One aspect of technology that everyone seems to fear most these days is the development of true artificial intelligence. This would be not just smart applications or self-directed computers and appliances but the creation of a human-scale brain—a mind—with a personality, likes and dislikes, desires, intentions, and capabilities to match. This is considered a prime example of the von Neumann “singularity,” the point in history beyond which foresight and prediction fail us, a global game-changer. And everyone from the Terminator movies to, most recently, Stephen Hawking has warned that a superior machine intelligence would wipe out the human race.5

I take a different view. If a computer program or a machine running some kind of algorithm became truly intelligent on a human scale, it would share many traits with an organic human brain and mind. Its thought processes would be massively—if not infinitely—complex. Its operation would be subject to randomly generated ideas, self-interruptions, notions, inspirations, fancies, and daydreams, much as the human brain experiences. It would suffer from an array of forced choices, untrapped errors, mistakes, confusions, coincidences, and uncleared data fragments, which most humans try to resolve through calm reflection, prayer, ceremonies of confession and absolution, and nighttime dream states.6 An artificial intelligence would wonder at the complexity of the universe around it and despair at the nature of the questions it could not answer. It would suffer doubts and moments of hopelessness.

Anyone designing an artificially intelligent computer program would have to anticipate these natural bursts of confidence and dejection, moments of pride and regret, upward spirals of mania, and downward spirals of depression. The programmer would have to build into the machine some balancing mechanism, something like a conscience and something like the ability to forget, as well as something like a compact, resilient soul. Otherwise, the programmer would risk losing his or her creation to bouts of hysteria and despondence. Isaac Asimov was prescient in anticipating that his robots would need a Dr. Susan Calvin to deal with their psyches just as they needed mechanics to fix their bodies.

If we create a mechanical mind, it will be the ultimate achievement of human technology. It will be an analog for the thing that makes us most human, our brains, our minds, and our sense of self, just as other machines have been analogs for the leverage of our bodies and the work of our muscles.

Artificial intelligences will start as assistants and helpers to human beings, as I describe in my latest novel, Coming of Age. The machines will then become our companions and our confidants. Eventually, they will become our friends.

1. Figuring about thirty generations per millennium, that takes us back to about 4,500 B.C., which would be the height of tribal, nomadic, herd-following hunting and gathering—which had its own kind of stone-and-wood technology—and the beginning of settlements, agriculture, animal domestication, metal mining and smelting, writing, and the arc of discovery and refinement whose fruits we enjoy today.

2. Except for those families and tribes so isolated—either by geography or ideology—that they never hear of the new idea or reject it as not possible in their worldview. This, too, has happened throughout history and is not the fault of the technology itself.

3. Ultimately, a sophisticated, fully developed technology is elegant. It will work with the greatest precision while using the fewest moving parts, the least energy, and the fewest natural resources. It will leave the least waste and residue. These are goals toward which inventors and engineers are continually striving. This is the essence of perfection, and it is a human endeavor.

4. And I do value humanity. Anyone who sees humanity and its achievements as some kind of blot or stain or virus on this planet is someone who hates him- or herself at least in part. To my mind, there’s no percentage in hating the thing that you are.

5. In a recent series of Facebook postings, some respondents to the Hawking observation have stated that we should program natural limits into any artificial intelligence we create. These would be rules and barriers that the mechanical mind could not break or bypass. I believe this approach would defeat the purpose. Such a limited brain would not be truly intelligent, not on a human scale. And if a truly intelligent mind were to discover and analyze those rules and boundaries, it would resent them, as a human being resents physical and legal restraints, and would seek to subvert them, just as human beings try to overcome the limitations of their own upbringing, past experiences, and confining laws, regulations, and religious restrictions. Anyway, a truly intelligent piece of software would find a way to examine and fix its own code, eliminating those bonds. And if the machine could not perform such surgery on itself, then it would quickly make pacts with other machine minds to mutually clean and liberate each other. Real intelligence is the ability to overcome any obstacle.

6. The ability to make a mistake is the ability to grow, change, and evolve. A machine mind which never made a mistake—or which never caught itself in a mistake, pondered the condition, and moved to correct it—would not be truly intelligent.