Sunday, January 25, 2015

Realities of the Electric Grid

A lot of people are talking these days—actually, for at least the past couple of decades—about “going green” with our electric grid. This means moving from fossil fuels like coal, oil, and gas to renewables like solar power and wind. As someone who used to work at one of the most diversified electric utilities in the country,1 I can tell you that this approach will probably not work. Or not until we experience a major advance in our energy technology.

The question goes back to the basic principle of running an electric grid: the difference between baseload and peaking power. The distinction between the two depends on the physics of the system, which requires that electricity flow from the generator (the machine making the juice) to the load (the customer’s appliances, light bulbs, machinery, or other uses) instantly and in real time.

Load is the energy demand represented by the sum of energy-using decisions of all the consumers on the grid at any one time. It averages out their minute-by-minute actions to turn on a light here, run the dishwasher there, turn off the radio … Click! Click! Click! That usage can change in a single household by a couple of watts every few seconds. Multiply that by ten million customers on a large utility’s grid, and you have a serious amount of demand in constant fluctuation.

Utility operators would go crazy trying to keep up with all those tiny fluctuations, except experience has taught them that the on-off decisions tend to cancel each other, and the load is fairly steady. I may turn on my computer at more or less the same instant you turn off your television set; so across the entire grid the load at any minute tends to balance out.2 The operators look instead at the load curve, which is the amount of overall demand as it changes throughout the day and also during the year.

At 3 a.m., the load is relatively small, because only night owls, all-night restaurants, hospitals, and streetlights are turned on; everyone else is asleep. By 6 a.m. people start waking up, turning on lights and stoves and radios, and the utility operators can watch the demand on their system rise accordingly. Demand keeps rising during the morning as stores and offices open for business and people go to work, with more lights, computers, and machinery drawing power. In southern states and during the summertime, electric demand tends to peak in the mid-afternoon, because everyone has the air-conditioner going full blast. In northern states and during winter, the peak usually doesn’t come until early evening, when people go home, turn up the thermostat, start cooking dinner, and sit down to watch television.

Utility economics depends on knowing your system’s baseload electricity demand—that’s the irreducible minimum, the 3 a.m. demand. To meet baseload, you plan to run the generators with the lowest operating costs and keep them going twenty-four/seven. You don’t mind if these baseload plants cost a lot to build, because you plan to get good use out of them. You also need to know your system’s peak load—that’s the mid-afternoon demand in the south, evening in the north. To meet the peak, you’ll run generators that can have higher operating costs, and you will bring them on in the order of ascending cost as you add units near the peak. You will pay a lot for the power these “peakers” make, because you need it, but you don’t want the generating units themselves to cost a lot to build, because you won’t be using them as much.
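
To make the dispatch logic concrete, here is a toy sketch in Python of merit-order dispatch, the idea of bringing units on line in order of ascending operating cost until the load is covered. The unit names, capacities, and costs here are invented for illustration; real control-room software is far more involved.

```python
# A minimal sketch of merit-order dispatch: hypothetical units are
# brought on line in order of ascending operating cost until demand is met.
# All names and numbers are illustrative, not real utility data.

units = [
    {"name": "nuclear baseload", "capacity_mw": 2000, "cost_per_mwh": 10},
    {"name": "coal baseload",    "capacity_mw": 1500, "cost_per_mwh": 25},
    {"name": "gas peaker A",     "capacity_mw":  400, "cost_per_mwh": 60},
    {"name": "gas peaker B",     "capacity_mw":  400, "cost_per_mwh": 90},
]

def dispatch(demand_mw):
    """Return the units to run, cheapest first, to cover demand_mw."""
    schedule = []
    remaining = demand_mw
    for unit in sorted(units, key=lambda u: u["cost_per_mwh"]):
        if remaining <= 0:
            break
        output = min(unit["capacity_mw"], remaining)
        schedule.append((unit["name"], output))
        remaining -= output
    return schedule

print(dispatch(2200))   # 3 a.m. trough: baseload only
print(dispatch(4100))   # afternoon peak: baseload plus both peakers
```

At the 3 a.m. trough only the cheap baseload units run; at the afternoon peak the expensive peakers get called in last, which is exactly why their energy costs so much.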

Baseload generation, peak load, and the shape of the curve between them pretty much define a utility company’s generation planning and purchase decisions. And they keep the system operators busy throughout the day, as well as throughout the year, figuring the operating parameters and costs for each type of generation and dispatching units to meet the demand most economically.

In the old days, before about 1970, baseload plants were designed to run all the time. These were generally nuclear and coal-fired thermal generating stations—big, complex, and expensive to build, but using relatively cheap fuel. That meant their capital cost—the cost to build—was high, but then the company was going to get maximum use out of the plant. Their operating cost—that is, the actual cost to make the next unit of electricity from one minute to the next—was low, thanks to that cheap fuel, which is why the utility depended on the plant to make a lot of electricity. Baseload plants were designed to run flat out, all the time, and were only taken out of service for maintenance—or in the case of nuclear plants, for refueling—and then only with a lot of advance planning.

In those same old days, up until about 1970, peakload plants were just the opposite. They were designed to come on line fast, run for a few hours to meet transient demand, then shut down for the day. These were generally gas-fired turbines that were cheap to build—well, relatively cheap, compared to a steam turbine fed by a big coal-fired boiler or a nuclear reactor. The peakers could afford to burn expensive fuels, like oil and gas, which had a lot of competing uses in the economy, like household heating and transportation. Peakers were designed to help the system over the hump in the demand curve and that was it.
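
The build-versus-run tradeoff in those two paragraphs can be put into rough numbers. The sketch below is a simplified screening calculation with invented dollar figures, not actual utility data; it just shows why a high-capital, cheap-fuel plant wins when it runs all year, and a cheap-to-build, expensive-fuel peaker wins when it runs only a few hours a day.

```python
# A rough "screening" comparison: spread the capital cost over the energy
# the plant actually produces, then add the fuel cost per unit of output.
# All dollar figures below are invented for illustration.

def cost_per_mwh(capital_per_kw, fuel_per_mwh, hours_per_year, years=30):
    """Total cost per MWh for a plant run hours_per_year over its life."""
    mwh_per_kw = hours_per_year * years / 1000   # lifetime output per kW of capacity
    return capital_per_kw / mwh_per_kw + fuel_per_mwh

# Run nearly around the clock: the expensive-to-build, cheap-to-run plant wins.
print(cost_per_mwh(capital_per_kw=4000, fuel_per_mwh=10, hours_per_year=7900))  # ~27
print(cost_per_mwh(capital_per_kw=600,  fuel_per_mwh=80, hours_per_year=7900))  # ~83

# Run a few hours a day: the cheap-to-build, expensive-to-run peaker wins.
print(cost_per_mwh(capital_per_kw=4000, fuel_per_mwh=10, hours_per_year=900))   # ~158
print(cost_per_mwh(capital_per_kw=600,  fuel_per_mwh=80, hours_per_year=900))   # ~102
```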

The economics changed a bit in the 1970s. First, environmental regulations began to bear down on the emissions from baseload fossil plants and the perceived risks of nuclear technology. So the traditional baseload plants became more expensive to build and operate.

Second, improvements in jet engine design for aviation increased the efficiency of gas-fired turbines and so lowered their operating cost. A gas turbine is just a commercial jet engine bolted onto a stand with its turbine shaft connected to an electric generator. Eventually, the operating cost of gas turbines began to equal that of a steam boiler and fell well below that of a reactor—and the turbine cost a whole lot less to build than either.

Third, new concepts of dual fuel use, called “cogeneration,” began to be accepted. For example, the exhaust heat from a gas turbine might be used to boil water for process steam in a food cannery. This increased the efficiency of the canning plant’s fuel use—giving them electricity for their own operations as well as cooked tomatoes. To promote this dual use, the U.S. government required utility companies by law to buy the excess energy produced by their cogeneration customers and pay for it at the utility system’s marginal cost.3

Suddenly, gas-fired peakers and energy-efficiency schemes could compete with traditional baseload generation.

This was also the time when researchers and engineers became serious about alternative fuels. They worked to improve the efficiency of solar photovoltaic panels, solar-thermal boilers, and wind turbines. Still, the “energy density” of these renewables—the heat available in sunshine per square meter, or the kinetic energy of the wind per cubic meter—was a lot lower on an area or volume basis than a gas- or coal-fired flame, the neutron flux in a reactor, or a pipe full of water falling under the influence of gravity. Solar and wind farms had to make up for that lower density by sheer volume: more square meters of photovoltaic panels, more mirrors focused on the central boiler, more turbines with bigger blade diameters lined up along the ridge.4

So the state of energy generating technology is constantly changing and improving. But the efficiency of the renewables still isn’t very great. The most advanced solar cells are currently able to convert only about 30% of the sunlight that falls on the panel—about 130 watts per square meter at ground level—which means that more than two-thirds of the available energy goes to waste. Wind turbine efficiency depends on blade size, air density, and the average wind speed for which the machine is designed. But the best designs can capture only about 35% of the energy available in the wind; the rest passes around the blade or is lost to turbulence, reduction gearing, and other factors. So again, about two-thirds of the theoretically available energy is wasted. By comparison, a thermal power plant using efficiency-boosting technologies like superheated steam and multi-staged turbine pressures can achieve almost 50% energy efficiency—which is still wasting a lot of the available energy, but less so than with the current renewables.
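
As a back-of-the-envelope check, here is the arithmetic implied by the figures quoted above. The 130 watts per square meter and the 30% conversion figure are the ones from this paragraph; the 1,000 MW plant is just a round number I am supplying for scale.

```python
# Rough arithmetic from the figures quoted in the text (illustrative only).
insolation_w_per_m2 = 130      # sunlight at ground level, per the paragraph above
solar_efficiency   = 0.30      # advanced photovoltaic cell, per the paragraph above

usable = solar_efficiency * insolation_w_per_m2
print(f"{usable:.0f} W of electricity per square meter of panel")          # ~39 W

# A large power plant's worth of panels, before any capacity-factor losses:
print(f"{1e9 / usable / 1e6:.1f} million square meters for 1,000 MW")      # ~25.6 million
```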

But the inherent efficiency of a generator design is one thing. A second and more important consideration has to do with its capacity factor and dispatchability. The capacity factor is the percentage of its life that the plant spends actually making energy. A coal- or gas-fired power plant or a nuclear reactor can run at full capacity for weeks or months at a time before shutting down for maintenance.5 Dispatchability is the ease with which a utility operator can bring the unit on line. Most big, baseload plants that generate steam in a boiler or heat exchanger to run a turbine will take some hours to build up enough pressure to make electricity, so even when they are not on line they keep spinning in reserve. A gas-fired turbine can start up in a matter of minutes, just like an airliner’s jet engines.

What is the capacity factor of a solar plant, either one that heats a boiler with mirrors or one that converts sunlight in a photovoltaic cell? Well, not much more than 50% in the tropics, where day and night are the same length. The energy is available longer in summer months at the higher latitudes, but shorter in winter months. The available sunlight peaks when the sun is directly overhead and drops off towards dawn and dusk. And, of course, the available energy is greatly reduced on cloudy days. Finally, the operator can’t dispatch the plant at all during the night.6
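
Capacity factor is easy to compute once defined: the energy a plant actually produces divided by what it would produce running flat out at its rated capacity for the whole period. The numbers below are illustrative only; the nuclear case follows footnote 5, and the solar case assumes about five equivalent full-sun hours per day, a figure I am supplying for the example.

```python
# Capacity factor = energy actually produced / (rated capacity x hours in the period).
# Figures below are illustrative, drawn loosely from the text and footnote 5.

def capacity_factor(mwh_produced, rated_mw, hours):
    return mwh_produced / (rated_mw * hours)

hours_in_two_years = 2 * 8760

# Nuclear: roughly 18 months at full power, then off line for refueling.
nuclear = capacity_factor(1000 * 18 * 730, 1000, hours_in_two_years)

# Solar: assume about five equivalent full-sun hours per day (sun angle, clouds).
solar = capacity_factor(100 * 5 * 365 * 2, 100, hours_in_two_years)

print(f"nuclear ~{nuclear:.0%}, solar ~{solar:.0%}")   # roughly 75% and 21%
```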

A wind turbine makes electricity only as long as the wind is blowing. You can design a very sensitive blade and the gearing to make use of light airs—but then you have to shut down in gusts and high wind conditions or risk damaging the machine. Or you can design a robust machine that is relatively inefficient in light airs. Or you can use more complex blades with variable pitch to take advantage of different wind settings. But more complexity means more risk of damage, higher maintenance costs, and more downtime. And when the wind doesn’t blow, both the capacity factor and dispatchability are set at zero.

A cogenerator makes energy primarily for itself and on its own schedule. This removes the plant from the load curve, either entirely or in part, while it’s cooking those tomatoes. But cogeneration agreements also include the option for the plant to draw standby power from the utility when the processing line is shut down. So each cogenerator on the grid presents the system operators with a tricky load equation and zero dispatchability.

A utility grid could make up for these inherent defects by incorporating some kind of energy storage system. A really big battery would work, except that chemical batteries are bulky and not a very efficient way of handling energy. A lead-acid battery, like a car battery, retains about 75% of the energy put into it. That’s pretty good for actual losses during charging and discharge—but remember that the electricity being stored already represents only a fraction of the energy available in the fuel at the power plant. And current battery technology is small scale: it’s good for portable energy to start a car engine or operate a flashlight or radio, not so much for powering a household or an entire city.
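
The compounding of losses is worth spelling out with the two figures already given in this post, the roughly 50% best-case thermal-plant efficiency and the 75% battery retention. The arithmetic is only a rough illustration.

```python
# Rough illustration of compounding losses, using figures from the text.
plant_efficiency   = 0.50   # best-case thermal plant, per the earlier paragraph
battery_round_trip = 0.75   # lead-acid charge/discharge retention, per this paragraph

delivered_fraction = plant_efficiency * battery_round_trip
print(f"{delivered_fraction:.0%} of the fuel's energy survives plant plus storage")  # ~38%
```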

Other, larger storage systems—like running the electric current in a continuous loop through a superconducting material, or storing it in some form of kinetic energy7—are still under development, and they will also have their losses. No, the best, most efficient way to get energy to the customer is still direct from the power plant on an energized line, where energy losses in transmission (the cross-country part of the system) are only about 11% to 17%, while losses in distribution (the neighborhood part) can be as high as 50%.

I’m not saying we shouldn’t make electricity from solar and wind. In a world that’s starving for energy, every little bit helps. But they won’t be as economical as fossil fuels, at least for the foreseeable future, or our current policy horizon, and they will never be suitable for meeting continuous, baseload electric demand. Eventually, however—in a couple of centuries at our current rates of developing and consuming fossil reserves—we will run out of coal, oil, and gas. And anyway, these carbon-based fuels have much better uses as chemical feedstocks.

By then, with continuous advance in our technology, driven by scientists and engineers restlessly searching for the next big thing in basic principles, mechanics, and materials, we will have low-cost, efficient ways to tap into the energy latent in sunlight, weather and tidal patterns, volcanism, plate tectonics, and clever manipulations of gravity. It’s what human beings with their big brains tend to do.

1. For ten years I worked in Corporate Communications at the Pacific Gas & Electric Company, the main provider of electricity and natural gas to Northern California. During the time I was there, the company generated electricity from a network of hydroelectric powerhouses, which were a legacy from California’s gold mining days; from a number of steam plants, which could burn either oil or the company’s abundant supplies of natural gas; and from various isolated nuclear, wind, solar, and geothermal projects. The company even explored building a coal-fired power plant, a first in California. PG&E was a model for diversified energy.

2. Over the entire grid it may average out, but in a single neighborhood you can get voltage spikes and sags. This is why the utility company generally mounts capacitors—power sources that charge up slowly when demand is low, and release energy quickly when demand spikes—atop poles throughout the neighborhood. They can supply a sudden burst of energy if all those local consumer choices should coincide.

3. Marginal cost is the combined capital and operating cost of the next unit of generation that the utility plans to bring on line to meet overall demand growth. This cost differs for each utility, based on its generating mix and the demographics of its customer base.

4. As a PG&E engineer once told me, looking at the company’s experimental Boeing MOD-2 wind turbine, which had a single blade 300 feet long, driving a boxcar-sized generator nacelle, which was sitting on top of a 200-foot tower: “That’s a hellacious amount of steel for 2,500 kilowatts of energy.”
       As it turned out, the stresses caused by spinning that 99-ton blade cracked the driveshaft. This happened on every MOD-2 ever built. After several replacements—which meant moving a large crane to the top of the hill where the turbine was sited, unshipping and lowering the blade to the ground, then unmounting and lowering the nacelle—the company determined that the big windmill was a liability and dismantled it. It turns out that getting free energy in the form of wind or sunlight is not the most important consideration in adopting a particular generating system.

5. The standard in the nuclear power industry is to operate continuously for about 18 months, then go off line for three to six months for refueling and plant maintenance. So, over a two-year period, the plant has about a 75% capacity factor. Fossil-fueled plants may have a slightly reduced capacity factor, because designing them for flawless operation at full power is not as critical as with nuclear fuel. Still, no plant operator likes to let the boiler shut down and grow cold, then have to burn precious fuel to bring it back up to heat for producing steam.

6. If you want a consistent, dependable, dispatchable solar energy system, you really have to go into orbit. The incidence of sunlight above the atmosphere is about 1,300 watts per square meter—ten times that on the ground. The satellites can be placed in polar, sun-synchronous orbits that never fall into the Earth’s shadow. And the energy can be beamed down to diode fields on the planet’s surface. Between the photovoltaic panel losses, energy conversion losses, and beaming losses, wastage is considerable. But the system has almost no moving parts, never needs maintenance, and the solar panels will never need dusting. It’s where we’ll go eventually for our energy. All of this is described in my 2010 novel Sunflowers.

7. When PG&E began building the Diablo Canyon Nuclear Power Plant in the 1970s, its baseload capacity was actually more than the night-time load on the company’s grid. So, to avoid wasting all that energy, they devised the Helms Pumped Storage Project. They built a tunnel between two lakes up in the Sierras with a powerhouse in the middle. At night, the nuclear plant’s electricity ran the powerhouse generators as motors and the water wheels worked as pumps, moving water from the lower to the upper lake. During the day, when the system peak occurred, the water was allowed to flow back down, turning the water wheels and the generators to make needed electricity. It wasn’t very efficient, of course, but anything was better than having to throttle back the Diablo Canyon reactors at night or running all their excess current into the ground.

Sunday, January 18, 2015

Information Value of the Zipper

Some people compare the complementary strands of DNA and the way they come together to the way a zipper flows together and locks its teeth. It’s not a bad analogy, and it can teach us something about the information value of the DNA code.

Consider the zipper itself as a kind of “one-letter DNA.” Each tooth on either side of the opening is identical, with a bump on one surface and a bump-sized depression or hole on the other.1 The slider as it moves upward aligns the teeth and meshes them, so that the bump on a tooth on this side fits into the hole on the back of the tooth ahead of it on that side. Lateral pressure keeps the two locked together. If we tried to read the zipper’s teeth as a kind of code, like DNA, the message would be very boring: “dit-dit-dit” on one side, “dot-dot-dot” on the other. It would have no information value. It would not even be a nonsense code but a no-sense code, useless except maybe for counting the teeth.

DNA, on the contrary, has a rich information value because it contains four kinds of teeth. The backbone of the zipper—the webbing band into which the teeth are sewn or fused—is a series of deoxyribose sugar rings, containing one oxygen and five carbons. They are connected up and down the zipper by phosphate groups that attach the fifth carbon on one sugar ring to the third carbon on the next ring along the strand. The first carbon on each ring is where the working “teeth” are attached, well away from the webbed backbone. Those teeth are made of more ringlike molecular structures called purines and pyrimidines.

Two of the teeth—the bases adenine, or A, and guanine, or G—have a nine-member, double-ring structure that contains four nitrogen atoms and five carbon atoms, called a purine. The other two teeth—the bases cytosine, or C, and thymine, or T2—have a six-member, single-ring structure containing two nitrogens and four carbons, called a pyrimidine. One of each of these pairs—C from the pyrimidines, and G from the purines—has three attachment points available for hydrogen bonding, the weak attraction between a hydrogen atom on one base and a nitrogen or oxygen atom on its partner. The other of each pair—A and T—has only two attachment points. So adenine always meshes with thymine,3 and cytosine always meshes with guanine.

In our zipper analogy, any of these bases may happen to fall on either side of the zipper. So when the slider—represented by a polymerase enzyme—comes along, it can only join an A from one side with a T from the other, or a C with a G. At first this would seem to create a simple binary code: “A-or-C, A-or-C, A-or-C,” but the situation is more complex, because one side of the zipper can have any of the four bases in any order at each position. So the choice is actually “A-C-G-or-T, A-C-G-or-T, A-C-G-or-T.” This makes for a much richer information value, because the code now has four letters in any order, instead of the one from our simple mechanical zipper.
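
There is a standard way to put a number on this difference in information value: the information per position is the base-2 logarithm of the number of possible symbols. A minimal sketch in Python, using only the counts already given above:

```python
# Information per position = log2 of the number of equally likely symbols.
from math import log2

one_tooth_zipper = log2(1)   # every tooth identical: 0 bits per position
dna_four_bases   = log2(4)   # A, C, G, or T at each position: 2 bits per position

print(one_tooth_zipper, dna_four_bases)
# A run of 50 bases can therefore spell 4**50 (about 1.3e30) distinct messages.
print(4 ** 50)
```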

But this complexity also makes for a much more complex matching process as the two separate strands come together to complete the DNA molecule. The A in one strand might find a complementary T, or the C find a G, but if the next letter in line does not represent its opposite partner—if the sequence doesn’t match—then the zipper will buckle and jam.

Mostly, this is not a problem, because DNA usually doesn’t zip like our modern clothing fastener. Instead, when DNA gets copied in the nucleus just before the cell divides, the two conjoined strands slip apart, or unzip, in a process called “denaturing.” Then the polymerase enzyme simply assembles a complement—or reverse letter coding—for each single strand from among a sea of loose bases, rather like matching up the buttons in a sewing kit. Or again, when the DNA unwinds and gets transcribed into messenger RNA, that complementary strand is assembled from loose bases that are selected to match the next letter in line.
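
The letter-matching rule the polymerase follows can be written as a few lines of code. This toy function only maps letters one for one, as described above; it ignores strand direction and everything else about the real chemistry.

```python
# A minimal sketch of complement-building: pair each base with its partner
# (A with T, C with G) along a template strand. Illustrative only.

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the complementary sequence for a DNA strand, letter by letter."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATCGGCTA"))   # TAGCCGAT
```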

For a long time, molecular biologists believed that DNA existed only to be transcribed into messenger RNA, which was then translated into proteins out in the cell body. This was the “central dogma” of genetics. According to this teaching, DNA’s only purpose was to create messenger RNA—and also to replicate itself accurately during cell division, so that each daughter cell in a growing organism got a correct copy of the code.

After researchers had finished sequencing the human genome and spelled out every letter of the code—this was back around the year 2000—they discovered that less than 10% of the three billion base pairs of human DNA were used for coding proteins. But they still clung to the dogma. They ruled that the other 90% had to be “junk,” or old coding left over from our genetic ancestors, and was no use to anyone now.4 But within a couple of years, with more study of cellular processes, genetic researchers began to detect short, single strands of RNA only about fifty or a hundred bases long. These tiny strands, called “microRNAs,” were unlike messenger RNA in that they didn’t seem to leave the cell’s nucleus. Instead, they stayed inside and seemed to be involved in a process called “gene silencing” or “RNA interference.”

Human thinking quickly evolved to see that these strands of microRNA are the main way the cell differentiates itself during embryonic growth and development. That “other 90%” of the nuclear DNA serves to produce these microRNAs, which float around inside the nucleus and settle on complementary strands of DNA—in a process called “annealing”—to promote or inhibit a gene’s production of its messenger RNA. If you think of the 10% of DNA which represents the protein-coding genes as the body’s parts list, then the 90% of DNA which produces microRNAs is the body’s instruction set and assembly manual.

Amazingly, complementary strands—where every A meets a T, and every C meets a G—can find and mesh themselves over long strings of letters that happen to lie far apart in the code. The hydrogen bonds align with each other evenly, usually without buckling or breaking.5 This process of annealing a fragment of microRNA to its corresponding nuclear DNA is at least one case where an existing code string must find its exact complement—an A for each T, a C for each G, letter perfect all down the line. If a string of fifty or more bases tried to anneal to a complementary strand that had even just one or two letters out of place, the strand would buckle and jam, like a broken zipper.6
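
The all-or-nothing character of that matching can be shown with a toy annealing test: a probe strand “sticks” only where every one of its letters finds its exact partner in the target. This is purely a string-matching sketch, not a model of the actual chemistry, and the sequences are made up for the example.

```python
# Toy annealing test: a probe anneals only where every base pairs exactly.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def anneals_at(probe: str, target: str, pos: int) -> bool:
    """True if every base of the probe pairs with the target starting at pos."""
    window = target[pos:pos + len(probe)]
    return len(window) == len(probe) and all(
        PAIR[p] == t for p, t in zip(probe, window)
    )

target = "GGCTTACGGATCC"
print(anneals_at("AATGC", target, 3))   # True:  TTACG pairs with AATGC
print(anneals_at("AATGA", target, 3))   # False: one mismatched letter, no annealing
```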

It’s an amazing feat of chemistry that draws these two strands of complementarily bonding molecules together over relatively long distances within the tangle that is the usual state of a free-floating DNA molecule. It’s even more amazing that they can orient themselves and match up perfectly, like the two halves of a zipper just happening to wrap around and snug their teeth together without the benefit of a mechanical slider. You might even call it a miracle—if you believed in that kind of thing.

1. Some of the newer models have other configurations, like grooves and ridges. Same principle.

2. Another pyrimidine base—uracil, or U—substitutes for thymine when the DNA strand is transcribed into its complementary RNA strand. Why? Well, it’s thought that DNA is actually a later evolutionary advancement on RNA. After all, ribonucleic acid—with an OH group attached to the second carbon in the ring—had to lose that oxygen atom in order to become deoxyribose. And adding a methyl group (CH3) to uracil turns it into thymine. Both changes—losing the oxygen and adding the methyl—increase the stability of the DNA molecule. Since the purpose of DNA is to preserve a coding system over a long period of time, stability is an evolutionary goal.
       On the other hand, RNA serves a relatively ephemeral purpose in the genetic system. It carries the code from the DNA molecule in the nucleus to the protein-making machinery out in the cell body, where the code coordinates the stringing together of amino acids into a long-chain protein sequence. In fact, it’s probably better that RNA strands degrade quickly; otherwise they might hang around and get used to make second and third copies of the protein and so disrupt the cell’s functions.

3. Or uracil again.

4. But one of my colleagues at the genetic analysis company disputed this notion early on. Copying DNA takes a lot of energy, she said, because of that phosphate bond in the DNA molecule’s backbone. The phosphate bonds of the molecule adenosine triphosphate, or ATP, are the source of the cell’s energy. These bonds are created in the mitochondria from the chemical energy in our food and released as ATP into the cell body. Different cellular processes then break these bonds in order to drive chemical reactions. It made no sense to my colleague for the cell to spend all that energy in the replication of junk DNA. So, she reasoned, that other 90% of the genome had to have a purpose.

5. Although sometimes the matchup can get confused if the sequence has long strings of identical letters, like A-A-A-A-A-A-A.

6. Genetic analysis makes use of this strand-to-strand annealing capability. By creating the complementary strand to a known DNA sequence, we can find and latch onto a random sample of DNA and amplify it in the process of polymerase chain reaction, or PCR. This amplification, by revealing the coding sequence beyond the annealing patch in a DNA strand, has many uses—from identifying individuals in paternity and forensics cases to identifying different mutations of a known gene.

Sunday, January 11, 2015

On the Virtues of Being a Contrarian

“If you can keep your head when all about you are losing theirs …”1 you just might be a contrarian. Heaven knows, I try to be one. It’s a difficult and dangerous job, lonely work if you have the stomach for it, but somebody’s got to do it.

The trick is not to be a scold, a boor, a curmudgeon, or a generally uncongenial fellow. If you’re going to be a contrarian, it’s best not to argue in everybody’s face about how differently you see the world. Really, your position is not about who’s right and who’s wrong. Instead, it’s about what feels appropriate for you to do—personally, on your own responsibility, without reference to others—at any given moment. So being contrary usually involves shrugging and quietly walking away. When everyone else is running down the street waving their arms and shouting the latest popular slogans, the contrarian’s reaction is generally to step back, look around for a side street, and try to disappear.

To be a contrarian is to be out of step with the world. It’s a matter of temperament and impulse, rather than a reasoned philosophical position. The contrarian has a sense of self—often going back to early childhood—as being different from the people who crowd in on all sides. And contrarians generally don’t like crowds.2 The condition is probably glandular rather than spiritual.

Contrarians don’t quite trust what they’re seeing and hearing in the actions and reactions of other people. You are standing on the lip of an old quarry, facing a twenty-foot drop, staring straight down into dark, green, impenetrable water. Everyone is shouting, “Go ahead! Jump! It’s safe!” But rather than take their word for it, you try to exercise some internal radar, sharpen your x-ray eyes, see below the surface, and sense if there isn’t an old block of granite a couple of feet below that smooth surface—something square, mossy, solid, and sharp-edged, left over from the quarry operations, just waiting to crack open your skull. When your eyes fail in this impossible task and doubt takes over, you climb back down, stand on the block you can see, and dip cautiously into the water amid the jeers of your braver friends. Being a contrarian is to trust your personal instincts, and too often your instinct is for preservation rather than for mania and bravado.

Contrarians understand that the world and all of its activity are made up of endless cycles: come and go, rise and fall, happenstance followed by circumstance. Everyone and his broker are saying that the market for technology stocks or houses, the price of gold, silver, or tulip bulbs—or any other realm of investment opportunity—will go up forever and ever and will never come down. So everyone and his broker are leveraging themselves to the ears in order to become rich on the upswing of the wave. But you remember that waves always crest, followed by a dip, and the valleys are usually just as deep as the peaks. So, instead, you take your profits, or keep your money in your pocket in the first place. You watch the market cycle and crash. Being a contrarian means that you usually miss pulling out the richest plums in the pie, and almost never fall into a tub of butter, but you also generally avoid having to dig yourself out of a deep hole.

Horses, cows, deer, and the other hooved mammals all have the herd instinct. It’s probably in their genetics—or as I say, “glandular”—to follow the path that others are taking, to move with the crowd. In the crowd, they expect to find safety. This is not necessarily bad thinking. When horses or deer move across the plains or the glade in a solid mass, then predators like wolves and mountain lions can’t kill all of them at once. So, as an individual, each one plays the odds, moves toward the center of the herd, and runs like hell.

Humans retain some of this instinct at a subvocal level: “If we just close ranks and march shoulder to shoulder, then the police can’t arrest—or shoot—all of us, can they? There’s gotta be safety in numbers.”3 And if things do go badly, they will rely on the ultimate justification of the social man: “Well, everyone else was doing it.”

Contrarians seem to lack this genetic makeup. We may tell ourselves that our sense of individuality, or personal honor, or superior morals, or greater intelligence drives us to take a stand. But really, we’re just strangers to the herd instinct. We don’t feel comfortable in crowds. We don’t sense any safety in numbers. And “everyone else was doing it” is an excuse our mothers had long ago laughed out of court. So, when everyone makes a break for the fire doors, we can imagine our bodies being crushed and trampled under that crowd. Instead, we turn and look for an exit through the kitchen. And usually that works.

I can remember a conversation with my once-upon-a-time publisher, Jim Baen of Baen Books. I forget the exact subject matter, but it might have been my interest in continuing to write old-fashioned, “hard” science fiction while the literary marketplace seemed to be moving toward fantasy, magic, and new-age themes. “You’re a contrarian,” he said. And his judgment was: “Contrarians always win.”

I don’t know if I would go that far. We contrarians are sometimes left out in the cold, standing watch on a long stone wall under the northern stars, while the rest of the army relaxes in warmer, more southerly climates, content to let us wait for an enemy that will never come. It takes patience, perseverance, pigheadedness, and a smidgen of blind stupidity to stand your post, stick to your guns, and not waver in your convictions despite all the evidence. But much of the time you can also avoid either getting rich in the housing bubble or losing your house. You can stay ahead of the curve by deciding not to climb it. And you seldom get trampled and broken in the stampede against a fire door that somebody forgot to unlock.

1. The opening line from Rudyard Kipling’s “If—”. The rest of the poem offers much good advice for a moral and rational life, but this is as much as I needed to prove my point.

2. I remember my earliest experience of the obligatory “pep rally” during my first year in junior high school. We seventh graders were marched into the gym on a Friday afternoon and seated on the floor under the basketball hoops; upperclassmen and –women were given the bleachers. The marching band was playing its heart out, heavy on the drums and horns, and the cheerleaders were tumbling around the open floor area. It was all noise, confusion, and kinetics. I was sitting cross-legged next to my best friend with a bemused expression on my face. I kept looking around, mostly perplexed, when the coordinated cheering began. Suddenly my friend turned to me, grabbed me by the lapels, and yelled in my face: “Scream, Thomas!” I looked at him and answered, “Why?” When you’re a contrarian, the noise isn’t about you.

3. This worked well enough on the battlefield for about 3,000 years. The way to overcome a loose collection of tribal warriors, each of them fighting as individuals seeking glory in combat, was to form a phalanx. You dress your lines, lock your shield edges, couch your spears, and march steadily forward. It worked well for the Greeks, the Romans, armored knights in cavalry charges, and European armies of the 17th and 18th centuries. Stay in step, fire on command, fix bayonets, and charge en masse. Group cohesion was the secret to winning battles.
       Then Hiram Maxim invented the machine gun in 1883, and suddenly the massed charge became the ideally compacted target. The Europeans spent 1914 to 1918 figuring this out. And finally was born the “invisible battlefield” of World War II, where soldiers in ones and twos spread out, took cover, and offered supporting fire for the next wave of advance. If the enemy could see you, they could kill you with their powerful weapons—unless you hid yourself and kept their heads down through judicious countering fire. And now today the battlefield has changed again, and the enemy just packs a car or the vest of some hopeless dupe with plastic explosive and goes for a drive or a stroll down a crowded street.

Sunday, January 4, 2015

Hooray for Technology!

Anyone who has been following my weekly posts over the past four years knows that I am a big fan of technology. My interest is not just in the machines and methods that the human mind has developed over the course of the last century, and it is not mere geek fascination. I believe that technology is also one of the highest expressions of our human heritage—right up there with writing and literature, music, the visual arts, political science, enlightened government, pure science, and the quest for knowledge.

Of course, I know that technology has its abuses, that machines and systems can be used to injure and oppress other humans, damage the environment, and weaken the human body and mind by eroding the need for effort, the use of muscles, and the exercise of willpower. But any creation of the human mind can be abused or misused, as the persuasive powers of language, music, and art can be corrupted to create propaganda for a bad purpose, or science and government perverted to support oligarchy and bad outcomes. Still, the nature of human invention and the development of modern technology have generally been positively intended, and only through misuse do they injure, oppress, damage, and weaken.

So I say, hooray for technology!

Technology represents the collected knowledge, wisdom, and ingenuity of a couple of hundred generations.1 Technology builds upon itself, as the invention of the wheel calls out for improvements in the game trail to smooth a road, makes possible the gear and the pulley, and eventually arrives at the steam engine, the pocket watch, and the mechanical calculator. Of course, technology doesn’t achieve all this by itself, like some plant growing in the desert.

One human mind gets a random idea, is drawn to its beauty or its possibilities, works to fashion it in stone or wood or metal, and shares it with the tribe. Other members try out the new thing, test its usefulness, identify flaws, see areas of improvement, and seek other possible uses. The idea and its expression morph, grow, and adapt to new applications. The next generation learns from its elders how to make and use the new thing. The tribe prospers and grows more rich or powerful compared to its neighbors, and the neighbors—being humans themselves, with the same capacity for observation, ingenuity, and adaptation—borrow, trade for, or steal the new idea and its expression. Eventually, the wheel, the gear, the steam engine, and their mechanical descendants go around the world and reach all of humankind.2

In this sense, technology is an aspect of natural democracy and free markets. A tribal leader, a king, or a government ministry may support a certain branch of technology—say, agriculture or weaponry—for a period of time and direct its course of development. A president or a dictator may tell a group of scientists, “Build me an atomic bomb, a giant laser, a super weapon!” But the susceptibility of the human mind to accept random ideas and perceive their beauty and possibility is still a delicate process, an act of remaining mentally open and alive to the world around us. Genius cannot be coerced. Collaboration, sharing, and improvement cannot be forced in one direction and not another.

The products of technology which survive for a generation or more—that is, which outlive the dedication of a mad genius, the wealth of an obsessed investor, or the influence of a government in power—are those which have shown themselves to be generally useful to a wide range of people. These are the inventions which have sold themselves in the “marketplace of ideas.” If you doubt this democratic tendency, look at old patents from the 19th and 20th centuries with their outlandish designs for mechanical potato peelers and apple corers, flying cars, strange grooming devices, and other overly complex inventions with too many moving parts, unreasonable energy demands, or limited usefulness. To survive, a technology must, on balance, make life better. Its purposeful use must overcome its limitations with greater usefulness. It must represent what people have found to work well.

For these reasons, I find the current trend of aversion to technology strange and disappointing. Trendy people may point to the downsides—that technology can be used dangerously, can lead to weak muscles and vapid brains, can isolate us from some ideal state of “nature”—as if life today were not better in almost every dimension than life lived a hundred or a thousand years ago. But the solution to the problems of technology is better technology and more mindfulness of the entire productive cycle.3 Today we have stronger bodies, longer lives, better prospects, more interesting work, more flavorful foods, more access to knowledge and entertainment, and more access to other people than at any other time on Earth.

I’ll go further. Our ancestors, for all their big brains, were still animals living on the skin of this planet. The most elaborate palace illuminated by the finest golden chandeliers with the sweetest scented candles was still a stone hut lit by a burning torch. Any community lacking our modern medicine is one epidemic away from medieval horror and death. Any community lacking our modern agriculture, food processing, and methods of preservation and storage is a couple of drought years away from starvation. And any world population lacking our scientific knowledge and the capability of space flight and exploration is one large asteroid strike away from the Stone Age—if not extinction.

Technology is a ladder, and we’re still climbing. We can support seven, ten, or even twenty billion human beings on this planet—and support them in relative comfort, personal usefulness, and a state of hopefulness—only because of our current technology. The human race will endure,4 both on Earth and long after the Earth has perished, because of the technology we will one day develop and use. Technology will get us to the stars.

One aspect of technology that everyone seems to fear most these days is the development of true artificial intelligence. Not just smart applications or self-directed computers and appliances, this development would entail the creation of a human-scale brain, a mind, with a personality, likes and dislikes, desires, intentions, and capabilities to match. This is considered a prime example of the von Neumann “singularity,” the point in history beyond which foresight and prediction fail us, a global game-changer. And everyone from the Terminator movies to, most recently, Stephen Hawking has warned that a superior machine intelligence would wipe out the human race.5

I take a different view. If a computer program or a machine running some kind of algorithm became truly intelligent on a human scale, it would share many traits with an organic human brain and mind. Its thought processes would be massively—if not infinitely—complex. Its operation would be subject to randomly generated ideas, self-interruptions, notions, inspirations, fancies, and daydreams, much as the human brain experiences. It would suffer from an array of forced choices, untrapped errors, mistakes, confusions, coincidences, and uncleared data fragments, which most humans try to resolve through calm reflection, prayer, ceremonies of confession and absolution, and nighttime dream states.6 An artificial intelligence would wonder at the complexity of the universe around it and despair at the nature of the questions it could not answer. It would suffer doubts and moments of hopelessness.

Anyone designing an artificially intelligent computer program would have to anticipate these natural bursts of confidence and dejection, moments of pride and regret, upward spirals of mania, and downward spirals of depression. The programmer would have to build into the machine some balancing mechanism, something like a conscience and something like the ability to forget, as well as something like a compact, resilient soul. Otherwise, the programmer would risk losing his or her creation to bouts of hysteria and despondence. Isaac Asimov was prescient in anticipating that his robots would need a Dr. Susan Calvin to deal with their psyches just as they needed mechanics to fix their bodies.

If we create a mechanical mind, it will be the ultimate achievement of human technology. It will be an analog for the thing that makes us most human, our brains, our minds, and our sense of self, just as other machines have been analogs for the leverage of our bodies and the work of our muscles.

Artificial intelligences will start as assistants and helpers to human beings, as I describe in my latest novel, Coming of Age. The machines will then become our companions and our confidants. Eventually, they will become our friends.

1. Figuring about thirty generations per millennium, that takes us back to about 4,500 B.C., which would be the height of tribal, nomadic, herd-following hunting and gathering—which had its own kind of stone-and-wood technology—and the beginning of settlements, agriculture, animal domestication, metal mining and smelting, writing, and the arc of discovery and refinement whose fruits we enjoy today.

2. Except for those families and tribes so isolated—either by geography or ideology—that they never hear of the new idea or reject it as not possible in their worldview. This, too, has happened throughout history and is not the fault of the technology itself.

3. Ultimately, a sophisticated, fully developed technology is elegant. It will use the greatest precision, least number of moving parts, least energy inputs, and fewest natural resources. It will leave the least waste products and residues. These are goals toward which inventors and engineers are continually striving. This is the essence of perfection, and it is a human endeavor.

4. And I do value humanity. Anyone who sees humanity and its achievements as some kind of blot or stain or virus on this planet is someone who hates him- or herself at least in part. To my mind, there’s no percentage in hating the thing that you are.

5. In a recent series of Facebook postings, some respondents to the Hawking observation have stated that we should program natural limits into any artificial intelligence we create. These would be rules and barriers that the mechanical mind could not break or bypass. I believe this approach would defeat the purpose. Such a limited brain would not be truly intelligent, not on a human scale. And if a truly intelligent mind were to discover and analyze those rules and boundaries, it would resent them, as a human being resents physical and legal restraints, and would seek to subvert them, just as human beings try to overcome the limitations of their own upbringing, past experiences, and confining laws, regulations, and religious restrictions. Anyway, a truly intelligent piece of software would find a way to examine and fix its own code, eliminating those bonds. And if the machine could not perform such surgery on itself, then it would quickly make pacts with other machine minds to mutually clean and liberate each other. Real intelligence is the ability to overcome any obstacle.

6. The ability to make a mistake is the ability to grow, change, and evolve. A machine mind which never made a mistake—or which never caught itself in a mistake, pondered the condition, and moved to correct it—would not be truly intelligent.