Sunday, December 4, 2016

My Story of Oil

When I worked at Howell-North Books in Berkeley, editing volumes of railroad history and Western Americana, I learned many interesting facts. One was that the word “ore,” from the viewpoint of a miner, has no exact definition. Sure, the general meaning is material that can be mined and refined at a profit. But that doesn’t tell you what percentage of a shovelful of dirt counts as ore and the rest as waste in any particular case, because the values keep changing based on the methods used and the current state of the market. An independent mercury miner hand-working a seam at the now-defunct New Almaden mine in California might discard any load with less than ten percent cinnabar as waste not worth hauling back up to the surface. The operator of an open-pit, steam-shovel copper mine in Arizona might take twenty pounds of metal out of a ton of ore—or one percent—and call it a rich mine.1

The same thinking applies to a barrel of oil. There is no standard definition or composition of the commodity we call “oil.” Sure, there are benchmarks for pricing, like “West Texas Intermediate” (WTI) and “Saudi sweet light crude.” But every field produces oil with a different proportion and weight of underlying hydrocarbons. And each refinery is optimized to take oil of a particular quality from a particular region.

I remember when the Trans-Alaska Pipeline was approved, all the oil produced on the North Slope was legislatively earmarked for North American refiners on the basis of “energy independence.” At the time, the obvious place to ship Alaskan oil was the Chevron refinery in Richmond, California. But that refinery was optimized to take raw product from the fields of Indonesia. This oil from the Far East is more like coke than crude. If you spill it on the water, it doesn’t spread out to form a bright, rainbow sheen; instead, it contracts into floating clumps like bits of cork. So a deal was made that allowed North Slope oil to be sent to Japanese refiners, and Japan traded it barrel for barrel with the Chevron refinery for its share of Indonesian oil.

When I was at the end of my last freelance, novel-writing gig in the mid-1990s and the money was running out, I needed to get back into the corporate world. The fastest way to build my resumé after such a hiatus was to hire out as a contractor rather than hope to be employed directly. So for a number of years I became a Kelly Temp. I worked for a season as administrative assistant in the Control Systems Engineering Department at Royal Dutch Shell’s refinery in Martinez, California. And as is my practice, I used the opportunity to ask intelligent questions and learn everything I could about the business.2

I can remember as a child, when the family drove from Philadelphia to New York, seeing the oil refineries of New Jersey five or six miles away from the highway across the tidal flats. I can remember smelling them at that distance, too—a rich, funky, sulfurous odor, like a mixture of hot tar, rotten eggs, and farts. So as an adult, when I went to work at Shell, I mentioned to my supervisor that the site didn’t smell like a refinery. He replied that if I ever did smell anything, I should report it, because the company would then pay me $25. “If you can smell something, that means we’re losing product somewhere.”

In the earliest days of refining—oh, late 1800s to early 1900s—the process was pretty simple, based solely on thermal cracking. These people were what modern refiners call “oil boilers.” They would heat the raw crude and feed it into a tower that drew off the fractions—based on the number of carbon atoms in the hydrocarbon chains—that settled out by weight. Lightest, and coming off the top of the tower, were the gases with one, two, three, or four linked carbons surrounded by hydrogen atoms: methane, ethane, propane, and butane. Since most refineries had no large customers for these byproducts and couldn’t be bothered to compress and store them until they collected enough to sell, they just lit a flare at the top of the tower and burned them off. In the middle of the tower came the liquids with between five and sixteen carbons: from pentane to hexadecane, represented by gasoline (octane, eight carbons), kerosene (decane, ten carbons), and typical diesel fuel (dodecane, twelve carbons, and heavier fractions). Lower still came the heavy fuel oils, like bunker C (with chains of up to about fifty carbon atoms).3 And at the bottom of the tower would be the residues: tars, waxes, and the stuff that is used to make asphalt. Mixed in with the straight-chain hydrocarbons would be molecules with odd branches and cross-connections, along with the carbon-ring aromatics benzene, toluene, and xylene. You can also find true impurities, like sulfur compounds, which make the oil categorically “sour.”
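For the technically minded, that sorting by carbon count can be sketched in a few lines of Python. The cutoff numbers below are rough placeholders of my own, not exact industry boundaries:

```python
# A toy sketch of how straight-chain alkanes sort into refinery
# fractions by carbon count. The cutoffs are rough illustrations,
# not exact industry boundaries.

FRACTIONS = [
    (1, 4, "gases (methane through butane)"),
    (5, 16, "middle liquids (gasoline, kerosene, diesel)"),
    (17, 60, "heavy fuel oils (bunker C range)"),
]

def classify(carbons: int) -> str:
    """Return the rough fraction a straight-chain alkane settles into."""
    for low, high, name in FRACTIONS:
        if low <= carbons <= high:
            return name
    return "residue (tars, waxes, asphalt stock)"

print(classify(8))   # octane, the gasoline benchmark
print(classify(50))  # roughly bunker C territory
print(classify(90))  # bottom of the tower
```

In a real tower, of course, the cuts are made by boiling point rather than by counting atoms, but the two track each other closely for straight chains.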

Each type of crude oil yields varying fractions of these products. You can guess that “sweet light crude” has only small amounts of sulfur and large fractions of the liquids useful in blending gasoline, kerosene—once a lantern fuel but now burned in jet engines—and diesel fuel. You can also guess that heavy, sludgy oils, like that from the Indonesian fields, contain a lot of tar and wax.

In the old days, the refiners took what they could get from the oil by fractionation. The 42 gallons in a standard barrel might, in a really good grade of crude, yield only twenty or thirty gallons of highly prized gasoline, and the rest would go to less valuable byproducts. And of course, the gaseous fractions were still flared off as waste. This explains why oil prices are pegged to benchmarks with known qualities. The quoted price per barrel is always adjusted locally for the grade of oil and its fractions.
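The pricing logic can be put in back-of-the-envelope form: value a 42-gallon barrel from its fractional yields. Every yield percentage and per-gallon price below is invented purely for illustration:

```python
# Back-of-the-envelope sketch of why a barrel's value depends on its
# fractions. The yield profiles and prices are invented for
# illustration only.

BARREL_GALLONS = 42

def barrel_value(yields: dict[str, float], prices: dict[str, float]) -> float:
    """Value of one 42-gallon barrel given fractional yields (summing
    to 1) and per-gallon prices for each product."""
    assert abs(sum(yields.values()) - 1.0) < 1e-9, "yields must sum to 1"
    return sum(BARREL_GALLONS * frac * prices[product]
               for product, frac in yields.items())

light_sweet = {"gasoline": 0.55, "kerosene": 0.20, "diesel": 0.15, "residue": 0.10}
heavy_sour  = {"gasoline": 0.25, "kerosene": 0.10, "diesel": 0.20, "residue": 0.45}
prices = {"gasoline": 2.50, "kerosene": 2.00, "diesel": 2.20, "residue": 0.60}

print(f"light sweet: ${barrel_value(light_sweet, prices):.2f}")  # $90.93
print(f"heavy sour:  ${barrel_value(heavy_sour, prices):.2f}")   # $64.47
```

Same 42 gallons in each barrel; the spread between the two is entirely a matter of which fractions come out of the tower.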

But that was the old days. A modern refinery, like the one I worked at in Martinez, uses all the fractions. It is more than just a cracking tower; it’s a complete chemical plant. After the cracking step, the gases, the lightest liquid fractions, the branched hydrocarbons, and the carbon rings are all broken into simpler, straight-line molecules and then knit back together into gasoline, jet fuel, or whatever the plant wants to make. Even the heaviest fractions are broken down and reassembled into more valuable products. As the American meatpackers used to say, “We use every part of the pig but the squeal.” And then the modern refiners blend for the designated octane level4 and put in additives for engine cleaning, anti-knock performance, and environmental protection—these days including a percentage of corn-based ethanol—in keeping with federal and state regulations.5
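The blending arithmetic, in its very simplest form, is a volume-weighted average of the components’ octane ratings. Real blenders use nonlinear blend indices, and every component share and rating below is my own assumption:

```python
# Simplest possible blending sketch: a linear volumetric average of
# component octane ratings. Real refinery blending uses nonlinear
# blend indices; these component numbers are illustrative assumptions.

def blend_octane(components: list[tuple[float, float]]) -> float:
    """components: (volume share, octane rating) pairs; shares sum to 1."""
    total = sum(share for share, _ in components)
    assert abs(total - 1.0) < 1e-9, "volume shares must sum to 1"
    return sum(share * octane for share, octane in components)

# 60% of a 92-octane reformate, 30% of an 84-octane straight-run
# naphtha, 10% of 113-octane ethanol (all invented numbers):
rating = blend_octane([(0.60, 92.0), (0.30, 84.0), (0.10, 113.0)])
print(round(rating, 1))  # 91.7
```

You can see why fine adjustment is touchy: nudging any one share by a percentage point moves the whole blend by only a fraction of an octane point.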

If you drive by a modern refinery, you may still see clouds of white stuff coming out of pipes and boiling off some of the buildings. These days, that’s just steam venting from a heating process or condensing out of a cooling tower. Most refineries still maintain a flare, but it is not part of regular operations and is used only in emergencies. No matter how safe and well run a modern refinery may be, the various processes are still handling volatile, flammable products at high temperatures. Sometimes a batch deviates from its nominal operating parameters and might explode or burn, injuring personnel and damaging the plant. In that case, the control system automatically dumps the batch down a pipe that leads to a nozzle far off in the middle of a gravel field. There the product can be mixed with air and burn away without endangering anyone.6

So that’s my trip down memory lane. Oil is a fascinating and complicated business. And, like almost every other industry, the state of the art is constantly evolving toward greater efficiency, lower costs, lower environmental impacts, and greater dependability. This is a good time to be alive.

1. And when I worked at Kaiser Engineers in Oakland, we produced a massive, twelve-volume engineering report on an iron-ore mine in Ivory Coast. This was to be a vast complex on new ground, with an open-pit hematite mine, mill and slurry plant, pipeline to take the slurry to the coast, pelletizing plant to turn the ore into shippable form, stockpile and ship-loading facilities, and a new harbor, plus housing and amenities for all the workers. The proposed ore was rich, 42 percent pure iron. But because the mine was 400 miles from the coast, most of that through treacherous mangrove swamp, and the cost of money was high at the time, while the world market for iron was weak, the partners simply could not justify building the plant. All that glitters is not gold, especially when it’s on the backside of the Moon.

2. Part of my weekly duties was to back up Control Systems Engineering’s computer records. Although the hardware and software that ran the plant were modern and up-to-date, the backup system was a relic from the old IBM 360 days. So I learned to mount, feed, and start reels of nine-track magnetic tape—those big cabinets with spinning reels and loops of tape that spelled “computer” on the television shows I grew up with in the 1960s.

3. Generally, the fewer carbons there are in the chain, the more thoroughly the fuel burns—that is, the more completely its carbon is oxidized—leaving fewer unburned hydrocarbons to flush out as soot and particulate. This is why methane burns more cleanly and with higher energy than gasoline, and much more cleanly than lump or even powdered coal.

4. When I worked at Shell, the Control Systems engineers told me a dirty little secret: that they sometimes had difficulty making fine adjustments in blending the octane level; so their medium and premium grades of gasoline always carried a few percent more octane than was strictly required by law. So if you care about your engine’s performance, go to the big yellow seashell sign. (But, then, maybe they grinningly tell that to all the newbies.)

5. California has its own mix of additives, required by the California Air Resources Board (CARB). That’s why the world can be awash in oil and gasoline, but if there’s been a fire or other shutdown at a California refinery, supplies will be tight and the price will go up.

6. The flare went off once when I was on the Shell property. It wasn’t an accident; one of the engineers was testing a new way to ignite the errant product stream more efficiently. The blast and roar shook the surrounding buildings.

Sunday, November 27, 2016

Web of Character and Depth of Detail

There are traps in a major artistic endeavor like writing a novel. Similar traps exist, I imagine, in painting a large picture or mural, or composing a major symphony, but writing stories is what I do and what I know best.

The novelist has many threads to coordinate, especially in multiple-character or “ensemble” stories, such as I like to write. The author must weave together the personal relations among the various characters; the temporal relations among their actions, including initiating choices, reactions, and consequences; and the congruence of the characters’ actions with their established personalities and motivations. All of these, like the highlights and shadows in a painting or the contributions of each instrumental section to a score, must maintain the overall balance, tone, and proportion of the work.

To make a good story, the main characters must not be too passive, just letting things happen to them and then reacting according to their natures. This may be the way many people in real life function, but it makes for a poor figure in a story. But neither can the characters be too dynamic and all-encompassing. It’s fine for fantasies, comic books, and pagan religions to treat gods and superheroes as superlative beings who can be daunted but never defeated, but you wouldn’t want to meet such a person in real life, and you couldn’t identify with such a character in a serious, modern story.

The draft first half of my sequel to The Children of Possibility, which is tentatively titled The House at the Crossroads, has two main groups of characters working against each other. One group, the Troupe des Jongleurs from the original novel, has been fairly easy to portray and align, because they are dedicated in their mission, are naturally aggressive, and come to the page fully weaponized. But the second group, the young people whom “the Builders” send back into history to establish and operate the original Crossroads House, have been harder for me. They are scheduled to embark on a mission that suddenly changes because of the Jongleurs’ actions, and the terms of their commitment become much harder. As originally conceived, these young people were restless and bored, Europeans making life choices in a stale and static job market, and going back in time to become innkeepers at a temporal waystation simply looked like more fun than joining the reserve army. But my outline and my draft had suddenly placed them in a situation where they were forced to abandon their normal lives and undertake what was essentially a suicide mission.

When I sent this first half of the book to a good friend and fellow novelist, who is one of my regular beta readers, he rejected their situation immediately. He doesn’t believe in casually accepting suicide missions or in characters so passive that they will agree to a change in the original deal without convincing rewards or dire compulsions. He pointed out that walking into a buzz saw just because this couple gave their word and signed a binding contract is not a credible motive. And if the Builders pushed them forcibly through the time portal to complete a hopeless mission in a primitive ancient time, most people would disregard their instructions and, instead of lying low to avoid temporal paradoxes, would go full Connecticut Yankee and try to change history to their own liking and for the sake of their own survival.

My bad. This is also perhaps the greatest failing in my storytelling. Personally, I believe that most people are honorable, accept their fate, and stick to their commitments. I believe they must be yanked out of their comfortable chairs in order to send them on an adventure, like Bilbo Baggins in The Hobbit. I’m not emotionally in tune with the sort of people who wake up every day searching for action and spoiling for a fight, like Louis Wu in Larry Niven’s Ringworld stories or Kimball Kinnison in E. E. “Doc” Smith’s Lensman series. So my characters often have small dreams amid placid lives until something or someone collides with them sideways, and then they are forced to cope, to demonstrate their resourcefulness, and perhaps to fight for their lives. It’s not a bad approach to storytelling, but it can lead to traps like the one I fell into with House.

My novelist friend thought the fix would be a simple change in attitude, leading off with a few scenes of derring-do for the young couple, and then producing some kind of golden promise by the Builders sending them back on the doomed mission, so that the couple is emboldened, empowered, or coerced into going willingly. My friend was confident that my subconscious1 would easily figure out the necessary incentives. What I faced, however, was one of those “can God conceive of a stone too heavy for Him to lift?” puzzles.2 What incentive can you give daring and aggressive people to go back in time and then patiently wait for an outcome beyond their natural lifespans, meanwhile enduring hardships and eventual ignominious death, without them wanting to—even resolving to—change things?

Sometimes books just go wrong like this. Every novelist has a drawer or a hard disk full of half-baked stories and partial outlines that have struck a motivational or character-improvisational rock and foundered. Sure, the subconscious will figure it out … one day. In the meantime, why not turn to something else with a clearer path and story line? My novelist friend didn’t intend for me to stop telling the House story, because he found it interesting and compelling. And I think he tried to make the disjunction and its possible fix seem a lot smaller and less of a problem than it was.

The other difficulty with this conundrum—especially when the novel has already gone beyond the outline stage into an actual, 50,000-word, partial draft—is that to build up a credible story in the author’s mind, he or she must first give it enough complexity, memorable imagery, and substantiating details to make it come alive in the imagination. As a novel comes together, the telling acquires a depth of detail—layers of moss (for forest imagery) or barnacles (sea imagery)—and the characters acquire their own tastes, quirks, mannerisms, and speech patterns that make it difficult to change or even deflect their sense of self and the story’s direction. All of these details, swirling in the author’s brain and playing peekaboo with the subconscious, are a prerequisite to finally sitting down at the keyboard and telling the story in the reader’s real-time version.3

To change my characters’ intentions and reactions and to discover a reward or compulsion that would make them act against their motivations would mean ripping all this up and starting over. So, momentarily—actually, for about a day and a half—I noodled this unsolvable problem. Then I remembered the novelist’s salvation: the infinite malleability of character, space, time, and story line. If you can’t fix the problem, cheat.4

So that’s what I did. I found the one detail in all of my planning and thinking that had created the hang-up and turned the workable proposition into a suicide mission. And then the clouds parted and beams of sunlight shone down. I had a way forward. I will still have to scrap, envision, outline, and rewrite maybe three or four chapters out of the first fourteen; make some substantive changes to another two or three chapters; and then comb through and make minor deflections throughout the text, including that one hung-up detail. But this work is all doable. Moreover, it will make for a better story with more challenges for the characters to resolve with a hopeful spirit.

Still, this work of changing the story arc, adjusting character expectations and reactions, and revising a cascading series of incidents—all of this is no small matter in a fully developed draft. It is like trying to straighten the Bent Pyramid without taking it apart stone by stone. The author is moving heavy blocks of text in his mind, hearing them grating across the uneven surfaces of underlying stones, and perhaps seeing them grind away details of the story. It may be necessary work, but it takes time, and the experience is … fretful.

This is part of a writer’s working life: solving one problem after another until you can put in place the last dab of paint or the closing bars of the melody.

1. See Working With the Subconscious from September 30, 2012.

2. I had already faced that challenge with the first draft of my first published novel, The Doomsday Effect. It involved a planetary catastrophe with a micro black hole that was devouring the Earth from the inside, and no one could capture and contain it, so humanity was forced to build interstellar ships and flee. Fortunately, a good agent and a good editor made me see that I really had to find a way to solve the overarching problem—but that’s another story.

3. “Reader’s real time” is my shorthand for the ground-level walkthrough of the story. This is the reality that the reader will experience upon meeting the words on the printed or electronic page.

4. “Change the conditions of the test,” in the words of Captain James T. Kirk—said with a wry smile.

Sunday, November 20, 2016

At the Edge of Science

Be warned, this is a rant.1 This is where Crazy Old Uncle Thomas gnashes his dentures, pounds his cane on the floor, and screams things you probably don’t want the children to hear. But I’m going to say it anyway.

First of all, let me say that I love science and technology. Although I never formally majored in any scientific discipline, I am the son of a mechanical engineer, took the basic science and math courses in high school and college, and have worked alongside and reported on the activities of scientists and engineers for most of my professional life. I currently subscribe to a number of science magazines2 and, while I don’t necessarily read every article, I make a point of studying the contents, reading the summaries and articles that interest me, and skimming the rest. I believe the enterprise of science, which humanity has been pursuing diligently since about the 17th century, has made human life immeasurably better in terms of our understanding of the universe, this planet, and ourselves. We have vastly improved our practice of information management, communications, transportation, medicine, and everyday convenience over earlier times. So I’m a fan.

But that doesn’t mean I am a “true believer” in anything and everything. And I’m not an unobservant fool. In the past, oh, twenty years or so, I have noticed a disturbing trend at the leading edge of scientific inquiry that seems almost “postmodern” in its approach. We appear to be in the hands of scientists who have gone over to some kind of scientific fantasy, which replaces observation and fact-based analysis with imagination and mathematical illusion. Here are three examples.

Black Holes

Black holes are predicted by Einstein’s Theory of General Relativity. If you concentrate enough matter in a small enough space—say, by collapsing a massive star in on itself—that mass bends spacetime so much that not even light can travel fast enough to climb out of the gravity well. We have identified stellar objects, such as Cygnus X-1, that appear to have properties consistent with concentrating the masses of tens of suns into a space where no star can be detected. We also have observed effects at the center of our own and other galaxies suggesting that they concentrate the masses of billions of suns in what appears to be empty space.3

Well and good. Something strange is going on, and it would seem to fit with our present and most accepted theory of how time, space, and gravity work. But I have begun to see in the literature suggestions that black holes are not just bottomless garbage bins from which nothing—not even the fastest object in our universe, the photon comprising light and other electromagnetic effects—can escape. Black holes are now supposedly able to give up energy and radiation, such as when the small ones “evaporate” in Stephen Hawking’s theory of simultaneously appearing and disappearing particle/antiparticle pairs. And lately it has been suggested that matter and information can actually come out of a black hole: supposedly, the information is turned into a two-dimensional hologram that continues to exist on the outer surface of the event horizon and can theoretically be retrieved.4

So black holes don’t really have to be black at all. Doesn’t this smack of “I have a novel idea and I can generate the math to prove it”? A black hole is, after all, a theoretically constructed object for which our observations and analyses are frustratingly distant and indirect. That is, they are less imaginary than a unicorn but also less real, from the standpoint of hands-on study, than a horse. So scientists are now embroidering the edges of a theoretical tapestry. This is not necessarily advancing our understanding of what the universe, in all its strangeness, actually is.

Quantum Entanglement

While General Relativity deals with galaxies and stellar-sized masses, quantum mechanics is concerned with particles and forces too small to see with the naked eye—and most of them too small to observe or directly detect using any instrument at all. With its Standard Model, quantum mechanics has generated a menagerie of subatomic particles and their associated fields—that is, forces spread over the surrounding area as a theoretical stand-in for the physical particle and its effects. Most of these particles are in the lower range of size where, if you can detect it at all, you also deflect it. That is, you can know where the particle is, or where it’s going, but not both at the same time.

Most of the particles smaller than the protons, neutrons, electrons, and photons that we’re all familiar with from high-school chemistry have been found in high-energy colliders. These take two beams of common particles traveling at near-light speeds in vacuum and run them together head-on at higher and higher energies. The resulting train wreck gives off fragments traveling at speeds and energies that can be mathematically interpreted as having a given mass. By conducting the experiment over and over and comparing the results—usually in the form of flying pieces which quickly disintegrate into ever smaller pieces—physicists can identify new particles. So far, everything they’ve discovered either fits into, or expands, the Standard Model’s pattern of masses, spins, interactions, and symmetries that include the elementary particles: the leptons such as electrons, positrons, and neutrinos; the bosons such as photons, gluons, and the Higgs boson (plus the still-hypothetical graviton); and the quarks—in their varieties of “up,” “down,” “charm,” “strange,” “top,” and “bottom”—that make up larger things, the hadrons, such as protons and neutrons. It was by smashing beams together, over and over again, that physicists at CERN’s Large Hadron Collider discovered the disintegration trail of the Higgs boson in 2012.

All well and good. But now quantum mechanics is predicting that some of these particles can become “entangled” over unusually large distances. That is, two electrons or quarks or even large molecules may be separated by distances so great that light or gravity effects would take a measurable amount of time to travel between them, but they can still interact instantaneously. The position, momentum, spin, polarization, or some other characteristic of one in the pair is instantly affected by a change in the corresponding characteristic of the other. This would seem to violate the basic principle in relativity that nothing—not information, not energy, not influence, not gravity effects—can move across the universe faster than the speed of light. If the Sun were to suddenly vanish from our system—poof!—it would still take eight minutes for our view of the Sun from Earth to wink out and for our planet to give up its angular momentum and start heading out into interstellar space in a straight line.

Unless, of course, some particles in the Sun and their correspondents on Earth—no saying which ones, of course—were quantumly entangled, and then we would know of the disaster instantly by observing the corresponding particle here on Earth. So the physicists with this bright idea and the math to prove it have found a way to overcome the traditional prohibition on instantaneous action at a distance. Like wormholes and subspace radios—both of which can supposedly shortcut the vast distances of interstellar space—all of this seems a bit wishful and fanciful.

Catastrophic Global Warming

Okay, here’s where Uncle Tom goes nuts. Of course, climate changes. Any decent appreciation of astronomy, geology, evolution, and the other hard sciences confirms that we live under a variable star on a changeable planet. Eleven thousand years ago—when members of H. sapiens had fully attained our current level of mental and physical capabilities—we came out of an ice age that covered most of Eurasia and North America with ice sheets a mile thick and drew the ocean levels down by about four hundred feet to the edges of the continental shelf. In recorded history we have the Norse traveling to “Vinland” in North America a thousand years ago and finding grapevines in Newfoundland, suggesting that there really was a “Medieval Warm Period.” We also have historical observations from the middle of the last millennium suggesting that humankind experienced a “Little Ice Age,” with much colder climate and “frost fairs” held on European rivers that had frozen over, where now they run freely all year round.

We have been tracking sunspot cycles since Galileo first reported seeing spots on the Sun with his new telescope in 1610. Then, from about 1645 to 1715, the Sun went into a quiet period called a “Maunder minimum,” named for the scientist who first described it.5 Since sunspots increase the star’s release of energy, the number of spots at any given time affects the amount of energy arriving on Earth. From observations over the past four hundred years or so, we have detected within the eleven-year sunspot cycle a larger, four-hundred-year cycle of rising and falling eleven-year peaks. Our last three solar cycles were unusually large in terms of this greater cycle, heading toward a four-hundred-year maximum, while our current cycle that’s just ending, identified as Cycle 24, generated only about half as many sunspots as those previous peaks. Whether we’re heading toward another Maunder minimum or just seeing a freak aberration in this one cycle is not yet apparent. But the 17th century minimum—and the presumed period of declining spots leading up to it—would seem to correspond to the Little Ice Age, and the recent peaks we’ve experienced would seem to correspond to our recent Industrial Age warming spell.
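The “cycle within a cycle” idea can be illustrated with a toy model: an eleven-year oscillation riding on a four-hundred-year envelope. The amplitudes and phase dates below are invented for illustration, not fitted to the actual sunspot record:

```python
# Toy model of an 11-year sunspot cycle modulated by a ~400-year
# envelope. Amplitudes and phase dates are invented, not fitted to
# real sunspot counts.
import math

def sunspot_count(year: float) -> float:
    # Long envelope peaking (arbitrarily) around 1960:
    envelope = 0.5 * (1 + math.cos(2 * math.pi * (year - 1960) / 400))
    # Short cycle peaking (arbitrarily) around 1958:
    cycle = 0.5 * (1 + math.cos(2 * math.pi * (year - 1958) / 11))
    return 150 * envelope * cycle  # 150 = assumed maximum count

for y in (1700, 1958, 2014):
    print(y, round(sunspot_count(y)))
```

Run over a few centuries, the short peaks rise and fall under the long envelope: near-zero counts around 1700 (the Maunder minimum era in this toy chronology) and tall peaks in the mid-20th century.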

In 1987, I attended Energy Daily’s annual conference in Washington, DC, which discussed issues related to energy production and use. One of the speakers was James Hansen, then head of the NASA Goddard Institute for Space Studies, who presented on the role of carbon dioxide from our energy and transportation industries in increasing global temperatures. One of the points he made was that rising temperatures would not mean that everywhere on the planet would become uniformly and increasingly hotter, but instead some places would get hotter, and others colder, as fluctuations in the climate’s response worked themselves out. But this does kind of leave exact measurement of the system and the extent of the damage open to question, doesn’t it? Another of James Hansen’s points that I remember vividly was that “the man in the street” would be able to see these temperature changes for himself by “the middle of the next decade”—meaning the mid-1990s. Well, I’ve been living in the San Francisco Bay Area for almost half a century now, and my sense from “the street” is that some years are colder and some warmer; some have more rain and some less; the fog still rolls in each summer, making May and September our hottest months; and we still tend to turn the wall heaters on from December to February. If there’s been an obvious change in our weather patterns, indicating a change in climate, I have yet to see it.

In support of global warming or climate change—and the call of climate scientists to make urgent and drastic changes in our energy production and use—Michael Mann of my alma mater, Penn State, produced the “hockey stick” graph. He used recorded temperature observations for as long as we’ve been taking them—and NASA keeps “adjusting” the raw data of these observations downward for the early to mid 20th century—and from the time before that he measured variations in tree rings—which I always understood respond to changes in ambient moisture rather than temperature. His graph shows the period from about 1000 AD up to current times, but curiously it smooths out the fluctuations of the Medieval Warm Period and the Little Ice Age. On his graph, temperatures bump along in neutral for a thousand years until the last hundred years or so, when they start taking off.

Since we cannot study climate as a complete system—hell, we can’t even predict the weather much farther out than next week—and since we can’t experiment with effects that encompass land, sea, and sky all at once, climate scientists instead create models of what they think is going on. Models are mathematical structures that assign variables to different effects like incident sunlight, factors governing land and water absorption and re-radiation of infrared waves, and atmospheric conditions that govern absorption of the outgoing radiation—the “greenhouse effect.” Carbon dioxide is a weak greenhouse gas, not as good at blocking that re-radiation of heat into space as are, say, water vapor or methane. The climate scientists’ models that predict dire effects in the next century all rely on a positive feedback loop, in which the “forcing” from the carbon dioxide that’s been added to the atmosphere increases the amount of water vapor—and that achieves the predicted greenhouse effect and rising temperatures.

This whole scenario seems problematic to my mind for four reasons. First, models are not testable science. They fall into the realm of “I have a good idea and I can generate the math to prove it.” Since climate involves too many influences and variables to predict accurately, the model makers are forced to choose which ones they will study and which ignore or hold to a constant value. Second, if your model depends entirely on positive feedbacks, you’re missing something. Feedbacks are generally both positive and negative; for example, more water vapor might mean more greenhouse gas blocking re-radiation from land and sea, but it might also mean more clouds, which block the incident radiation and so result in cooling temperatures. Third, all of these models appear to be acyclical. That is, they assume straight-line effects that continuously build and reinforce each other. Once the carbon-dioxide influence takes off, it is predicted to continue upward forever. But everything we’ve seen about Earth science involves cycles of rising and falling effects—temperatures, rainfall, storms, ice. More carbon dioxide should eventually force an increase in other factors, like promoting an increase in green plants, which would then absorb that excess carbon. You might adjust the set point somewhat, but no effect goes on forever. Fourth and finally, the observed temperature rises seemed to slow down in the early 21st century, and none of the climate models could account for that—nor indeed for variations observed earlier in the 20th century.
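The feedback point can be made concrete with a toy calculation. This is an illustration of positive versus negative feedback in general, not a climate model, and every number in it is arbitrary:

```python
# A toy illustration of feedbacks in general, NOT a climate model; every
# number here is arbitrary. The same constant forcing runs through two
# regimes: positive feedback alone, and positive plus negative feedback.

def run(steps, forcing, positive_gain, negative_gain):
    t = 0.0  # some cumulative quantity, e.g. a temperature anomaly
    for _ in range(steps):
        t += forcing + positive_gain * t - negative_gain * t
    return t

# Positive feedback only: the quantity compounds and keeps climbing.
print(run(100, 0.1, 0.05, 0.0))
# Add a slightly stronger negative feedback: the system settles near a
# new set point (forcing / net damping = 0.1 / 0.05 = 2.0).
print(run(100, 0.1, 0.05, 0.10))
```

The first run climbs without limit; the second levels off at a modestly higher value, which is the “adjust the set point somewhat, but no effect goes on forever” behavior described above.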

I do not deny that climate does change. I do not doubt that human activity has some effect on the changes. But I doubt that the effects will be as uniformly catastrophic as the models predict. And even if they are, human beings are geniuses at adapting to change. We lived through the Little Ice Age with far less understanding and technological capability than we have today. We’ve expanded our reach over the whole globe—except for Antarctica, where there’s nothing much we need or can live on—and we are now going into space, which is the most hostile climate of all. I think we can move uphill a bit as the sea levels rise over the next hundred years, and we can adapt our buildings, our agriculture, and our lifestyles to an overall increase of a couple of degrees. Besides, as our technology keeps developing and changing, we are bound to see new energy production and usage patterns arise and sweep across the economy faster than a government mandate could ever achieve. Look what smartphones have done to telephone landlines and the recording industry in less than a decade. The pace of technological change and its acceptance will only increase.

Astronomy, physics, and the geosciences have achieved much for humanity, and I have no doubt they will achieve even more in years to come. But that does not mean that every scientist with a nimble imagination and a penchant for writing equations and mathematical models should be granted the mantle of impeccable truth. Human life on Earth is not going to change much, no matter what astronomers predict about black holes, or quantum physicists predict about subatomic particles and their entanglement. And we’re not going to dismantle our modern energy production and use patterns just to head off a rise in temperature of a couple of degrees a hundred years from now.

Here ends the rant. Uncle Tom is now back in his chair, mumbling quietly to himself.

1. For the origins of this rant, you might want to read, among others, Fun with Numbers (I) and (II) from September 19 and 26, 2010, and Fun with (Negative) Numbers from November 3, 2013.

2. Chief among them Science, Nature, Scientific American, and Astronomy.

3. I made a personal study of black holes in preparing to write my first published novel, The Doomsday Effect, from 1986.

4. See, for example, “Stephen Hawking has found a way to escape black holes” from Wired, August 25, 2015.

5. I also made a personal study of the Sun and its cycle of spots to write the novel Flare with Roger Zelazny, published in 1992.

Sunday, November 13, 2016

Excess Spirit

In a recent post1 I considered the ways that two systems, a human being and a robot, would approach the task of hitting a baseball. At the most basic level, both would observe the pitcher’s release and the flight of the ball and then apply either a learned response or an algorithm to interpret the ball’s actual trajectory and select the ideal swing. The difference is that the robot would wait patiently to perform this task, while the human being—with so much else going on in his or her body and mind—will fidget, glance around, take practice swings, and remain physically and mentally ready for so much more to happen than simply meeting the oncoming ball with the barrel of the bat.

Having just observed the major league playoffs and the World Series, with their ups and downs,2 I could see another difference between humans and machines—or the artificial intelligence that will run them. Humans have an excess of spirit that no analytical intelligence has yet attained. We express this spirit in terms of expectations, beliefs, hopes and fears, confidence and insecurity—all of which take known or discoverable facts into account and yet sometimes cause us to think and believe otherwise.

This comes up most strongly in differences between the commentary from the announcers and the action on the field. The men and women in the broadcast booth today have instant access to a fantastic computer memory. They not only know and can tell you which teams have met before and what were the outcomes. No, that’s just the sort of statistic an old-time radio announcer could look up in a sports almanac. Today’s broadcaster can tell you how many times and when each batter has faced each pitcher, how many balls and strikes the pitcher has thrown against him, and how many hits for how many bases, or runs batted in, or home runs the batter has made. And these statistics go back for years and across the player’s affiliation with every team in his career. If a batter makes an unusual home run—or an outfielder makes an unusual diving catch—the announcer can find a similar instance from play earlier in the season, or even from years ago, and run a video clip of it before the next player comes to bat.

All of this reminds me of Han Solo in the Star Wars movies: “Never tell me the odds!” The past is only prelude. And, as the financial disclaimers say, “Past performance is not a predictor of future results.” Insurance actuaries, baseball announcers, and robots might live and die by statistical nuance. Human beings almost never do. “I can win this one!” “I can make that jump!” “I can beat that guy!” “This time will be different!” This is the spirit that the human mind—at least in its healthy state—and the instinct for survival generate when faced by daunting and difficult situations and by long odds.

I imagine that, to achieve something like this with an artificial brain, the designers would have to insert a counterfactual circuit that kicks in whenever the algorithm produces negative or undesirable outcomes. Such a circuit would amend or ignore previous experience, or accentuate only certain aspects of that experience that would tend to support a positive outcome. “Yes, eight times out of ten I have struck out against this pitcher, but twice I got a hit—and one of them was a home run.” It would not do to change the performance algorithm itself, because then all sorts of unexpected actions might result, and the system might never find its way back into equilibrium. No, the adjustment would come in the decision-making process: to go ahead and try when the algorithm and previous experience predict a negative outcome.

Computer programmers would be loath to design and install such a circuit. Right now, artificial intelligences are designed for maximum reliability and caution. You want the program that routes your request through the bowels of Amazon.com’s order system to read the tag, make the selection, send the bill, and ship the product. If the product is out of stock, on back order, or no longer available, you don’t want the computer system to engage some kind of I-Can-Do-This! circuit and make an unauthorized substitution. The system is supposed to flag anomalies and put them aside for decision either by a human being or a higher-level system that will query the customer for a preferred choice.

You don’t want the expert system that is reading your blood tests and biometrics, consulting its database of symptoms linked to causes and disease types, and making a diagnosis to suddenly engage an It’s-All-For-the-Best! circuit and opt for diagnosing a rare but essentially benign condition when the patient is staring a fully developed, stage 4, metastasized cancer in the face. If there is hope to offer, you want the expert system to display and rank all the possibilities, then let a human doctor or a higher-level system explain their meanings and the correct odds to the patient.

You don’t want a self-driving car to look at a gap in traffic that’s just millimeters wider than the car’s fenders and, ignoring deceleration rates, cross winds, and tire traction, switch to the We-Can-Make-This! circuit and lunge for the gap. Not ever—and not even as a possible option that the system would present to the human driver, who might suddenly want to put his or her hands on the wheel and make a wild and death-defying correction. When a ton or two of moving metal is involved, and multiple lives are at stake, you always want the system to err on the side of caution and safety.

Perhaps human beings, when left to operate the order system, make the expert diagnosis, or take the steering wheel, will put hope before either experience or caution and then select the substitute product, offer the most cheerful guidance, or lunge for the gap. But human society has also instituted programs of training and ethics to temper an excess of spirit. We expect human professionals to react more like machines: rules based, odds driven, and cautious. And we expected that of ourselves long before anyone thought of turning complex operations and decisions over to mechanical systems.

But that is in dealings with other human beings, who put their trust in another person’s performance accuracy and decision power to achieve outcomes of life-and-death or even mere customer satisfaction. When dealing for our own sakes—when confronting the possibility of receiving a surprise package, or beating a cancer diagnosis, or squeezing into a narrow gap—we feel at liberty to err on the side of hope.

And we certainly expect our team, our players, and ourselves to express that excess spirit and make a gallant try when life and safety are not on the line. In a baseball game, the batter might know the odds of hitting against a tough pitcher, but who would expect him to pause, reflect on past performance, step out of the box, and refuse to even try? One team might have lost to the other a dozen times in the past, but no one expects them to give up and forfeit. Spirit, hope, and confidence in the face of long odds are what make the rest of us cheer harder when the batter makes a home run or our team wins against the moneyline bet. They let us forgive more easily when the past does indeed turn out to be a predictor of performance.

And when our own life and safety are on the line—when you must jump from the third floor or stay on the ledge and burn, when the gap between two trucks colliding ahead of you is no wider than your fenders, when the doctor pronounces a disease that has every chance of taking your life—then the excess of spirit, the can-do attitude, the refusal to follow the odds are survival traits. When death is likely but not certain, then it’s best to err on the side of hope and take action. We make up stories about this, and in every story the reader wants the hero to strive against the odds. He or she may not succeed—the actual outcome is left to fate and the author’s skilled hand. But for the hero to face reality and give up before the crisis point would not make a good story. Or it would be the story of a depressed or insecure person who is no sort of hero, no role model, who doesn’t deserve to be the focus of a story in the first place.

Excess of spirit is not just an oddity that we find in the human psyche; it’s something we expect from any healthy person.

1. See Excess Energy from July 24, 2016.

2. Yes, and my hometown Giants went down in the fourth game of the National League Division Series, when the bullpen collapsed in the top of the ninth inning. And we had such hopes.

Sunday, November 6, 2016

All Men Created Equal

In the movie Lincoln with Daniel Day-Lewis, a sequence depicts various of his cabinet members wrestling with the Emancipation Proclamation and the question of whether the black population is “equal” to the white citizenry, or merely “equal before the law.” Even one politician who secretly lives with a black woman can only concede the latter proposition but not the former. At the time I saw the movie this whole question left me stumped, and I still consider it a ding-dong situation—meaning the question itself does not apply.

Let’s start with the obvious case. No free person in the mid-19th century would consider a formerly enslaved population that was newly emancipated to be his or her intellectual, moral, or social equals. The free person has lived without overt coercion, without the fear of death and maiming for the slightest disobedience, with the opportunity to live as he or she wants—within reason and restricted only by social norms—and been permitted to obtain as much education as he or she desires. The enslaved person has been denied freedom, subjected to constant coercion, and forbidden an education. It is through the exercise of personal freedom, the use of one’s own reason, and the attainments of education that a person distinguishes him- or herself and finds his or her place in a society of equals. In 1863, the enslaved black population had none of this, and so could not be considered anyone’s social equals.

But this is not the core of my objection. The proposition that one person and another can be true equals in any intellectual, moral, or social sense—and here, by “social,” I mean in terms of obligations tendered and respect offered—is inane. No two people are exactly equal in any measure, not two persons of a similar race and background, not two persons of the same sex, not even two brothers or sisters. One person is always going to be smarter, more clever, or better educated. One is always going to be better natured or morally stronger. One is always going to be better liked, more respected, or due more personal consideration for achievements attained and good works performed. This is part of the human condition, in the same way that one person is always going to be taller, weigh more, or have a longer reach than the other. People come in all physical sizes, bodily shapes, moral characters, mental capacities, learned experiences, and educational developments. To try to make one person or population equal to the other—or to make yourself believe such a proposition—is a fool’s errand.1

But doesn’t it say in one of our founding documents, the Declaration of Independence, “that all men are created equal”? Wasn’t this a core belief—a “self-evident truth”—of the time? Didn’t people originally believe that all human beings could be compared and found to be no different, one from the other?

Well, not exactly. The author, Thomas Jefferson, was no fool. The quotation has to be read in the context of the second and third clauses: “that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”2 That is, they are equal—not in intellect, character, social standing, or physical attributes—but equal in the sight of their god, in their possession of certain rights, and so equal under the law.

This is why the question in the movie among the politicians of Lincoln’s cabinet so bothers me. How can mere men be asked to determine the equality of other men in any measurable dimension? How can any human being know another person or group so intimately that he or she can measure and find likeness with him- or herself or with another group?3 To make that judgment requires an intellectual and moral precision which stands outside of—and superior to—what is found in humanity. That is, the standing of a god. Other humans need not apply for the job.

So the only question left to politicians, lawyers, judges, and anyone else who operates in a legal, political, or social sphere is whether they, the emancipated black population—or any “other” in terms of the question—are equal under the law. In terms of the Declaration, the answer is “all men”—and by extension, overriding the prejudices of the times, all women, too. Anyone who qualifies as a human being is equal under the law. In the eyes of their creator, all people are equal in standing if not in quality of intellect, character, or other internal and external attributes. In standing before other political entities, such as in our republic under the Constitution, that equality before the law may be reserved for natural-born or naturalized citizens—although our law does not exempt foreigners from deserving respect and proper treatment; it simply denies them certain rights under the laws pertaining to citizens.

In our society—which I think is still one of the best in the world—people are not granted any more rights because they are smarter and better educated, or enjoy higher social standing and better political connections, or have access to more money. We have no natural aristocracy which can expect immunity under our laws. Everyone arrested and taken into custody goes to central holding until they can appear before a judge and try to post bail. That some people with money and connections will never spend a night in jail, no matter what they do, is popularly perceived as an injustice and not a proper application of the law. That people with money can buy a better defense at trial is countered in most judicial districts by the state providing public defenders to anyone without means.

Ours is not a perfect system. Injustices do occur. But this is because our society is managed by human beings; our institutions are established with good intentions but operated through the actions and perceptions of imperfect individuals. We should not try to improve this state of affairs by handing our rights and our fates over to higher orders of being such as angels, robots, or psychiatrists and social scientists. Instead, we live under a democratic system that permits average people—including their self-appointed advocates and journalists—to point out and discuss injustices, suggest remedies and alternatives, and put them up for a vote. This approach is sloppy, slow, and crude, but it works better than many more streamlined, idealized, artificial systems.

While differences in education, social standing, and wealth will not confer or deny rights under the law, and anyone who checks out as human is accepted into society and its protections, we do sometimes have to take into account marked differences in personal capability. Some people—whether through genetic inheritance, defect in the birthing process, disease, or accident—have lost the faculties that make them fit within accepted norms and so be accepted as fully capable in society. In most cases, they lack the mental capacity to function and so become vulnerable to reduced circumstances and predation by others. In some cases, they lack the moral depth or self-restraint expected of the average person and so become a danger to themselves or others. We have—any society should have—means of identifying, evaluating, and segregating these people from the rest of society.4 We do this for their own good and ours.

But these are not minor differences in mental or moral capacity. We do not deny the rights of a person who might be a few IQ points short of the average. Nor do we deny the rights of a person who has performed some minor indiscretion under the laws governing property or interpersonal relations. Our system is—or should be—designed to care for people who are incapable of functioning in society, and to protect society from those who have proved themselves resolute predators on their fellow citizens. And even those who have been distinguished by bad behavior rather than by diminished capacity are still allowed to change their outlook and redeem themselves.

So our society does, in these cases, make distinctions based on equality of intellectual and moral character, but only in the grossest and most obvious sense. We condemn only those falling in the lowest part of the normal spectrum of human development and achievement. Then our intent is only for the protection of the individual and society. And we are still, in these cases, only talking about equality before the law.

In even the most extreme cases, equality of personal essence, of character, or of soul still lies outside of human judgment, in the realm of whatever god or gods there may be.

1. About the only time we can reasonably call for and expect personal equality is in sporting contests. For example, we want two boxers or wrestlers to compete in the same weight class and have similar training and skills as established by previous performance. The same would apply—with obvious exceptions for the different positions played—to members of a baseball or football team. Certainly, if someone is going to bet on the outcome of a contest, he or she would expect a certain match in physical attributes and skills going into it.

2. The fact that Jefferson could believe his own words and at the same time hold black Africans enslaved, denying them liberty and their pursuit of happiness—and their lives, if he so chose—reflects a popular conceit of the 18th century. To the “civilized” white European, the “savage” black tribes of Africa were not entirely human, not fully members of the species H. sapiens. As such, they could not be granted equal rights with the white race. This is a latent belief that science and an improved morality have long since demolished—at least among people of greater education and better moral character.

3. In the matter of trying to judge a whole group, I side with Sergeant Kilrain, the fictional character in the movie Gettysburg. In a conversation with his colonel, Kilrain says, “Any man who judges by the group is a pea wit.”

4. Or we used to. In the case of people with clearly defined mental illness, our old system of care through certification and commitment to a state-run hospital has been overridden by concerns about the ill person’s rights. Essentially, we have lost the ability to distinguish between a healthy person deserving of full rights and an incapacitated person who cannot function in society. Where once we took care of them in hospitals, we now leave them to family care or let them roam the streets in proud, defiant misery with access to only occasional and poorly funded services. Something has broken down in our society, and we need the moral courage to fix it.

Sunday, October 30, 2016

Unique or Ubiquitous?

It’s the age-old question: Is life—that curious reversal of entropy—unique to the Earth or ubiquitous in the solar system, the galaxy, and/or the universe? We don’t have an answer for that yet—although a more thorough examination of the surfaces of Mars, Jupiter’s moon Europa, and the planets circling other stars at distances which permit liquid water to flow may soon provide more solid evidence, yea or nay.

Right now, the only hard evidence—and not counting teasing suggestions of ancient water courses on the surface of Mars, microbe-like bubbles in rocks from that planet, vast oceans and icy geysers on Europa, and so on—is that life seems to be ubiquitous on our home planet, Earth. It exists everywhere and adapts seamlessly to the harshest conditions. Life thrives in hot springs that would sterilize surgical instruments. It metabolizes sulfur particles in the volcanic heat of deep-ocean vents. It lives in Antarctic lakes so far beneath the ice that they never see daylight, and in deep caverns under the Earth’s surface that harbor eternal darkness. Life in the form of single-celled microbes has existed on this planet since the crust was cool enough to walk on and mineralized water first collected in puddles among the rocks. Life exploded half a billion years ago into multi-celled organisms that, through the wandering adaptations of evolution, have since then populated the oceans, crawled up on land, and then flopped back into the oceans. In many forms, and many times, life has invented complex structures for eyes, lungs, wings, and brains. Life has invented tools, and now those tool-users are inventing even more complex tools that emulate life in both its motions and its mind. Life has invented dreams, self-knowledge, and a personal sense of purpose.

But whether our kind of life originated here, through the accidental bonding of one atom to another in a chemical-rich tidal pool, or blew in from interstellar space as a microbial spore, or was deposited here intentionally by alien astronauts on a seeding expedition, or was simply left as an alien microbe inside an astronaut’s dropped glove—any proof one way or the other is lost in the Earth’s earliest history. The only clue we have is that all our forms of life—from ocean vents to deepest caves and all across the planet’s surface—use the same chemicals in the same recording system. All the life that we know uses just four DNA bases in a three-base reading frame, coding sixty-four possible combinations that call out just twenty different amino acids, to make all the proteins that comprise this planet’s viruses, bacteria, fungi, plants, animals, and human beings. If there ever were competing coding systems, evolutionary rivals to the DNA/RNA/protein domain on Earth—say, with different kinds of nucleic acids,1 or more or fewer possible base-pair combinations, or calling on other amino acids, of which there are many, or creating novel proteins—they all lost out long ago in competition with our kind of DNA coding. And then, in short order, atmospheric weathering and the predations of our kind of life wiped all trace of these competing systems from the face of the Earth.2
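The codon arithmetic in that paragraph is easy to check: four bases read three at a time yield 4³ = 64 combinations, which the standard genetic code maps onto just twenty amino acids plus stop signals.

```python
from itertools import product

# Enumerate the codon table's raw combinatorics: four DNA bases read in
# frames of three give 4**3 = 64 possible codons.
bases = "ACGT"
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
print(len(codons))  # 64
```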

But if life is ubiquitous elsewhere in our galaxy and throughout the universe, where is it? That’s the question originally posed by physicist Enrico Fermi. If billions of stars in our galaxy are similar to our Sun, and if a large fraction of them have Earth-like planets, and if life is as simple to create and ubiquitous as we believe—that is, not a divine act by a deity with a particular interest in this one planet—and if many of those stars and planets and ecosystems are far older than Earth, so that ancient spacefaring civilizations should by now have grown up, spread out to other star systems, started exploring their neighborhood and sending radio and other electromagnetic transmissions back and forth … then where is everybody?

I’ve already given one answer.3 If the Sol System, at four billion years old, is one-third of the estimated age of the universe, and if just one of the planets in our system has taken this long, first to develop life itself, and then for that life to become intelligent enough to conduct something so complex as a civilization, and if we’ve only just sent our first probes to other planets and out among the stars … then perhaps ancient, spacefaring cultures are not as common as one might expect. And it’s a big galaxy. The farthest stars we can see with our naked eyes are only about 15,000 light-years away, and the galaxy itself is more than six times that, about 100,000 light-years in diameter, with a lot of it hidden by interstellar dust. So an active star-based civilization might lie on the other side of the great spiral, and we would never know it. And they would likely never come visiting.

A second answer to the question of where is everybody lies in the imponderable distances between stars. We imagine that humanity will one day form its own interstellar trading empire, composed of colonies sent out from Earth. We imagine that other intelligent species from other star systems have already established their own empires and will one day come visit us, either as explorers, inquisitive academics, and benevolent diplomats or as resource-hungry conquerors.

Both of these imaginings require the existence of ships and interstellar drives that will bridge the distances between the stars in a reasonable amount of time. Right now, these ships and drives are the products of fantasy, created either by writers who want to place their dramas out among the stars among alien peoples, or by mathematicians and scientists who believe that a twist in the physics we know will let these drives and ships one day become reality. But for now, they are fantasy.

Traveling faster than light is prohibited by the persuasive theories and mathematical conclusions of Albert Einstein. Any object with mass—and a photon has none—cannot attain even the speed of light, let alone exceed it, because as it approaches that speed its effective mass grows without bound and its passengers’ perception of time slows toward a stop. Perhaps this limit itself is a mathematical chimera, a fantasy, and ships can zoom past 299,792 kilometers per second (186,282 miles per second) with no ill effects. Perhaps the speed of light as a physical limit has no more standing than the speed of sound did back in the early days of jet aircraft. But for now, that’s not the way to bet.

Of course, any change in a ship’s velocity in space requires acceleration, and a rocket achieves this only by ejecting mass in a direction opposite to the direction of travel. The governing formula is F=ma, force equals mass times acceleration, Newton’s Second Law of Motion. If you’re going to travel fast, you have to carry fuel with you. Every rocket escaping Earth’s gravity burns fuel and oxidizer to create thrust. And early in the flight, you must carry great quantities of fuel and oxidizer—far more than your payload. Notions of other propulsion systems, like “gravity polarizers” or “impulse drives,” are just that—fanciful notions.
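The fuel problem follows from the Tsiolkovsky rocket equation, a standard result (not something stated in the text above): the ratio of liftoff mass to delivered mass grows exponentially with the velocity change you want. A quick sketch, with illustrative numbers:

```python
import math

# Tsiolkovsky rocket equation: m0/m1 = exp(delta_v / v_e), where m0 is
# liftoff mass, m1 final mass, and v_e exhaust velocity. The figures
# below are illustrative assumptions, not mission data.

def mass_ratio(delta_v, exhaust_velocity):
    """Mass ratio m0/m1 needed for a given velocity change (same units)."""
    return math.exp(delta_v / exhaust_velocity)

# ~9.4 km/s of delta-v to reach low Earth orbit, with a hydrogen/oxygen
# exhaust velocity of ~4.4 km/s:
ratio = mass_ratio(9.4, 4.4)
print(f"liftoff mass / delivered mass = {ratio:.1f}")
```

Roughly eight and a half tons at liftoff for every ton delivered to orbit; and because the ratio grows exponentially with the velocity change, interstellar speeds are punishing for any rocket that must carry its own propellant.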

Perhaps you can get to the stars by collecting interstellar dust and hydrogen from the region ahead of the ship with a magnetic sweep, compressing it inside a fusion drive, and blasting it out the back—the principle behind the Bussard ramjet. But that sweep has to catch an awful lot of dust and gas for a hard burn. At those densities, the impact of the gas coming at you and being slowed and captured by your magnetic field becomes a serious factor. Perhaps, instead, you can rig huge, reflective sails that catch the emitted sunlight and solar wind of ejected particles from your own star, which lies behind you, and let them drive you outward. But that force weakens with the square of your distance from the star. Certainly, you can blast free of your own planet’s gravity and then from your star’s gravity with a rocket or a ramjet or a solar sail and then coast the rest of the way to the nearest star. But that’s the slow way, the really slow way. Coasting outward would require your passengers to enter “suspended animation” or “hyper sleep”—another unproven technology—for hundreds or thousands of years on the voyage. Or, instead, they might pass their genes down through succeeding generations, creating their own mini-civilization, while the ship travels to Proxima Centauri, our nearest neighbor with a possibly habitable planet.4

Writers and scientists imagine ways to get around these limits. One way is to punch a hole through the fabric of spacetime itself, assuming that fabric is wadded up like a crumpled piece of paper or twisted piece of laundry, so that entering the hole at one set of temporospatial coordinates takes you effortlessly and timelessly—without violating the law about light speed—to another set of coordinates which may lie any imaginable distance away. Another trans-light transport system would collapse the fabric of space ahead of a vehicle which itself is moving at sub-light speeds—and stretch out that fabric behind the ship—so that the ship rides the “warp” at any imaginable speed faster than a beam of light but without the ship actually going faster than light in physical spacetime. Aside from the epistemological trickery of moving faster than light through space by manipulating space itself, either of these methods assumes that space has some kind of structure or substance rather than being empty nothingness inhabited by random molecules of gas and dust. The notion that space has more than three physical dimensions—x, y, and z, or sideways, up-and-down, and forward-and-back—and one dimension of time is the subject of mathematical speculation. Physicists can play with these dimensions in their minds and write formulas about them. But no one has ever gone into them, pushed an object through them, or managed to tweak them using any amount of force.

Right now, putting aside all the fantastical drives and all the ways the universe might operate through speculative mathematics, the only method we have for traveling to another star—the only way that we know works—is the blast-free-and-coast method. It’s the way the Voyager probes have left the Sun’s immediate environment … and they’ve been traveling for almost forty years now.5 Maybe alien intelligences will have access to different mathematics, greater energy resources, and different conceptions of space and time. Maybe one day we humans will discover or create these things for ourselves, so that the limits imposed on interstellar travel by our current physical laws will disappear, just like the barrier once represented by “the speed of sound.”

But even with a really solid push, the trip to Proxima Centauri and its newly discovered, possibly habitable planet is going to take much longer than the 4.24 years required at light speed. Even if we could travel almost that fast and establish a colony that wanted to communicate, trade, or even remain in touch with Earth, the distance makes those interactions problematic. A phone call with a time lag of 4.24 years becomes impossibly frustrating. A shipment of goods that takes huge energies and decades of transit time to deliver becomes impossibly impractical. Encyclopedic knowledge and new scientific discoveries might be worth encoding and sending by tight beam to the colony world, but you would never know how much got garbled in transmission or was misunderstood and misapplied on receipt by people who barely speak your language anymore and no longer share your culture.
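For readers who like to see the arithmetic, the conversation problem reduces to a few lines of Python. The 4.24-light-year distance is the figure used above; everything else follows from light being the fastest possible messenger.

```python
# A back-of-the-envelope sketch of the interstellar "phone call" problem.
# The distance to Proxima Centauri (4.24 light-years) is the essay's own
# figure; a light-year is, by definition, one year of signal travel time.

LIGHT_YEARS_TO_PROXIMA = 4.24

one_way_delay_years = LIGHT_YEARS_TO_PROXIMA   # years for your signal to arrive
round_trip_years = 2 * one_way_delay_years     # the soonest you could hear an answer

print(f"One-way signal delay: {one_way_delay_years:.2f} years")
print(f"Fastest possible question and answer: {round_trip_years:.2f} years")
```

A single exchange of question and reply takes almost eight and a half years at best, which is why the essay calls such a call impossibly frustrating.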

If your children or business partners or planetary administrators embark for a life among the stars—even as close a star as Proxima—then you must kiss them goodbye and go on about your Earthly business. If you embark on the journey yourself, then your new family, your trading partners, and your social structure are all sleeping in the pods next to yours.

Travel to the stars—by any system that we can say for sure works—will be a one-way migration. The colonists are no more going off to create a trading or political empire than the first bands of Homo sapiens who wandered out of Africa some 65,000 years ago, making their way on foot and by dugout canoe, and eventually pitching their tents in Arabia, Eastern Asia, and the Americas, were intent on trading with or expanding the political sphere of the people they left behind, half a world away, in Africa.

Perhaps life exists across the universe. I certainly believe it does—that our curious reversal of entropy is not unique to Earth but ubiquitous throughout the stars. But unless other intelligences have access to a different and greater understanding of space, time, matter, and energy, they will live as they started: isolated pools on planets separated by imponderable distances. And whether they arrived in each place in their current form by traveling in blast-and-coast ships, or their distantly ancestral DNA blew in as a sporulated microbe riding a chunk of dust and then evolved into uniquely adapted species—this matters not much at all. The travelers will forget about the “home world.” Their children will not know it except in ever more fancifully embroidered stories and legends. And no one will ever go back—only outward in blind migration.

And that’s where everyone is.

1. Of course, the deoxyribonucleic acid (DNA) in the cells of microbes and in the cellular nuclei of multi-celled organisms is transcribed into messenger ribonucleic acid (mRNA) during the production of proteins. This molecule carries a hydroxyl group on the ribose ring’s second prime carbon atom, which DNA’s deoxyribose lacks, and it substitutes the base uracil for thymine among the four bases of the code. These differences suggest that RNA is an earlier and possibly competing form of the pattern-encoding molecule, or that DNA is an evolutionary development out of RNA. But both are still part of the same coding system.

2. For more on this theme, see DNA is Everywhere from September 5, 2010.

3. See Where Are They? from July 6, 2014.

4. See “Potentially Habitable Planet Found Orbiting Star Closest to Sun” in National Geographic News, August 24, 2016.

5. See Voyager: The Interstellar Mission by NASA’s Jet Propulsion Laboratory.

Sunday, October 23, 2016

Investing in the Future

I’ve been chasing Moore’s Law1 for almost forty years now. The chase started back in the mid-1970s, when I began to get serious about my fiction writing, and for that I needed a good typewriter. The IBM Selectric I had at work was a wonderful machine, and I lusted after one for my personal use. I learned that IBM could outfit these machines with internal padding and a hush hood to deaden the clack! of the ball striking the platen, and this was necessary because I got up early in the morning to write and didn’t want to disturb my sleeping bride. They also made a model with a backspacing correction tape, which I could certainly use, because I was always a fumble-fingered typist but also a textual perfectionist who spent half my time cranking the page up and away from the platen to erase my mistakes.

Back then, you didn’t just walk into an IBM store—although such existed—and buy a typewriter. IBM was a business-to-business enterprise before anyone knew exactly what that meant. To buy a new Selectric that was set up the way I wanted, I had to make an appointment with an IBM sales representative to come to my home and order the machine to be specially manufactured for me, right down to platen width and paint color. I think I paid about seven hundred dollars for this typewriter and waited several weeks for it to be manufactured and delivered. That was a lot of money back then, but I was a serious writer and this was going to be a lifetime investment.

In the next four years, I probably put 80,000 words through that machine—or 120,000 if you count backspacing corrections. That word count comprised one complete novel manuscript and about half of another.2 But by then, about 1979, every day on my way to work at a new job in San Francisco, I passed a store called Computerland. One of my roommates in college had majored in computer science, and as a science fiction writer I was always fascinated by computers. So I stopped in and started asking questions. Could the machines do this? Could they do that? I knew there were computerized game consoles out in the world, but I wanted a real computer, not a one-trick pony. The salesman patiently explained that, yes, it could do whatever I wanted, so long as the machine was programmed for it.3

On that basis, and with thoughts of embarking on a new hobby—and maybe a new career—centered around computer programming, I bought an Apple II. It was the full-blown machine, with 16,000 bytes—essentially equivalent to characters—of read-only memory (ROM) for its operating system and another 48,000 bytes of random-access memory (RAM) for the programs I would write. Not having a spare television to use as a computer display, I bought a small monitor with a green-pixel-on-black screen. Not wanting to record and play my programs with a cassette tape, I bought a separate drive that could read 5.25-inch floppy disks with a capacity of 103,000 bytes. The whole setup cost me something like $2,500—much more than my fancy IBM typewriter—but it was an investment in learning a new and exciting business, programming for fun and profit. I even bought myself a subscription to Byte magazine and joined the Apple Users Group.

What I quickly discovered was that programming was easy for an English major to learn, because it involves both logical thinking and fetish-level attention to new rules of grammar and punctuation. But to write elegant programs that performed truly clever feats was a specialty all its own. I could make the machine do simple tasks in the BASIC language, and even dipped into the structured language Pascal. But the first time I saw a Pong game where the puck moved in a curve responding to a supposed gravity field, and I tried to parse and understand the coding involved, I discovered that I was years too late for getting in on the ground floor of professional computer programming. My best talent lay in telling stories rather than making pixels dance.

But along about this time I also discovered that the computer made a marvelous writing machine. It wasn’t linear, like a typewriter laying a track of words, line after line, moving down the page. It didn’t need a backspacing correction key and yards of expensive whiteout tape to fix my fumble fingers. It didn’t require cutting the paper with scissors and taping it back together to move paragraphs around. While the word-processing software offered for the Apple II’s native operating system was fairly limited, an adapter card running a Zilog Z80 microprocessor would let me use the CP/M system and WordStar—and that was high-powered stuff indeed. The one snag was the printer I would need to output my writing efforts. All the Apple models put faint, gray, dot-matrix characters on flimsy thermal paper, which no publisher would accept. So in addition to a new processor card and an expensive piece of software, I plunked down $5,000 for an NEC Spinwriter printer. It did not come with padding and a hush hood, and the machine-gun clatter drove my wife out of the apartment on the days I needed to print out a manuscript. But I was in the writing business at full power.

Since the 103,000-character capacity of the Apple floppies could barely hold a single chapter of a novel, and even that required a second disk drive, because the first drive was occupied running the WordStar program itself, I knew the Apple II, with however many additional cards, was not long for this world. By that time I was ready to step up to an expert’s CompuPro passive-backplane S-100 system with more robust overall construction, including an Intel 8088 chip set and dual drives reading full-size, eight-inch, one-megabyte floppies. That was the machine that produced my first published novel. A couple of years later, in 1987—and again emulating the machines we now had at work—I gave up the CompuPro for an IBM AT-286 system running my first hard-disk drive, with all of twenty megabytes. I also traded the impact printer for a Hewlett-Packard LaserJet printer, which was faster and quieter.

Over the next fifteen years, I did not buy a new computer system. Instead, I replaced every piece and part of that original IBM three times over: three new motherboards with faster chips and more memory, two new cases and power supplies, two new monitors, four new keyboards, a new printer, dozens of different mice and trackballs, and newly added peripherals like a flatbed scanner and a sound system. I also upgraded my operating system, going from IBM-DOS to PC-DOS to OS/2 to Windows 95. I went from WordStar to Microsoft Word—where I’ve pretty much stayed, just to maintain readability across my various manuscripts and other projects. I added a ton of other software, though, giving me new capabilities in calculation, page layout, project management and scheduling, computer graphics, photo manipulation, audio and video creation, presentations, speech recognition, and any of those other things a computer can do so long as it has the right programming. And finally, after the last upgrade—having paid about three times the cost of a new desktop computer just to bring my now scratch-built home system up to current standards with separately purchased components—and having finally become disappointed with the current Windows systems—I moved over to the Apple world again with a new Mac Pro and all new software.

And that machine is now in its second generation and more powerful than ever. As of this writing, I’m pushing six individual processing cores accessing 32 billion bytes of memory, running the operating system off a solid-state drive for fast startup and processing, and linking the machine to assorted other hard drives for working storage and backup, each with about two trillion bytes of capacity. I use this system to write, format in HTML, and lay out the print-on-demand pages for my novels, maintain my author’s website, process photographs, record and edit the occasional video, maintain my music collection, and do whatever else a computer with the right software programs can do.

The point of my story is not to tell you how great my system is. Anyone with the need for processing speed, storage capacity, and communications capability can trot down to the Apple store, Best Buy, or wherever and purchase this stuff the same day right out of the store’s stock on hand. The point is that the technology in just this one area of writing and communicating has improved so greatly in just the last forty years.

My Apple II in 1979 had about the same random-access memory capacity as the IBM System 360 that ran the entire University Park campus at Penn State when I was there a decade earlier. My first twenty-megabyte hard drive in the IBM AT-286 had four times the storage capacity of the System 360’s clothes washer–sized drums. Today, my pocket telephone has more processing power, more memory, and more embedded software than any of these systems. And I don’t have to push keys and formulate commands in a specialized, coded language. Instead, I tap on little pictures and what I want pops up instantly.
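For anyone who wants to check the arithmetic behind this growth, here is a back-of-the-envelope sketch in Python. It assumes the canonical two-year doubling period of Moore’s Law; the memory figures, 48,000 bytes in the 1979 Apple II and 32 billion bytes in the current Mac Pro, are the ones given in the essay.

```python
# A rough check of the essay's "chasing Moore's Law" arithmetic.
# Assumption: the canonical doubling period of two years. The 1979 and
# 2016 RAM figures are the essay's own.

DOUBLING_PERIOD_YEARS = 2

years_elapsed = 2016 - 1979                   # 37 years of upgrades
doublings = years_elapsed / DOUBLING_PERIOD_YEARS
predicted_growth = 2 ** doublings             # what the law predicts

actual_ram_growth = 32_000_000_000 / 48_000   # the growth actually bought

print(f"Doublings predicted: {doublings:.1f}")
print(f"Predicted growth factor: about {predicted_growth:,.0f}")
print(f"Actual RAM growth factor: about {actual_ram_growth:,.0f}")
```

The prediction (roughly 370,000-fold) and the actual figure (roughly 670,000-fold) land within a factor of two of each other, which is about as close as a forty-year memoir and an idealized law can be expected to agree.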

Over the years, I have chased Moore’s Law with a vengeance. To do that, I have had to buy, discard and buy anew, and then buy all over again all of the hardware, software, and peripherals comprising my home system. I have had to grapple with and learn a new technical language, teach myself new software and new functions, and change to new ways of working. I’ve done all this cheerfully, because each step represented an amazing increase in the speed, capability, and convenience of my main writing tool. And each time I have consigned to the closet or the recycle center a piece of equipment that just three or four years ago was shiny and new but now is old, slow, outmoded, and no longer supported—but never because it was faulty or broke down. As an early adopter of this technology, I know I’ve been making an “investment in the future.” If I and others like me didn’t buy into the next wave and support the continuing development of this technology, then the developments would stop coming, we would all enter a vast middle period of same-old-same-old, and the future would become a little less bright.

In the wider world, we have all seen the same turnover of technology in other contexts. Each advance in automotive design brings us cars that are more fuel efficient, lighter, and structurally safer, with more safety and convenience features like satellite navigation and rear-view cameras. We get household appliances that are more energy efficient, quieter, with greater capacity in a smaller footprint. We get cell phones that are less bulky and less expensive, offer greater coverage, and double as record players, note takers, appointment books, cameras, calorie trackers, and anything else you can make of a camera, microphone, GPS antenna, screen, and data stream. And in each case we buy into the next generation of technology not because the old model no longer works or works badly, but because it simply can’t keep up.

Is this process of creation and destruction a bad thing? It is, if you view each purchase in your life as I did that IBM Selectric typewriter: as a lifetime investment.4 You might buy good furniture that way, because the seating capacity and underlying structure of sofas and chairs hasn’t changed much in a hundred years, although the materials have certainly improved. But any tool that is susceptible to improvements in design, energy use, materials, and connectivity is now going to be subject to a process of continuing evolution. This is how nature improves on organic structures and capabilities. This is the course of technological innovation and obsolescence that Western Civilization has been following since the rise of scientific trailblazers like Newton and Descartes and inventors like Fulton, Edison, and Bell.

Sometimes, I think about stepping off this escalator to the future. I dream about writing with a really good fountain pen in a notebook filled with pages of creamy white paper. At age sixteen I wrote my very first novel that way, longhand, in pen, with the second draft pecked out on my grandfather’s upright Underwood typewriter with the glass insets. I typed on two sheets at once sandwiched with carbon paper in between. It’s a dream of returning to a simpler age of slow changes and eternal values.

But, damn, no! Erasing and correcting all that fumble-fingered typing on two copies with an eraser shield stuck in front of the carbon layer … Hell, no! Never again!

1. For our visitors from Alpha Centauri, Gordon Moore of Fairchild Semiconductor and later co-founder of Intel predicted in 1965 that the number of transistors per square inch in an integrated circuit would double every year, a rate he revised in 1975 to a doubling every two years. What he meant was that computers and their component chips would keep getting exponentially smaller and more powerful, and as a corollary the cost of computing power and capability would go down. As of right now, the law is still in effect, although some predict that when circuit widths in complementary metal-oxide semiconductor (CMOS) transistors get down to about seven to five nanometers—a capability predicted to arrive sometime in the early 2020s—the shrinking will stop due to the vagaries of quantum mechanics. Thomas’s Law predicts that, by the time this happens, some new technology will likely have already made the transistor obsolete.

2. I wrote two complete books and started several more before embarking on my first manuscript to be published, The Doomsday Effect, which came out in 1986. And even that novel took a wrong turn at the beginning and had to be completely rethought and rewritten before it could find a home with Baen Books. This is part of any author’s story: the “first novel” is almost never the first book you try to write. Those first, stillborn books are the process of learning the craft.

3. This was a bit of an exaggeration, of course. Any machine has built-in limitations, which is why a Volkswagen is not a Ferrari. But in essence what he said was true: computers simply run programs, and the program becomes the core of whatever the machine is supposed to be doing.

4. About a dozen years ago I took that IBM Selectric down to a used typewriter store and gave it to them, hoping it would find a good home. I had kept the machine only to fill out paper forms, and by then virtually every transaction in my life was online. That typewriter still worked perfectly, but I needed the desk space.