Sunday, October 30, 2016

Unique or Ubiquitous?

It’s the age-old question: Is life—that curious reversal of entropy—unique to the Earth or ubiquitous in the solar system, the galaxy, and/or the universe? We don’t have an answer for that yet—although a more thorough examination of the surfaces of Mars, Jupiter’s moon Europa, and the planets circling other stars at distances which permit liquid water to flow may soon provide more solid evidence, yea or nay.

Right now, the only hard evidence—not counting teasing suggestions of ancient water courses on the surface of Mars, microbe-like bubbles in rocks from that planet, vast oceans and icy geysers on Europa, and so on—is that life seems to be ubiquitous on our home planet, Earth. It exists everywhere and adapts seamlessly to the harshest conditions. Life thrives in hot springs that would sterilize surgical instruments. It metabolizes sulfur particles in the volcanic heat of deep-ocean vents. It lives in Antarctic lakes so far beneath the ice that they never see daylight, and in deep caverns under the Earth’s surface that harbor eternal darkness. Life in the form of single-celled microbes has existed on this planet since the crust was cool enough to walk on and mineralized water first collected in puddles among the rocks. Life exploded half a billion years ago into multi-celled organisms that, through the wandering adaptations of evolution, have since then populated the oceans, crawled up on land, and then flopped back into the oceans. In many forms, and many times, life has invented complex structures for eyes, lungs, wings, and brains. Life has invented tools, and now those tool-users are inventing even more complex tools that emulate life in both its motions and its mind. Life has invented dreams, self-knowledge, and a personal sense of purpose.

But whether our kind of life originated here, through the accidental bonding of one atom to another in a chemical-rich tidal pool, or blew in from interstellar space as a microbial spore, or was deposited here intentionally by alien astronauts on a seeding expedition, or was simply left as an alien microbe inside an astronaut’s dropped glove—any proof one way or the other is lost in the Earth’s earliest history. The only clue we have is that all our forms of life—from ocean vents to deepest caves and all across the planet’s surface—use the same chemicals in the same recording system. All the life that we know uses just four DNA bases read in three-base codons, yielding sixty-four possible combinations that call out just twenty different amino acids, to make all the proteins that comprise this planet’s viruses, bacteria, fungi, plants, animals, and human beings. If there ever were competing coding systems, evolutionary rivals to the DNA/RNA/protein domain on Earth—say, with different kinds of nucleic acids,1 or more or fewer possible base combinations, or calling on other amino acids, of which there are many, or creating novel proteins—they all lost out long ago in competition with our kind of DNA coding. And then, in short order, atmospheric weathering and the predations of our kind of life wiped all trace of these competing systems from the face of the Earth.2
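
For readers who want the arithmetic behind those numbers, here is a minimal sketch in Python (my illustration, not part of the original argument) that simply enumerates the codons:

    from itertools import product

    BASES = "ACGT"   # the four DNA bases: adenine, cytosine, guanine, thymine

    # Every codon is a run of three bases, so the code allows 4 ** 3 possibilities.
    codons = ["".join(triplet) for triplet in product(BASES, repeat=3)]
    print(len(codons))   # 64

Those sixty-four codons map onto only twenty amino acids, plus start and stop signals, which is why biologists call the code degenerate.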

But if life is ubiquitous elsewhere in our galaxy and throughout the universe, where is it? That’s the question originally posed by physicist Enrico Fermi. If billions of stars in our galaxy are similar to our Sun, and if a large fraction of them have Earth-like planets, and if life is as simple to create and ubiquitous as we believe—that is, not a divine act by a deity with a particular interest in this one planet—and if many of those stars and planets and ecosystems are far older than Earth, so that ancient spacefaring civilizations should by now have grown up, spread out to other star systems, started exploring their neighborhood and sending radio and other electromagnetic transmissions back and forth … then where is everybody?
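
As an aside, the chain of “ifs” in Fermi’s question was later formalized as the Drake equation, a simple product of guessed-at factors. Here is a toy version in Python; every number below is a placeholder of my own choosing, included only to show how the chain multiplies out:

    # Toy Drake-equation estimate. Every factor is a made-up guess,
    # chosen only to illustrate how the chain of "ifs" multiplies out.
    star_formation_rate = 1.5            # new stars per year in the galaxy
    frac_with_planets = 0.9
    habitable_worlds_per_star = 0.3
    frac_that_develop_life = 0.1
    frac_that_develop_intelligence = 0.1
    frac_that_communicate = 0.1
    civilization_lifetime_years = 10_000

    n_civilizations = (star_formation_rate * frac_with_planets
                       * habitable_worlds_per_star * frac_that_develop_life
                       * frac_that_develop_intelligence * frac_that_communicate
                       * civilization_lifetime_years)
    print(n_civilizations)               # about 4 with these made-up numbers

Change any one guess by a factor of ten and the answer swings from an empty galaxy to a crowded one, which is exactly why the question stays open.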

I’ve already given one answer.3 If the Sol System, at four and a half billion years old, is roughly one-third of the estimated age of the universe, and if just one of the planets in our system has taken this long, first to develop life itself, and then for that life to become intelligent enough to conduct something so complex as a civilization, and if we’ve only just sent our first probes to other planets and out among the stars … then perhaps ancient, spacefaring cultures are not as common as one might expect. And it’s a big galaxy. The farthest stars we can see with our naked eyes are only about 15,000 light-years away, and the galaxy itself is more than six times that, about 100,000 light-years in diameter, with a lot of it hidden by interstellar dust. So an active star-based civilization might lie on the other side of the great spiral, and we would never know it. And they would likely never come visiting.

A second answer to the question of where is everybody lies in the imponderable distances between stars. We imagine that humanity will one day form its own interstellar trading empire, composed of colonies sent out from Earth. We imagine that other intelligent species from other star systems have already established their own empires and will one day come visit us, either as explorers, inquisitive academics, and benevolent diplomats or as resource-hungry conquerors.

Both of these imaginings require the existence of ships and interstellar drives that will bridge the distances between the stars in a reasonable amount of time. Right now, these ships and drives are the products of fantasy, created either by writers who want to place their dramas out among the stars among alien peoples, or by mathematicians and scientists who believe that a twist in the physics we know will let these drives and ships one day become reality. But for now, they are fantasy.

Traveling faster than light is prohibited by the persuasive theories and mathematical conclusions of Albert Einstein. Any object with rest mass—which is to say, anything more substantial than a photon—cannot even attain the speed of light, let alone exceed it, because the energy needed to accelerate it grows without limit as it approaches that speed, and time aboard the ship, as measured by observers left behind, slows toward a stop. Perhaps this limit itself is a mathematical chimera, a fantasy, and ships can zoom past 299,792 kilometers per second (186,282 miles per second) with no ill effects. Perhaps the speed of light as a physical limit has no more standing than the speed of sound did back in the early days of jet aircraft. But for now, that’s not the way to bet.
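
For the curious, a minimal Python sketch (my own illustration, using only the standard relativistic formula) shows how steeply that penalty climbs as a ship closes in on c:

    import math

    C = 299_792_458.0   # speed of light in meters per second

    def lorentz_factor(v):
        """Gamma = 1 / sqrt(1 - v^2 / c^2): the factor by which the energy cost
        of further acceleration, and the time-dilation effect, grow near light speed."""
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    for fraction in (0.5, 0.9, 0.99, 0.999, 0.9999):
        print(f"{fraction:6.4f} c  ->  gamma = {lorentz_factor(fraction * C):10.2f}")

At half the speed of light the penalty is modest, a gamma of about 1.15; at 99.99 percent of light speed it is about 71, and it keeps climbing without bound.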

Of course, any change in a ship’s velocity in space requires thrust, and the only way we know to generate thrust in a vacuum is by ejecting mass in the direction opposite to the direction of travel. The governing relations are Newton’s laws of motion: force equals mass times acceleration (F = ma), and every action produces an equal and opposite reaction. If you’re going to travel fast, you have to carry fuel with you. Every rocket escaping Earth’s gravity burns some fuel and oxidizer to create thrust. And early in the flight, you must carry great quantities of fuel and oxidizer—far more than your payload. Notions of other propulsion systems, like “gravity polarizers” or “impulse drives,” are just that—fanciful notions.
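
How badly the fuel requirement snowballs is captured by the Tsiolkovsky rocket equation, which is implied but not spelled out above. A rough sketch, with the numbers chosen by me only as round illustrations:

    import math

    def mass_ratio(delta_v, exhaust_velocity):
        """Tsiolkovsky: delta_v = v_e * ln(m0 / m1), so the ratio of fueled mass
        to dry mass grows exponentially with the velocity change you ask for."""
        return math.exp(delta_v / exhaust_velocity)

    # A good chemical engine exhausts at roughly 4.5 km/s; reaching low Earth
    # orbit takes on the order of 9.4 km/s of delta-v once losses are counted.
    print(round(mass_ratio(9_400, 4_500), 1))   # both in m/s; about 8

A mass ratio of eight means that for every ton of ship and payload you want in orbit, you start with roughly eight tons on the pad, almost all of it propellant, and the arithmetic only gets worse at interstellar speeds.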

Perhaps you can get to the stars by collecting interstellar dust and hydrogen from the region ahead of the ship with a magnetic sweep, compressing it inside a fusion drive, and blasting it out the back—the principle behind the Bussard ramjet. But that sweep has to catch an awful lot of dust and gas for a hard burn. At those densities, the drag of the gas coming at you and being slowed and captured by your magnetic field becomes a serious factor. Perhaps, instead, you can rig huge, reflective sails that catch the emitted sunlight and solar wind of ejected particles from your own star, which lies behind you, and let them drive you outward. But that force weakens with the square of your distance from the star. Certainly, you can blast free of your own planet’s gravity and then from your star’s gravity with a rocket or a ramjet or a solar sail and then coast the rest of the way to the nearest star. But that’s the slow way, the really slow way. Coasting outward would require your passengers to enter “suspended animation” or “hyper sleep”—another unproven technology—for hundreds or thousands of years on the voyage. Or, instead, they might pass their genes down through succeeding generations, creating their own mini-civilization, while the ship travels to Proxima Centauri, our nearest neighbor with a possibly habitable planet.4
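
To put “the really slow way” in rough numbers, here is a quick back-of-the-envelope calculation of my own, using Voyager 1’s published cruise speed of about 17 kilometers per second as a stand-in for a blast-and-coast ship:

    KM_PER_LIGHT_YEAR = 9.461e12   # kilometers in one light-year
    DISTANCE_LY = 4.24             # distance to Proxima Centauri
    COAST_SPEED_KM_S = 17.0        # roughly Voyager 1's speed leaving the Sun's neighborhood

    seconds = DISTANCE_LY * KM_PER_LIGHT_YEAR / COAST_SPEED_KM_S
    years = seconds / (3600 * 24 * 365.25)
    print(f"{years:,.0f} years")   # on the order of 75,000 years of coasting

Which is why the hibernation pod and the generation ship keep showing up in fiction: with known propulsion, nobody who boards lives to see the arrival.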

Writers and scientists imagine ways to get around these limits. One way is to punch a hole through the fabric of spacetime itself, assuming that fabric is wadded up like a crumpled piece of paper or twisted piece of laundry, so that entering the hole at one set of temporospatial coordinates takes you effortlessly and timelessly—without violating the law about light speed—to another set of coordinates which may lie any imaginable distance away. Another trans-light transport system would collapse the fabric of space ahead of a vehicle which itself is moving at sub-light speeds—and stretch out that fabric behind the ship—so that the ship rides the “warp” at any imaginable speed faster than a beam of light but without the ship actually going faster than light in physical spacetime. Aside from the epistemological trickery of moving faster than light through space by manipulating space itself, either of these methods assumes that space has some kind of structure or substance rather than being empty nothingness inhabited by random molecules of gas and dust. The notion that space has more than three physical dimensions—x, y, and z, or sideways, up-and-down, and forward-and-back—and one dimension of time is the subject of mathematical speculation. Physicists can play with these dimensions in their minds and write formulas about them. But no one has ever gone into them, pushed an object through them, or managed to tweak them using any amount of force.

Right now, putting aside all the fantastical drives and all the ways the universe might operate through speculative mathematics, the only method we have for traveling to another star—the only way that we know works—is the blast-free-and-coast method. It’s the way the Voyager probes have left the Sun’s immediate environment … and they’ve been traveling for almost forty years now.5 Maybe alien intelligences will have access to different mathematics, greater energy resources, and different conceptions of space and time. Maybe one day we humans will discover or create these things for ourselves, so that the limits imposed on interstellar travel by our current physical laws will disappear, just like the barrier once represented by “the speed of sound.”

But even with a really solid push, the trip to Proxima Centauri and its newly discovered, possibly habitable planet is going to take much longer than the four and a quarter years it would take at light speed. Even if we could travel almost that fast and establish a colony that wanted to communicate, trade, or even remain in touch with Earth, the distance makes those interactions problematic. A phone call in which each reply takes 4.24 years to arrive becomes impossibly frustrating. A shipment of goods that takes huge energies and decades of transit time to deliver becomes impossibly impractical. Encyclopedic knowledge and new scientific discoveries might be worth encoding and sending by tight beam to the colony world, but you would never know how much got garbled in transmission or was misunderstood and misapplied on receipt by people who barely speak your language anymore and no longer share your culture.

If your children or business partners or planetary administrators embark for a life among the stars—even as close a star as Proxima—then you must kiss them goodbye and go on about your Earthly business. If you embark on the journey yourself, then your new family, your trading partners, and your social structure are all sleeping in the pods next to yours.

Travel to the stars—by any system that we can say for sure works—will be a one-way migration. The colonists are no more going off to create a trading or political empire than the first bands of Homo sapiens who wandered out of Africa some 65,000 years ago, making their way on foot and by dugout canoe, and eventually pitching their tents in Arabia, Eastern Asia, and the Americas, were intent on trading with or expanding the political sphere of the people they left behind, half a world away, in Africa.

Perhaps life exists across the universe. I certainly believe it does—that our curious reversal of entropy is not unique to Earth but ubiquitous throughout the stars. But unless other intelligences have access to a different and greater understanding of space, time, matter, and energy, they will live as they started: isolated pools on planets separated by imponderable distances. And whether they arrived in each place in their current form by traveling in blast-and-coast ships, or their distantly ancestral DNA blew in as a sporulated microbe riding a chunk of dust and then evolved into uniquely adapted species—this matters not much at all. The travelers will forget about the “home world.” Their children will not know it except in ever more fancifully embroidered stories and legends. And no one will ever go back—only outward in blind migration.

And that’s where everyone is.

1. Of course, deoxyribonucleic acid (DNA) in the cells of microbes and in the cellular nuclei of multi-celled organisms is transcribed into messenger ribonucleic acid (mRNA) during the production of proteins. This single-stranded molecule—whose ribose keeps the hydroxyl group on the ring’s second (2′) carbon atom that DNA’s deoxyribose lacks—substitutes the base uracil for thymine among the four bases of the code. These differences suggest that RNA is an earlier and possibly competing form of the pattern-encoding molecule, or that DNA is an evolutionary development out of RNA. But both are still part of the same coding system.

2. For more on this theme, see DNA is Everywhere from September 5, 2010.

3. See Where Are They? from July 6, 2014.

4. See “Potentially Habitable Planet Found Orbiting Star Closest to Sun” in National Geographic News, August 24, 2016.

5. See Voyager: The Interstellar Mission by NASA’s Jet Propulsion Laboratory.

Sunday, October 23, 2016

Investing in the Future

I’ve been chasing Moore’s Law1 for almost forty years now. The chase started back in the mid-1970s, when I began to get serious about my fiction writing, and for that I needed a good typewriter. The IBM Selectric I had at work was a wonderful machine, and I lusted after one for my personal use. I learned that IBM could outfit these machines with internal padding and a hush hood to deaden the clack! of the ball striking the platen, and this was necessary because I got up early in the morning to write and didn’t want to disturb my sleeping bride. They also made a model with a backspacing correction tape, which I could certainly use, because I was always a fumble-fingered typist but also a textual perfectionist who spent half my time cranking the page up and away from the platen to erase my mistakes.

Back then, you didn’t just walk into an IBM store—although such existed—and buy a typewriter. IBM was a business-to-business enterprise before anyone knew exactly what that meant. To buy a new Selectric that was set up the way I wanted, I had to make an appointment with an IBM sales representative to come to my home and order the machine to be specially manufactured for me, right down to platen width and paint color. I think I paid about seven hundred dollars for this typewriter and waited several weeks for it to be manufactured and delivered. That was a lot of money back then, but I was a serious writer and this was going to be a lifetime investment.

In the next four years, I probably put 80,000 words through that machine—or 120,000 if you count backspacing corrections. That word count comprised one complete novel manuscript and about half of another.2 But by then, about 1979, every day on my way to work at a new job in San Francisco, I passed a store called Computerland. One of my roommates in college had majored in computer science, and as a science fiction writer I was always fascinated by computers. So I stopped in and started asking questions. Could the machines do this? Could they do that? I knew there were computerized game consoles out in the world, but I wanted a real computer, not a one-trick pony. The salesman patiently explained that, yes, it could do whatever I wanted, so long as the machine was programmed for it.3

On that basis, and with thoughts of embarking on a new hobby—and maybe a new career—centered around computer programming, I bought an Apple II. It was the full-blown machine, with 16,000 bytes—essentially equivalent to characters—of read-only memory (ROM) for its operating system and another 48,000 bytes of random-access memory (RAM) for the programs I would write. Not having a spare television to use as a computer display, I bought a small monitor with a green-pixel-on-black screen. Not wanting to record and play my programs with a cassette tape, I bought a separate drive that could read 5.25-inch floppy disks with a capacity of 103,000 bytes. The whole setup cost me something like $2,500—much more than my fancy IBM typewriter—but it was an investment in learning a new and exciting business, programming for fun and profit. I even bought myself a subscription to Byte magazine and joined the Apple Users Group.

What I quickly discovered was that programming was easy for an English major to learn, because it involves both logical thinking and fetish-level attention to new rules of grammar and punctuation. But to write elegant programs that performed truly clever feats was a specialty all its own. I could make the machine do simple tasks in the BASIC language, and even dipped into the structured language Pascal. But the first time I saw a Pong game where the puck moved in a curve responding to a supposed gravity field, and I tried to parse and understand the coding involved, I discovered that I was years too late for getting in on the ground floor of professional computer programming. My best talent lay in telling stories rather than making pixels dance.

But along about this time I also discovered that the computer made a marvelous writing machine. It wasn’t linear, like a typewriter laying a track of words, line after line, moving down the page. It didn’t need a backspacing correction key and yards of expensive whiteout tape to fix my fumble fingers. It didn’t make me cut paper with scissors and tape it back together to move paragraphs around. While the word-processing software offered for the Apple II’s native operating system was fairly limited, an adapter card running its own Z80 microprocessor would let me use the CP/M system and WordStar—and that was high-powered stuff indeed. The one snag was the printer I would need to output my writing efforts. All the Apple models put faint, gray, dot-matrix characters on flimsy thermal paper, which no publisher would accept. So in addition to a new processor card and an expensive piece of software, I plunked down $5,000 for an NEC Spinwriter printer. It did not come with padding and a hush hood, and the machine-gun clatter drove my wife out of the apartment on the days I needed to print out a manuscript. But I was in the writing business at full power.

Since the 103,000-character capacity of the Apple floppies could hold barely one chapter of a novel, and I needed a second disk drive even for that, because the first drive was occupied running the WordStar program itself, I knew the Apple II, however many cards I added to it, was not long for this world. By that time I was ready to step up to an expert’s CompuPro passive-backplane S-100 system with more robust overall construction, including an Intel 8088 chip set and dual drives reading full-size, eight-inch, one-megabyte floppies. That was the machine which produced my first published novel. A couple of years later, in 1987—and again emulating the machines we now had at work—I gave up the CompuPro for an IBM AT-286 system running my first hard-disk drive, with all of twenty megabytes. I also traded the impact printer for a Hewlett-Packard LaserJet printer, which was faster and quieter.

Over the next fifteen years, I did not buy a new computer system. Instead, I replaced every piece and part of that original IBM three times over: three new motherboards with faster chips and more memory, two new cases and power supplies, two new monitors, four new keyboards, a new printer, dozens of different mice and trackballs, and newly added peripherals like a flatbed scanner and a sound system. I also upgraded my operating system, going from IBM-DOS to PC-DOS to OS/2 to Windows 95. I went from WordStar to Microsoft Word—where I’ve pretty much stayed, just to maintain readability across my various manuscripts and other projects. I added a ton of other software, though, giving me new capabilities in calculation, page layout, project management and scheduling, computer graphics, photo manipulation, audio and video creation, presentations, speech recognition, and any of those other things a computer can do so long as it has the right programming. And finally, after the last upgrade—having paid about three times the cost of a new desktop computer just to bring my now scratch-built home system up to current standards with separately purchased components—and having finally become disappointed with the current Windows systems—I moved over to the Apple world again with a new Mac Pro and all new software.

And that machine is now in its second generation and more powerful than ever. As of this writing, I’m pushing six individual processing cores accessing 32 billion bytes of memory, running the operating system off a solid-state drive for fast startup and processing, and linking the machine to assorted other hard drives for working storage and backup, each with about two trillion bytes of capacity. I use this system to write, format in HTML, and lay out the print-on-demand pages for my novels, maintain my author’s website, process photographs, record and edit the occasional video, maintain my music collection, and do whatever else a computer with the right software programs can do.

The point of my story is not to tell you how great my system is. Anyone with the need for processing speed, storage capacity, and communications capability can trot down to the Apple store, Best Buy, or wherever and purchase this stuff the same day right out of the store’s stock on hand. The point is that the technology in just this one area of writing and communicating has improved enormously over the last forty years.

My Apple II in 1979 had about the same random-access memory capacity as the IBM System 360 that ran the entire University Park campus at Penn State when I was there a decade earlier. My first twenty-megabyte hard drive in the IBM AT-286 had four times the storage capacity of the System 360’s clothes washer–sized drums. Today, my pocket telephone has more processing power, more memory, and more embedded software than any of these systems. And I don’t have to push keys and formulate commands in a specialized, coded language. Instead, I tap on little pictures and what I want pops up instantly.

Over the years, I have chased Moore’s law with a vengeance. To do that, I have had to buy, discard and buy anew, and then buy all over again all of the hardware, software, and peripherals comprising my home system. I have had to grapple with and learn a new technical language, teach myself new software and new functions, and change to new ways of working. I’ve done all this cheerfully, because each step represented an amazing increase in the speed, capability, and convenience of my main writing tool. And each time I have consigned to the closet or the recycle center a piece of equipment that just three or four years ago was shiny and new but now is old, slow, outmoded, and no longer supported—but never because it was faulty or broke down. As an early adopter of this technology, I know I’ve been making an “investment in the future.” If I and others like me didn’t buy into the next wave and support the continuing development of this technology, then the developments would stop coming, we would all enter a vast middle period of same-old-same-old, and the future would become a little less bright.

In the wider world, we have all seen the same turnover of technology in other contexts. Each advance in automotive design brings us cars that are more fuel efficient, lighter, and structurally safer, with more safety and convenience features like satellite navigation and rear-view cameras. We get household appliances that are more energy efficient and quieter, with greater capacity in a smaller footprint. We get cell phones that are less bulky and less expensive, offer greater coverage, and double as record players, note takers, appointment books, cameras, calorie trackers, and anything else you can do with a camera, microphone, GPS antenna, screen, and data stream. And in each case we buy into the next generation of technology not because the old model no longer works or works badly, but because it simply can’t keep up.

Is this process of creation and destruction a bad thing? It is, if you view each purchase in your life as I did that IBM Selectric typewriter: as a lifetime investment.4 You might buy good furniture that way, because the seating capacity and underlying structure of sofas and chairs hasn’t changed much in a hundred years, although the materials have certainly improved. But any tool that is susceptible to improvements in design, energy use, materials, and connectivity is now going to be subject to a process of continuing evolution. This is how nature improves on organic structures and capabilities. This is the course of technological innovation and obsolescence that Western Civilization has been following since the rise of scientific trailblazers like Newton and Descartes and inventors like Fulton, Edison, and Bell.

Sometimes, I think about stepping off this escalator to the future. I dream about writing with a really good fountain pen in a notebook filled with pages of creamy white paper. At age sixteen I wrote my very first novel that way, longhand, in pen, with the second draft pecked out on my grandfather’s upright Underwood typewriter with the glass insets. I typed on two sheets at once sandwiched with carbon paper in between. It’s a dream of returning to a simpler age of slow changes and eternal values.

But, damn, no! Erasing and correcting all that fumble-fingered typing on two copies with an erasing shield stuck in front of the carbon layer … Hell, no! Never again!

1. For our visitors from Alpha Centauri, Gordon Moore of Fairchild Semiconductor and later co-founder of Intel predicted in 1965 that the number of transistors that could be packed onto an integrated circuit would double every year, a rate he revised in 1975 to a doubling roughly every two years. What he meant was that computers and their component chips would keep getting exponentially smaller and more powerful, and as a corollary the cost of computing power and capability would go down. As of right now, the law is still in effect, although some predict that when circuit widths in complementary metal-oxide semiconductor (CMOS) transistors get down to about seven to five nanometers—a capability predicted to arrive sometime in the early 2020s—the shrinking will stop due to the vagaries of quantum mechanics. Thomas’s Law predicts that, by the time this happens, some new technology will likely have already made the transistor obsolete.
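
As a rough illustration of what that doubling rate implies over the forty years of this essay, here is the arithmetic, my own and deliberately simplified to the two-year rule:

    years_of_chasing = 40
    years_per_doubling = 2        # Moore's revised, two-year doubling period

    doublings = years_of_chasing / years_per_doubling
    growth = 2 ** doublings
    print(f"{doublings:.0f} doublings -> roughly {growth:,.0f}x the transistor count")

A million-fold increase in forty years is why the Apple II and the Mac Pro barely seem to belong to the same species of machine.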

2. I wrote two complete books and started several more before embarking on my first manuscript to be published, The Doomsday Effect, which came out in 1986. And even that novel took a wrong turn at the beginning and had to be completely rethought and rewritten before it could find a home with Baen Books. This is part of any author’s story: the “first novel” is almost never the first book you try to write. Those first, stillborn books are the process of learning the craft.

3. This was a bit of an exaggeration, of course. Any machine has built-in limitations, which is why a Volkswagen is not a Ferrari. But in essence what he said was true: computers simply run programs, and the program becomes the core of whatever the machine is supposed to be doing.

4. About a dozen years ago I took that IBM Selectric down to a used typewriter store and gave it to them, hoping it would find a good home. I had kept the machine only to fill out paper forms, and by then virtually every transaction in my life was online. That typewriter still worked perfectly, but I needed the desk space.

Sunday, October 16, 2016

A Child’s Science in Baseball

As a child, I was never much interested in baseball.1 My father always wanted to see the game when it was on television, and I would sit nearby moping, wanting to watch an old movie or teleplay—something that told an actual story. But I learned some of the basics because he would patiently try to explain them while I half-watched. I also played a few games in grade school—never good at it, always the last picked, and usually sent to play outfield in that part of the schoolyard that lay inside the edge of the woods. But still, I picked up a sense of the game.

What has stuck with me is that baseball depends on a child’s mystical sense of basic scientific properties. The game is rooted in them in a way matched by only a few other sports.

To begin, the baseball itself is endowed with a primitive energy, a kind of life force, that has nothing to do with the velocity of a pitcher’s throw or its impact in a glove or elsewhere on arrival. The ball is considered to be “live” and playable in certain parts of the field and at certain times during a play, and “dead” and out of play at other times.

The ball is live from the point at which it leaves the pitcher’s hand until it arrives securely in the catcher’s glove or within his2 possession and control, at which point it becomes dead for further play on that pitch. If this arc from pitcher to catcher is interrupted by the batter’s hitting the ball, then it remains live until someone on the fielding team, within the confines of the field, catches it on the fly and holds on to it, which ends not only the play on that pitch but the batter’s turn at bat. But if the player catching the ball drops or fumbles it, the ball remains live and in play.

If a fielder catches the hit ball in foul territory before it hits the ground, the catch likewise ends the at bat. However, if the ball first touches the ground outside the extended baselines, it becomes foul and out of play, but that ends play only on that pitch, and the batter gets another chance to hit. What happens to the ball after it clears first or third base on the infield is subject to other rules and conditions, mostly determined by where the ball ends up. Note that it is the position of the ball, not the player touching it, that conditions its status.

If the catcher fails to catch or hold on to the ball after the pitch, when the batter has not made contact and no runner is on base or in a position to advance, the ball is technically live but everyone simply ignores it, because it has no more use in play. However, if the batting team has one or more runners on base, then the catcher may throw the ball to stop them from advancing to the next base. And a fumbled ball remains fully live, and those runners may advance until the catcher recovers it and throws to one of his teammates to tag a runner out.

If the ball is hit and flies beyond the outer limits marked on the field between the two foul lines, it passes out of play, and the batter and all the men on base advance to home and score—a home run plus runs “batted in.” If the ball lands on the field and then bounces out of play, so that the outfielders cannot retrieve it, the ball is dead and the batter is limited to taking second base—a “ground rule double”—while any runners ahead of him may similarly advance only two bases.

What I’ve given here is just the briefest sketch of the most common states of play. Many others exist, such as what happens if the pitcher enters his “set” position and starts his motion to throw the ball home and then does something else—like moving his foot off the rubber strip on the mound or turning to throw out a runner at one of the bases—or makes any of about a dozen other errors in protocol. This is a “balk,” and every runner on base automatically advances one base. The batter does advance to first if he is hit by the pitched ball—but not if his bat has contacted the ball first.

This concept of a ball being alive or dead is not limited to baseball, its offshoot softball, and its English cousin cricket. Balls can be in play or out of play in games like tennis, squash and handball, badminton, basketball, and other contests arising from polite, rule-based play in the last few centuries. Even football—which I don’t consider very polite—has rules for when the ball, which in this case is not really a sphere, can be played and when it is dead.

The liveness of the ball can also contribute its power through a kind of electrical current to the things and people that touch it. For example, if a baseman—or any of the fielding team’s players—catches a live ball while standing on the base, or with at least one foot in contact with the base, or touches the base after catching the ball, then the energy of that live ball is transmitted to the base itself, and any runner forced to reach that base is ruled out. Similarly, if the baseman or another fielder catches the ball and then touches the runner with the ball or with the glove holding it before the runner reaches and touches the base, the runner is out. But if the player touches the runner with some other part of his body—an elbow or a shin, say—the runner is not out. And again, many rules apply to this process, mostly for the safety of the baseman and the runner, such as what line the runner must follow, whether the baseman can block the base path, and how the runner may slide into the base.

These various conditions emulate the closing of a circuit and transfer of energy. We see this concept of touch and transfer in simple children’s games like tag and, with a certain amount of physicality, in red rover. We also see the concept in the most cerebral of games. For example, in the formal play of chess, a player is considered to be finished with a move only when he takes his hand off the piece he has touched. And in the game of go, a player’s stones are considered to establish a “house”—an area of captured space surrounded by a kind of unbreakable force field—when they form connecting lines around a certain number of empty spaces into which the opponent cannot legally place his stones.

Games like these are made up of rules which usually start out simply and become more involved and complex as play matures. In their simplest state, however, and especially when played by children, those rules can parallel the human mind’s attempts to see, interpret, and understand the visible universe and then play according to that understanding. In this sense, games are an elemental form of education, shared communication, and obedience to forces outside the individual’s will—as well as being fun to play.

1. See also The Asymmetric Beauty of Baseball from September 14, 2014. I only became interested in the game when our home team, the San Francisco Giants, started winning the World Series in even-numbered years—a trend that everyone here thought would continue in 2016 (“Keep on BeliEven!”). Unfortunately, the team ended the 2016 National League Division Series by blowing a three-run lead to the Chicago Cubs in the top of the ninth. So close …

2. Throughout, I will use the masculine pronoun instead of my usual choice of “he or she,” “his or her” to represent both sexes. Yes, women play baseball and softball. Someday, I have no doubt, women will join the majors and play well, because baseball is a game of skill at throwing, catching, running, and paying attention to the state of play—not physical contact, like football and basketball—and these are tasks at which women can excel. But for now, at the time of this writing, major league play is still a man’s sport.

Sunday, October 9, 2016

Keep Them Turning Pages

Everyone agrees that if you want to write a good story, one that will attract and hold a reader’s attention, you have to give him or her a reason to keep turning the pages of your book. But what does this mean in terms of the craft? How does a writer do this? While I am no bestselling author, I’ve learned a few tricks over the years, and they seem to serve the purpose.

Don’t Tell All You Know

First, resist the urge to explain everything. Oh, you don’t want to make every detail and piece of action so obscure and mysterious that the reader is left confused and annoyed. But the writer shouldn’t feel compelled to tell all that he or she knows at the time of writing. Readers participate in a story because they want to find out what’s going on and what will happen next—that is the essence of storytelling. To offer some elements that are described and explained in detail, while others are left to be questioned, presumed, or imagined, is the power that the author, who is the owner of the story, has over the reader, who is its discoverer.

This applies even if you are not writing a mystery or spy story, where the heart of the plot is a big secret. Who committed the murder? Who stole the secret formula? In most novels, there is no aha! surprise to be explained at the end by a detective who is smarter than everyone else in the room. But in every story there still exists a secret to be learned. What has really happened? Why is this important? And what will happen next?

The author’s focus—his or her power of description and explanation at this point in the text—is like a searchlight probing a darkened landscape, or like a thief’s flashlight exploring a darkened room. In the reader’s mind, at the start of the story, there is only darkness. The writer should then choose, illuminate, and explain only those things that serve one of two purposes. The first is to reveal character and establish setting. The second is to advance the plot.

If it’s important for the reader to have a mental image of the character, then those details should be brought forward. For example, in my novel ME, Too: Loose in the Network, one of the main characters is described as being a tall, thin woman with short, blonde hair. These details become important when she is required to alter her appearance to impersonate someone else, and then the resulting confusion in her identity becomes the basis for a wrongful death. As a writer, I will generally throw in extra details like the shape of her face or the color of her eyes just to round out the image and not focus too narrowly on the important descriptives. But it is not necessary to explain at any point that the character has two arms and two legs unless losing one or the other might become a plot element.

Similarly, it helps to describe the story as being set in a particular place, say, Rome. And there the choice of what to describe—a famous landmark, or a favorite trattoria, or simply a crumbling wall—should help focus the reader’s attention on what the character on site would happen to notice and find interesting. If describing the interior of a character’s home, the details should establish the person socially, economically, or in some other dimension. Are the furnishings gaudy and expensive, or old and threadbare? Each of the details, like each of the book’s subordinate characters, must work to earn its place in the story. And just as it’s not necessary to say the character has two arms, it is wasted words to say the action takes place on Earth—unless, of course, the possibility of going somewhere else is going to become a plot point.

Plant a Curiosity Bomb

The second use of the writer’s focus—as a device for advancing the plot—can be used to establish the elements of mystery and wonder that will keep the reader turning pages. For example, in describing the threadbare room in a poor character’s house, the writer might leave in plain sight a piece of jewelry or a recognizable work of art. To fuss over it at that point in the story—to an extent that would require an explanation of why it’s there—might satisfy the reader’s immediate curiosity. But isn’t it better to show the reader an anomaly and then leave it unexplained? How the character acquired this valuable, what it means in his or her life, and how its existence or loss might affect the story—all of these possibilities need to stay alive and unanswered in the reader’s imagination. This creates a sense of anticipation and so an active participation in the story.

Of course, if the writer introduces such an anomaly, it must later be used and explained. The universe may be full of casual coincidences, but—to remain viable in the reader’s mind—a well-constructed story must close all of its loops. Otherwise, the reader will feel cheated and disappointed.

This is where the author must be a good and curious reader in his or her own pleasure reading. Only by participating in stories as an active and observant reader can a person get a sense of what other readers will find dull and unimportant, or important enough for establishing character and place but not likely to be relevant to the plot, or so damned curious and unexplained that the reader just knows it’s going to be important later—but how and why?

An author working in a specialized field, like fantasy or science fiction, faces a particular challenge in this creative use of detail. For example, many things on a spaceship might need to be explained—or better yet, alluded to in a way that seems to explain them. Is the air filtered and recycled, or bottled and dispensed? On long journeys, where does the food come from, and where do the human wastes go? Food out of freeze-dried pouches, wastes evacuated into space? Or everything through a matter recycler? What is the propulsive power, and how is the reaction mass, if any, stored and handled? These issues can satisfy the reader’s curiosity about the setting—or whet the appetite if the intended audience is made up of techno-junky gearheads. Not all of these details, however, are intended to become plot points. But some, particularly faults in the air or propulsion systems, might well be. In that case the author needs to know enough technology, and explain it well enough for the lay reader, so that a loose wire or a leaky valve will attract and hold the reader’s attention. The rest can be used as camouflage which—like the color of the character’s eyes—will keep the important detail from standing out as too obvious.

For another example, in my novel The Children of Possibility, the mechanics of this story’s peculiar forms of time travel play a major part in the plot. Some of the details, like the energy source driving the ships, are merely interesting and support the reader’s technical understanding. Other details—particularly the open framework of the ships and how vulnerable the occupants are to any malfunction—later become crucial to the plot structure.

This matter of camouflage can also become the subject of precise calculation on the author’s part. How much detail should he or she give in any one scene? How much to make tantalizingly relevant? How much to leave for the simple processes of character revelation and place setting? The answer is going to be different for every author and every book, derived from an internal calculus based on the story’s pace and dynamics. And here again, a writer who is also an avid reader will have the best feel for the underlying formula. Camouflage is the landscape, the visible background, over which that searchlight beam wanders.

Break Up the Action

Another key to creating anticipation and suspense is to avoid telling the story in neat and complete sequences. Instead, the writer can break up the action by ending a scene or chapter before it’s fully resolved. Of course, it doesn’t work to break off randomly, between punches thrown in the middle of a fight, between shot fired and shot received, or between lines of dialogue in the middle of an argument—or rather, it doesn’t work to do this more than once … or twice. If the author uses this technique repeatedly, the reader will know that its purpose is just to tease his or her sense of the pace and timing.1 And then the game, from the reader’s point of view, will be revealed and the excitement ends.

The best way to break the action is to conceive it as a series of apparently resolved vignettes in the first place. Each piece of action feels whole and complete in itself but does not bring the whole course of the story to a satisfying conclusion. After the first break, the reader discovers there is something more to come. And each apparent conclusion brings the pleasant anticipation that the action is not over yet.

In the novel Sunflowers, the sequence of various harmless actions—minor workplace accidents and errors—that eventually leads to a devastating fire in a coal-washing plant is broken up and told in rapidly building order, interspersed with the mundane actions of the characters who will have to respond to the fire. Later in the book, the story of a terrorist attack on the engineering company which is designing a new solar-power project is told over the span of several chapters. The separate steps of the terrorists preparing weapons and escape routes, scouting the site, making the initial entry, and conducting each stage of the assault are interspersed with the mundane but interesting work of the engineers as they discuss challenges and resolve issues with the project—and then with the reactions of one of the attack’s survivors.

This technique works best, again, when the author does not tell everything he or she knows at every point. The author simply presents the action without explaining its purpose and the viewpoint character’s overriding motives. The reader follows the sequences and builds up in his or her own mind what exactly is happening and what will—must—come next.

To conduct this kind of sequential action—and to provide a reason for breaking off at key points—it helps to tell the story from the viewpoints of separate characters. This is one reason I favor writing each scene from a single character’s perspective, rather than jumping around from one person’s head and set of ears and eyeballs to another’s in the course of a single scene. By controlling the point of view, I can have one character approach a door in one scene and another character—unknown to the first—crouching behind it in the next scene. The reader is the only person who knows the complete action and so experiences both anticipation and tension as it builds.2

For this reason, I like to write books with more than one main character. In First Citizen, which is told in the first person, the narrative switches back and forth between Granny Corbin, who first trains as a lawyer, becomes an industrialist, then a military leader and politician, and the roughneck Billy Birdsong, who over the course of the story attaches to Corbin as his bodyguard and closest confidant. Another of my novels, the two-volume Coming of Age, revolves around two main characters—a construction executive, John Praxis, and the lawyer who sues him in the opening chapter, Antigone Wells—both of whom suffer life-threatening illnesses and receive cellular-regeneration treatments that cumulatively will extend their lives for another century. In essence, this is a story that follows an ensemble cast of their family members, where dominant characters arise, play their part, and fade away in each section. By cutting between one character’s point of view and another’s in any of this action, I can break the movement of these stories into as many pieces as needed.

Begin with a Prologue

One way to break up the action is to start the reader with a scene or group of scenes that falls outside of the main story. This fragment of action can provide a trigger or inciting cause that leads to the story in the book, as the action against Hoover Dam sets up the federal project to build a solar power plant in orbit in Sunflowers.

The prologue should not only establish the need for the story but give the reader a dash of action and suspense that may be missing in the book’s first couple of chapters. In getting the general action going, the author usually must provide background, character definition, and place setting—activities that are not always conducive to exciting action. In this case, the prologue is a promise that, bear with me, this thing will pick up speed and snap your head pretty soon.

Or the prologue may preview the denouement, the final outcome or unraveling, of the story’s main plot. I am attempting something like this in the sequel to Children, another novel of time travel tentatively titled The House at the Crossroads. The prologue is actually a kind of epilogue, the last piece of the action. But since this is time travel, and one character’s point of view is always going to be either behind or ahead of another’s, I can get away with it.

Besides, tricking the reader with a logical or interpretive puzzle—at least in science fiction—is no sin, or not in my technique. I believe that the sort of readers who enter here want to be challenged, pushed up on their toes or rocked back on their heels, and have their mettle tested. Everyone else can go read a Harlequin romance.

Reveal a New Problem

Finally, as an alternative to breaking up a single piece of business, such as the story of an attack on an engineering office, into its separate and partially resolved stages, the author can use one piece of completed action to reveal and foreshadow a new problem that the characters must deal with in the next part of the book. This is not so much a new technique as just the basic premise of good storytelling, where one action leads organically to another.

I am having fun with this approach right now in The House at the Crossroads. This story traces the history of the time portal built into the Carrefour House hotel in London’s Seven Dials district in the first book. One set of characters comprises the young couple who will travel back in time from the eighth millennium to plant the seed that opens the time portal in an English country inn outside medieval London. Another group—the Jongleur Coel Rydin and his mechanical friend Cinquemain from the first book, The Children of Possibility—works to stop their venture and destroy the portal before it can be set in place. In each case, the action of one group precipitates a response and causes a new series of actions by the other. For example, just when the keepers of the house are prepared to travel to the fifteenth century, the destruction of their pathway into the past places them five hundred years earlier and on the other side of the continent. And so it goes, back and forth—or so I hope.

The point is that any story told in straightforward, linear order, from one point of view, with everything described and explained to the reader’s complete satisfaction the moment it occurs … is boring. The action may be exciting. The characters may be appealing. But nothing teases the reader’s imagination. Nothing is left to the reader’s interpretation and speculation, with the possibility of greater satisfaction or disappointment.

The best way to keep the reader turning pages is to make him or her wonder what’s going to happen next.

1. Too many television screenplays do this, and the breaks are always timed right before a commercial, to lure the audience back to the action. This kind of obvious sequencing gets tiresome.

2. This technique is, frankly, based on modern cinematic usage. I suppose that much of the way stories are now written could not have evolved if it were not for creative uses of the camera in film. Unlike stage plays—or the early Stanley Kubrick movies, with their extreme wide-angle shots—where all the actors move and interact at once in an open-sided room or space, the camera in a film can now follow one character and then another through the action. In its best uses, the camera can adopt the character’s internal viewpoint, cutting back and forth between the actor in the process of observing, and the action he or she is seeing, hearing, and—along with the audience—beginning to understand.

Sunday, October 2, 2016

Science Fantasy

For decades—oh, up until the mid-1980s or early ’90s, perhaps—there was a single genre in literature called “science fiction.”1 Also called “speculative fiction,” its stories dealt with themes and developments in the hard sciences: physics and chemistry, astronomy, geology, biology and evolution, and medicine, tempered with some of the softer areas of study like economics, political science, and anthropology. This literature dealt with the issues that human beings will have to face as we progress in the vast cycle of research, invention, development, and commercialization which began with Newton, Descartes, and the other practical scientists of the 17th century, in the period that opened into the Enlightenment, and will continue as long as there is a functioning Western Civilization.

Science fiction as a separate genre actually started with authors in the late 19th century like Jules Verne and H. G. Wells, who wrote about new technologies that were just coming over their horizon. Verne described travels and adventures under the sea by submarine, across Africa by aerial balloon, and under the skin of the Earth by exploring extinct volcanoes and subterranean passages. Wells examined biological experiments upon the human form and potential encounters with extraterrestrials, both by our going to visit them on the Moon and by their coming to visit us from Mars.

While science fiction was maturing and getting its popular legs in the mid-20th century, another form of speculative fiction was growing out of more natural or organic roots in folklore and legend. Authors like J. R. R. Tolkien, C. S. Lewis, and E. R. Eddison were telling stories of power and conflict based on magical premises and involving creatures that had human form but were different from human—elves, kobold dwarves, witches, and other figments of imagination. Their heroes and heroines were familiar with and often used magic.2 Fantasy became its own literary genre in which the unknown and unknowable power of myth and magic has greater influence than the known and definable power of science and technology.

A third genre, horror, has also grown out of roots in both of these fictional streams. Horror focuses on the negative effects, the willful opening of one’s eyes to the ugly side of science and legend—and then squeezing them shut again. The science roots of this genre go back to Mary Shelley’s Frankenstein, and predate the more hopeful stories of Verne and Wells. She describes what happens when a scientist ventures too far into the realm of pure science.3 On the fantasy side, horror probably starts with Bram Stoker’s Dracula, where a mythical and ageless creature doesn’t just coexist with human beings but feeds on them.

But put aside horror for the moment—a genre I sometimes read but never tried to write. Put aside, also, the realm of formal fantasy. I once was invited to collaborate on a book about wizards but declined, because I really know nothing about magic. It always seems to me to be too much like wishful thinking.

My head and my heart have always been with “hard” science fiction, primarily because I’m curious about the mechanics behind what we find in the world. That’s just one effect of being the son of a mechanical engineer. My talent as a technical writer was always to take a complicated process, explore it within my own imagination, and break it down into steps that anyone could follow. My talent as an employee communicator was to take the science and technology behind my current company’s products, find examples and analogies from everyday life, and explain these mysteries so that everyone from the accountants to the janitors could find in them something interesting.

My first published novel4 was The Doomsday Effect, about a micro black hole orbiting around and through the Earth, what it was doing to the planet, how it was discovered, and what a team of scientists and engineers could do to stop it. It was a story of pure science—well, except for the end, where I played fast and loose with a vial of antimatter. That stuff is more of a science fiction meme and theoretical substance than an actual material you can put in a bottle. But readers accept it as something we can manipulate, especially when the plot needs an awesome explosion or a powerful starship drive.

My second novel, First Citizen, was hardly science fiction at all—except that it portrayed an alternate history of the United States, including a major war in Central America, the collapse of the federal government through a rogue nuclear attack, and the rise of feuding despots in the manner of the civil wars of the late Roman Republic. I still managed to inject a healthy dose of technology, including the possibilities of mining municipal solid waste and advanced techniques of alternative warfare.

It was my third novel, ME: A Novel of Self-Discovery, that launched me into what I’ve since come to regard as a new genre, “science fantasy.” The premise of the book—and of its sequel, ME, Too: Loose in the Network—is that a piece of computer software written in a version of the Lisp programming language can be both small and large at the same time: small and agile enough to port from one computer operating system to another as a viral infiltrator and spy; large and complex enough to become a self-aware artificial intelligence with understanding, aspirations, and the possibility of a soul. This was not just bending the rules of science, like putting antimatter in a bottle, but throwing them out the window and using the trappings of computer technology to tell stories about a kind of sprite or a wood elf.

More recently, I’ve done it again with The Children of Possibility, and with its prequel (now in production) tentatively called The House at the Crossroads. Both deal with time travel—and not just a solitary inventor who creates a machine of gears and wheels that pushes itself forward through time, but two competing systems, alternate theories of mechanics and mathematics, and the societies that grow up using them for competing purposes. Since time travel, like antimatter or artificial intelligence, is the stuff of imagination based in physics and mathematics but not the occasion of everyday reality, like separating garbage into fuels and metals or fighting a war with remotely piloted drones, these stories are pure fantasy.

But—important disclaimer—I am neither a scientist nor an engineer. My formal training was in English literature, the old and dusty kind, and the origins of story going back to Homer and the ancient Greek playwrights. My family upbringing and my early jobs as an editor, first at a publisher of railroad histories and then at an engineering and construction company, nurtured in me a fascination with science and technology. But I never got the training to go deep into the weeds of mathematics and physics. So I never learned to be precise and pedantic about the limits between what is real and what must be imaginary.

In this I am not alone. Writers of “hard” science fiction have been skating on the edges of reality since the beginning. Verne’s Journey to the Center of the Earth imaginatively followed an adventurer through realms deep inside the planet, where even the geology of the author’s time would have suggested the pressures are too great to support vast caverns filled with alien biology. And Wells’s The Time Machine, without the benefit of modern mathematical theories and quantum mechanics, takes a visitor through the dimension of time—but not those of space—by purely mechanical means. Both writers were creating fantasies draped in no more than loose garlands of scientific terminology.

I would also consider any stories about faster-than-light travel as being a science fantasy—although the jury is still out on the subject of warp drives, wormholes, and whether space is pure emptiness or a structure that can be bent and pierced. Without hyper-light travel, all notions of interstellar trade, warfare, and empire recede into the realm of fantasy, a tale of medieval or Renaissance politics played out among the stars between vast duchies and provinces.

I still have a few stories that are based on hard science. They are classed as science fiction only because they haven’t happened yet. One such was my novel about building a solar power plant in orbit, Sunflowers, which is so mundane that I classify it on my author’s website as general fiction. Another is the two-volume Coming of Age, which follows two people from this decade who get to live remarkably extended lives through cellular regeneration technologies which are being developed right now.

As a writer who tries to be neat and precise—as well as honest in my dealings with the reader—I try to keep these two genres separate in my mind: science fiction for what is proven to be technically possible; science fantasy for what is impossible, or only slightly plausible, but great fun. It’s not always easy. Homer, in creating the first saga of Western Civilization, struggled with this impulse, too. And he kept stepping over the line, dealing with the gods as living characters and having his characters journey into the afterlife.

I guess that only means fantasy exists all around us and is part of the human condition.

1. I’m using “literature” here in its proper sense, the art of telling stories through the printed word. This includes everyone from Geoffrey Chaucer and Miguel de Cervantes to Dashiell Hammett and Ernest Hemingway and everyone who makes a living, or tries to, by writing books, short stories, and various species of poetry. The word literature has gotten a bad reputation these days, coming to mean the sort of dry, dusty, and obscure books that one is forced to read in English class, cannot understand, and so despises. There is also a genre of its own, “literary fiction,” which tries to emulate those dusty old books by taking out all the dangerous, daft, fun stuff and inserting long passages of introspection where nothing much happens. That’s not the kind of literature I mean here.
       However, I have tried my hand at literary fiction—see The Judge’s Daughter and its sequel, The Professor’s Mistress. These are stories about people in a certain place and time in the past, not the future. Although the books have a certain amount of insight and introspection, I promise that a lot still happens and some of it’s fun.

2. Yes, of course, as Arthur C. Clarke noted, “Any sufficiently advanced technology is indistinguishable from magic.” But in these stories the characters themselves usually remain uncertain about the origins of their magical power and stand in awe of its effects.

3. Yes, and there’s one of Clarke’s other laws: “The only way of discovering the limits of the possible is to venture a little way past them into the impossible.”

4. I wrote two other complete novels, worked on a number of fragmentary manuscripts, and dreamed up even more discarded ideas before I finally published this book. Every author has to learn and practice the art before going public. When you see a “first novel” that’s really good and makes headway with the critics, you can bet the author has two or three unpublishable experiments that predate it.