Sunday, October 23, 2016

Investing in the Future

I’ve been chasing Moore’s Law1 for almost forty years now. The chase started back in the mid-1970s, when I began to get serious about my fiction writing, and for that I needed a good typewriter. The IBM Selectric I had at work was a wonderful machine, and I lusted after one for my personal use. I learned that IBM could outfit these machines with internal padding and a hush hood to deaden the clack! of the ball striking the platen, and this was necessary because I got up early in the morning to write and didn’t want to disturb my sleeping bride. They also made a model with a backspacing correction tape, which I could certainly use, because I was always a fumble-fingered typist but also a textual perfectionist who spent half my time cranking the page up and away from the platen to erase my mistakes.

Back then, you didn’t just walk into an IBM store—although such existed—and buy a typewriter. IBM was a business-to-business enterprise before anyone knew exactly what that meant. To buy a new Selectric that was set up the way I wanted, I had to make an appointment with an IBM sales representative to come to my home and order the machine to be specially manufactured for me, right down to platen width and paint color. I think I paid about seven hundred dollars for this typewriter and waited several weeks for it to be manufactured and delivered. That was a lot of money back then, but I was a serious writer and this was going to be a lifetime investment.

In the next four years, I probably put 80,000 words through that machine—or 120,000 if you count backspacing corrections. That word count comprised one complete novel manuscript and about half of another.2 But by then, about 1979, every day on my way to work at a new job in San Francisco, I passed a store called Computerland. One of my roommates in college had majored in computer science, and as a science fiction writer I was always fascinated by computers. So I stopped in and started asking questions. Could the machines do this? Could they do that? I knew there were computerized game consoles out in the world, but I wanted a real computer, not a one-trick pony. The salesman patiently explained that, yes, it could do whatever I wanted, so long as the machine was programmed for it.3

On that basis, and with thoughts of embarking on a new hobby—and maybe a new career—centered around computer programming, I bought an Apple II. It was the full-blown machine, with 16,000 bytes—essentially equivalent to characters—of read-only memory (ROM) for its operating system and another 48,000 bytes of random-access memory (RAM) for the programs I would write. Not having a spare television to use as a computer display, I bought a small monitor with a green-pixel-on-black screen. Not wanting to record and play my programs with a cassette tape, I bought a separate drive that could read 5.25-inch floppy disks with a capacity of 103,000 bytes. The whole setup cost me something like $2,500—much more than my fancy IBM typewriter—but it was an investment in learning a new and exciting business, programming for fun and profit. I even bought myself a subscription to Byte magazine and joined the Apple Users Group.

What I quickly discovered was that programming was easy for an English major to learn, because it involves both logical thinking and fetish-level attention to new rules of grammar and punctuation. But to write elegant programs that performed truly clever feats was a specialty all its own. I could make the machine do simple tasks in the BASIC language, and even dipped into the structured language Pascal. But the first time I saw a Pong game where the puck moved in a curve responding to a supposed gravity field, and I tried to parse and understand the coding involved, I discovered that I was years too late for getting in on the ground floor of professional computer programming. My best talent lay in telling stories rather than making pixels dance.

But along about this time I also discovered that the computer made a marvelous writing machine. It wasn’t linear, like a typewriter laying a track of words, line after line, moving down the page. It didn’t need a backspacing correction key and yards of expensive whiteout tape to fix my fumble fingers. It didn’t require me to cut the pages apart with scissors and tape them back together to move paragraphs around. While the word-processing software offered for the Apple II’s native operating system was fairly limited, an adapter card with its own 8080-compatible microprocessor would let me use the CP/M system and WordStar—and that was high-powered stuff indeed. The one snag was the printer I would need to output my writing efforts. The Apple printers of the day put faint, gray, dot-matrix characters on flimsy thermal paper, which no publisher would accept. So in addition to a new processor card and an expensive piece of software, I plunked down $5,000 for an NEC Spinwriter printer. It did not come with padding and a hush hood, and the machine-gun clatter drove my wife out of the apartment on the days I needed to print out a manuscript. But I was in the writing business at full power.

Since the 103,000-character capacity of an Apple floppy could hold barely one chapter of a novel, and even that required a second disk drive, because the first one was occupied running the WordStar program itself, I knew the Apple II, with however many additional cards, was not long for this world. By that time I was ready to step up to an expert’s CompuPro passive-backplane S-100 system with more robust overall construction, including an Intel 8088 chip set and dual drives reading full-size, eight-inch, one-megabyte floppies. That was the machine which produced my first published novel. A couple of years later, in 1987—and again emulating the machines we now had at work—I gave up the CompuPro for an IBM AT-286 system running my first hard-disk drive, with all of twenty megabytes. I also traded the impact printer for a Hewlett-Packard LaserJet printer, which was faster and quieter.

Over the next fifteen years, I did not buy a new computer system. Instead, I replaced every piece and part of that original IBM three times over: three new motherboards with faster chips and more memory, two new cases and power supplies, two new monitors, four new keyboards, a new printer, dozens of different mice and trackballs, and newly added peripherals like a flatbed scanner and a sound system. I also upgraded my operating system, going from IBM-DOS to PC-DOS to OS/2 to Windows 95. I went from WordStar to Microsoft Word—where I’ve pretty much stayed, just to maintain readability across my various manuscripts and other projects. I added a ton of other software, though, giving me new capabilities in calculation, page layout, project management and scheduling, computer graphics, photo manipulation, audio and video creation, presentations, speech recognition, and any of those other things a computer can do so long as it has the right programming. And finally, after the last upgrade—having paid about three times the cost of a new desktop computer just to bring my now scratch-built home system up to current standards with separately purchased components—and having at last grown disappointed with the current Windows systems—I moved over to the Apple world again with a new Mac Pro and all new software.

And that machine is now in its second generation and more powerful than ever. As of this writing, I’m pushing six individual processing cores accessing 32 billion bytes of memory, running the operating system off a solid-state drive for fast startup and processing, and linking the machine to assorted other hard drives for working storage and backup, each with about two trillion bytes of capacity. I use this system to write, format in HTML, and lay out the print-on-demand pages for my novels, maintain my author’s website, process photographs, record and edit the occasional video, maintain my music collection, and do whatever else a computer with the right software programs can do.

The point of my story is not to tell you how great my system is. Anyone with the need for processing speed, storage capacity, and communications capability can trot down to the Apple store, Best Buy, or wherever and purchase this stuff the same day right out of the store’s stock on hand. The point is how greatly the technology in just this one area of writing and communicating has improved over the last forty years.

My Apple II in 1979 had about the same random-access memory capacity as the IBM System 360 that ran the entire University Park campus at Penn State when I was there a decade earlier. My first twenty-megabyte hard drive in the IBM AT-286 had four times the storage capacity of the System 360’s clothes washer–sized drums. Today, my pocket telephone has more processing power, more memory, and more embedded software than any of these systems. And I don’t have to push keys and formulate commands in a specialized, coded language. Instead, I tap on little pictures and what I want pops up instantly.

Over the years, I have chased Moore’s law with a vengeance. To do that, I have had to buy, discard and buy anew, and then buy all over again all of the hardware, software, and peripherals comprising my home system. I have had to grapple with and learn a new technical language, teach myself new software and new functions, and change to new ways of working. I’ve done all this cheerfully, because each step represented an amazing increase in the speed, capability, and convenience of my main writing tool. And each time I have consigned to the closet or the recycle center a piece of equipment that just three or four years ago was shiny and new but now is old, slow, outmoded, and no longer supported—but never because it was faulty or broke down. As an early adopter of this technology, I know I’ve been making an “investment in the future.” If I and others like me didn’t buy into the next wave and support the continuing development of this technology, then the developments would stop coming, we would all enter a vast middle period of same-old-same-old, and the future would become a little less bright.

In the wider world, we have all seen the same turnover of technology in other contexts. Each advance in automotive design brings us cars that are more fuel efficient, lighter, and structurally safer, with more safety and convenience features like satellite navigation and rear-view cameras. We get household appliances that are more energy efficient, quieter, with greater capacity in a smaller footprint. We get cell phones that are less bulky and less expensive, offer greater coverage, and double as music players, note takers, appointment books, cameras, calorie trackers, and anything else you can do with a camera, microphone, GPS antenna, screen, and data stream. And in each case we buy into the next generation of technology not because the old model no longer works or works badly, but because it simply can’t keep up.

Is this process of creation and destruction a bad thing? It is, if you view each purchase in your life as I did that IBM Selectric typewriter: as a lifetime investment.4 You might buy good furniture that way, because the seating capacity and underlying structure of sofas and chairs haven’t changed much in a hundred years, although the materials have certainly improved. But any tool that is susceptible to improvements in design, energy use, materials, and connectivity is now going to be subject to a process of continuing evolution. This is how nature improves on organic structures and capabilities. This is the course of technological innovation and obsolescence that Western Civilization has been following since the rise of scientific trailblazers like Newton and Descartes and inventors like Fulton, Edison, and Bell.

Sometimes, I think about stepping off this escalator to the future. I dream about writing with a really good fountain pen in a notebook filled with pages of creamy white paper. At age sixteen I wrote my very first novel that way, longhand, in pen, with the second draft pecked out on my grandfather’s upright Underwood typewriter with the glass insets. I typed on two sheets at once sandwiched with carbon paper in between. It’s a dream of returning to a simpler age of slow changes and eternal values.

But, damn, no! Erasing and correcting all that fumble-fingered typing on two copies with an erasure shield stuck in front of the carbon layer … Hell, no! Never again!

1. For our visitors from Alpha Centauri, Gordon Moore of Fairchild Semiconductor and later co-founder of Intel observed in 1965 that the number of transistors that could be packed onto an integrated circuit was doubling every year, a pace he revised in 1975 to a doubling roughly every two years. What he meant was that computers and their component chips would keep getting exponentially smaller and more powerful, and as a corollary the cost of computing power and capability would go down. As of right now, the law is still in effect, although some predict that when circuit widths in complementary metal-oxide semiconductor (CMOS) transistors get down to about five to seven nanometers—a capability predicted to arrive sometime in the early 2020s—the shrinking will stop due to the vagaries of quantum mechanics. Thomas’s Law predicts that, by the time this happens, some new technology will likely have already made the transistor obsolete.
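
       For anyone who wants to see the arithmetic behind that doubling, here is a minimal sketch in Python. It is purely illustrative: it assumes a strict two-year doubling period and a 1971 baseline of about 2,300 transistors (the count usually quoted for Intel’s first microprocessor), neither of which comes from the essay itself, and it simply projects the curve forward.

           # Illustrative only: project transistor counts under an assumed strict
           # two-year doubling rule from an assumed 1971 baseline of 2,300.
           def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
               return base_count * 2 ** ((year - base_year) / doubling_years)

           for year in (1971, 1979, 1987, 2001, 2016):
               print(year, f"{transistors(year):,.0f}")

       Run as written, it prints a projected count for each of those years; the real numbers wobble around the trend line, but the shape of the curve, a doubling stacked on a doubling, is the whole point.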

2. I wrote two complete books and started several more before embarking on my first manuscript to be published, The Doomsday Effect, which came out in 1986. And even that novel took a wrong turn at the beginning and had to be completely rethought and rewritten before it could find a home with Baen Books. This is part of any author’s story: the “first novel” is almost never the first book you try to write. Those first, stillborn books are the process of learning the craft.

3. This was a bit of an exaggeration, of course. Any machine has built-in limitations, which is why a Volkswagen is not a Ferrari. But in essence what he said was true: computers simply run programs, and the program becomes the core of whatever the machine is supposed to be doing.

4. About a dozen years ago I took that IBM Selectric down to a used typewriter store and gave it to them, hoping it would find a good home. I had kept the machine only to fill out paper forms, and by then virtually every transaction in my life was online. That typewriter still worked perfectly, but I needed the desk space.

Sunday, October 16, 2016

A Child’s Science in Baseball

As a child, I was never much interested in baseball.1 My father always wanted to see the game when it was on television, and I would sit nearby moping, wanting to watch an old movie or teleplay—something that told an actual story. But I learned some of the basics because he would patiently try to explain them while I half-watched. I also played a few games in grade school—never good at it, always the last picked, and usually sent to play outfield in that part of the schoolyard that lay inside the edge of the woods. But still, I picked up a sense of the game.

What has stuck with me is that baseball depends on a child’s mystical sense of basic scientific properties. The game is rooted in them in a way that only a few other sports share.

To begin, the baseball itself is endowed with a primitive energy, a kind of life force, that has nothing to do with the velocity of a pitcher’s throw or its impact in a glove or elsewhere on arrival. The ball is considered to be “live” and playable in certain parts of the field and at certain times during a play, and “dead” and out of play at other times.

The ball is live or “fair” from the point at which it leaves the pitcher’s hand until it arrives securely in the catcher’s glove or within his2 possession and control, at which point it becomes dead for further play on that pitch. If this arc from pitcher to catcher is interrupted by the batter’s hitting the ball, then it remains live until someone on the opposing team within the confines of the field catches and holds on to it, in which case it also becomes dead and ends not only the play on that pitch but the batter’s turn at bat. But if the player catching the ball drops or fumbles it, the ball remains live and in play.

If an opposing player catches the hit ball in foul territory before it hits the ground, it is also dead and ends the at bat. However, if the ball first touches the ground outside the extended baselines, it becomes foul and out of play, but it ends play only on that pitch, and the batter gets another chance to hit. What happens to the ball after it clears first or third base on the infield is subject to other rules and conditions, mostly determined by where the ball ends up. Note that it is the position of the ball, not the player touching it, that conditions its status.

If the catcher fails to catch or hold on to the ball after the pitch, when the batter has failed to contact it and no member of the opposing team is on base or in a position to advance, the ball is technically live but everyone simply ignores it, because it has no more use in play. However, if the opposing team has one or more members on base, then the catcher may throw it to stop them from advancing to the next base. And a fumbled ball remains fully live and those players may advance until the catcher recovers it and throws to one of his teammates to tag a player out.

If the ball is hit and lands beyond the outer limits marked on the field between the two foul lines, it remains live but unplayable, and all the men on base may advance to home and score—a home run plus runs “batted in.” If the ball lands on the field and then bounces out of play, so that the outfielders cannot retrieve it, the ball remains live but the batter is limited to taking second base—a “ground rule double”—while any runners ahead of him may similarly advance only two bases.

What I’ve given here is just the briefest sketch of the most common states of play. Many others exist, such as what happens if the pitcher enters his “set” position and starts his windup to throw the ball home and then does something else—like moving his foot off the rubber strip on the mound or turning to throw out a runner at one of the bases—or makes any of about a dozen other errors in protocol. This is a “balk,” and each runner automatically advances one base. The batter, for his part, advances to first base if he is hit by the pitched ball—but not if his bat has contacted the ball first.
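
A programmer might notice that this live-or-dead business behaves like a small state machine. The sketch below, in Python, is only my own toy illustration of the idea, not the rulebook: the event names are invented for the example, and it covers just the handful of transitions described above.

    # Toy model of the "live ball" idea sketched above. Illustrative only;
    # the event names are invented and the real rulebook is far more involved.
    LIVE, DEAD = "live", "dead"

    def next_state(state, event):
        if state == DEAD:
            # A dead ball comes back into play only with the next pitch.
            return LIVE if event == "pitch_delivered" else DEAD
        # A live ball dies when it is caught and held, or when it first
        # touches the ground in foul territory; a drop or fumble keeps it live.
        if event in ("caught_and_held", "foul_touches_ground"):
            return DEAD
        return LIVE

    state = DEAD
    for event in ("pitch_delivered", "hit_in_play", "fielder_fumbles", "caught_and_held"):
        state = next_state(state, event)
        print(event, "->", state)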

This concept of a ball being alive or dead is not limited to baseball, its offshoot softball, and its English predecessor cricket. Balls can be in play or out of play in games like tennis, squash and handball, badminton, basketball, and other contests arising from polite, rule-based play in the last few centuries. Even football—which I don’t consider very polite—has rules for when the ball, which in this case is not really a sphere, can be played and when it is dead.

The liveness of the ball can also contribute its power through a kind of electrical current to the things and people that touch it. For example, if a baseman—or any of the opposing team’s players—catches a live ball while standing on the base, or with at least one foot in contact with the base, or touches the base after catching the ball, then the energy of that live ball is transmitted to the base itself, and any runner trying to reach that base is ruled out. Similarly, if the baseman or another opposing player catches the ball and then touches the runner with the ball or his gloved hand before the runner reaches and touches the base, the runner is out. But if the player touches the runner with some other part of his body—an elbow or a shin, say—the runner is not out. And again, many rules apply to this process, mostly for the safety of the baseman and the runner, such as what line the runner must follow, whether the baseman can block the base path, and how the runner may slide into the bag.

These various conditions emulate the closing of a circuit and transfer of energy. We see this concept of touch and transfer in simple children’s games like tag and, with a certain amount of physicality, in red rover. We also see the concept in the most cerebral of games. For example, in the formal play of chess, a player is considered to be finished with a move only when he takes his hand off the piece he has touched. And in the game of go, a player’s stones are considered to establish a “house”—an area of captured space surrounded by a kind of unbreakable force field—when they form connecting lines around a certain number of empty spaces into which the opponent cannot legally place his stones.

Games like these are made up of rules which usually start out simply and become more involved and complex as play matures. In their simplest state, however, and especially when played by children, those rules can parallel the human mind’s attempts to see, interpret, and understand the visible universe and then play according to that understanding. In this sense, games are an elemental form of education, shared communication, and obedience to forces outside the individual’s will—as well as being fun to play.

1. See also The Asymmetric Beauty of Baseball from September 14, 2014. I only became interested in the game when our home team, the San Francisco Giants, started winning the World Series in even-numbered years—a trend that everyone here thought would continue in 2016 (“Keep on BeliEven!”). Unfortunately, the team ended the 2016 National League Division Series by blowing a three-run lead to the Chicago Cubs in the top of the ninth. So close …

2. Throughout, I will use the masculine pronoun instead of my usual choice of “he or she,” “his or her” to represent both sexes. Yes, women play baseball and softball. Someday, I have no doubt, women will join the majors and play well, because baseball is a game of skill at throwing, catching, running, and paying attention to the state of play—not physical contact, like football and basketball—and these are tasks at which women can excel. But for now, at the time of this writing, major league play is still a man’s sport.

Sunday, October 9, 2016

Keep Them Turning Pages

Everyone agrees that if you want to write a good story, one that will attract and hold a reader’s attention, you have to give him or her a reason to keep turning the pages of your book. But what does this mean in terms of the craft? How does a writer do this? While I am no bestselling author, I’ve learned a few tricks over the years, and they seem to serve the purpose.

Don’t Tell All You Know

First, resist the urge to explain everything. Oh, you don’t want to make every detail and piece of action so obscure and mysterious that the reader is left confused and annoyed. But the writer shouldn’t feel compelled to tell all that he or she knows at the time of writing. Readers participate in a story because they want to find out what’s going on and what will happen next—that is the essence of storytelling. To describe and explain some elements in detail, while leaving others to be questioned, presumed, or imagined, is the power that the author, who owns the story, holds over the reader, who discovers it.

This applies even if you are not writing a mystery or spy story, where the heart of the plot is a big secret. Who committed the murder? Who stole the secret formula? In most novels, there is no aha! surprise to be explained at the end by a detective who is smarter than everyone else in the room. But in every story there still exists a secret to be learned. What has really happened? Why is this important? And what will happen next?

The author’s focus—his or her power of description and explanation at this point in the text—is like a searchlight probing a darkened landscape, or like a thief’s flashlight exploring a darkened room. In the reader’s mind, at the start of the story, there is only darkness. The writer should then choose, illuminate, and explain only those things that serve one of two purposes. The first is to reveal character and establish setting. The second is to advance the plot.

If it’s important for the reader to have a mental image of the character, then those details should be brought forward. For example, in my novel ME, Too: Loose in the Network, one of the main characters is described as being a tall, thin woman with short, blonde hair. These details become important when she is required to alter her appearance to impersonate someone else, and then the resulting confusion in her identity becomes the basis for a wrongful death. As a writer, I will generally throw in extra details like the shape of her face or the color of her eyes just to round out the image and not focus too narrowly on the important descriptives. But it is not necessary to explain at any point that the character has two arms and two legs unless losing one or the other might become a plot element.

Similarly, it helps to describe the story as being set in a particular place, say, Rome. And there the choice of what to describe—a famous landmark, or a favorite trattoria, or simply a crumbling wall—should help focus the reader’s attention on what the character on site would happen to notice and find interesting. If describing the interior of a character’s home, the details should establish the person socially, economically, or in some other dimension. Are the furnishings gaudy and expensive, or old and threadbare? Each of the details, like each of the book’s subordinate characters, must work to earn its place in the story. And just as it’s not necessary to say the character has two arms, it is wasted words to say the action takes place on Earth—unless, of course, the possibility of going somewhere else is going to become a plot point.

Plant a Curiosity Bomb

The second use of the writer’s focus—as a device for advancing the plot—is to establish the elements of mystery and wonder that will keep the reader turning pages. For example, in describing the threadbare room in a poor character’s house, the writer might leave in plain sight a piece of jewelry or a recognizable work of art. To fuss over it at that point in the story—to an extent that would require an explanation of why it’s there—might satisfy the reader’s immediate curiosity. But isn’t it better to show the reader an anomaly and then leave it unexplained? How the character acquired this valuable, what it means in his or her life, and how its existence or loss might affect the story—all of these possibilities need to stay alive and unanswered in the reader’s imagination. This creates a sense of anticipation and so an active participation in the story.

Of course, if the writer introduces such an anomaly, it must later be used and explained. The universe may be full of casual coincidences, but—to remain viable in the reader’s mind—a well-constructed story must close all of its loops. Otherwise, the reader will feel cheated and disappointed.

This is where the author must be a good and curious reader in his or her own reading for pleasure. Only by participating in stories as an active and observant reader can a person get a sense of what other readers will find dull and unimportant, or important enough for establishing character and place but not likely to be relevant to the plot, or so damned curious and unexplained that the reader just knows it’s going to be important later—but how and why?

An author working in a specialized field, like fantasy or science fiction, faces a particular challenge in this creative use of detail. For example, many things on a spaceship might need to be explained—or better yet, alluded to in a way that seems to explain them. Is the air filtered and recycled, or bottled and dispensed? On long journeys, where does the food come from, and where do the human wastes go? Food out of freeze-dried pouches, wastes evacuated into space? Or everything through a matter recycler? What is the propulsive power, and how is the reaction mass, if any, stored and handled? These issues can satisfy the reader’s curiosity about the setting—or whet the appetite if the intended audience is made up of techno-junky gearheads. Not all of these details, however, are intended to become plot points. But some, particularly faults in the air or propulsion systems, might well be. In that case the author needs to know enough technology, and explain it well enough for the lay reader, so that a loose wire or a leaky valve will attract and hold the reader’s attention. The rest can be used as camouflage which—like the color of the character’s eyes—will keep the important detail from standing out as too obvious.

For another example, in my novel The Children of Possibility, the mechanics of this story’s peculiar forms of time travel play a major part in the plot. Some of the details, like the energy source driving the ships, are merely interesting and support the reader’s technical understanding. Other details—particularly the open framework of the ships and how vulnerable the occupants are to any malfunction—later become crucial to the plot structure.

This matter of camouflage can also become the subject of precise calculation on the author’s part. How much detail should he or she give in any one scene? How much to make tantalizingly relevant? How much to leave for the simple processes of character revelation and place setting? The answer is going to be different for every author and every book, derived from an internal calculus based on the story’s pace and dynamics. And here again, a writer who is also an avid reader will have the best feel for the underlying formula. Camouflage is the landscape, the visible background, over which that searchlight beam wanders.

Break Up the Action

Another key to creating anticipation and suspense is to avoid telling the story in neat and complete sequences. Instead, the writer can break up the action by ending a scene or chapter before it’s fully resolved. Of course, it doesn’t work to break off randomly, between punches thrown in the middle of a fight, between shot fired and shot received, or between lines of dialogue in the middle of an argument—or rather, it doesn’t work to do this more than once … or twice. If the author uses this technique repeatedly, the reader will know that its purpose is just to tease his or her sense of the pace and timing.1 And then the game, from the reader’s point of view, will be revealed and the excitement will end.

The best way to break the action is to conceive it as a series of apparently resolved vignettes in the first place. Each piece of action feels whole and complete in itself but does not bring the whole course of the story to a satisfying conclusion. After the first break, the reader discovers there is something more to come. And each apparent conclusion brings the pleasant anticipation that the action is not over yet.

In the novel Sunflowers, the sequence of various harmless actions—minor workplace accidents and errors—which eventually leads to a devastating fire in a coal-washing plant is broken up and told in rapidly building order, interspersed with the mundane actions of the characters who will have to respond to the fire. Later in the book, the story of a terrorist attack on the engineering company which is designing a new solar-power project is told over the span of several chapters. The separate steps of the terrorists preparing weapons and escape routes, scouting the site, making the initial entry, and conducting each stage of the assault are interspersed with the mundane but interesting work of the engineers as they discuss challenges and resolve issues with the project—and then with the reactions of one of the attack’s survivors.

This technique works best, again, when the author does not tell everything he or she knows at every point. The author simply presents the action without explaining its purpose and the viewpoint character’s overriding motives. The reader follows the sequences and builds up in his or her own mind what exactly is happening and what will—must—come next.

To conduct this kind of sequential action—and to provide a reason for breaking off at key points—it helps to tell the story from the viewpoints of separate characters. This is one reason I favor writing each scene from a single character’s perspective, rather than jumping around from one person’s head and set of ears and eyeballs to another’s in the course of a single scene. By controlling the point of view, I can have one character approach a door in one scene and another character—unknown to the first—crouching behind it in the next scene. The reader is the only person who knows the complete action and so experiences both anticipation and tension as it builds.2

For this reason, I like to write books with more than one main character. In First Citizen, which is told in the first person, the narrative switches back and forth between Granny Corbin, who trains first as a lawyer and then becomes an industrialist, a military leader, and a politician, and the roughneck Billy Birdsong, who over the course of the story attaches himself to Corbin as his bodyguard and closest confidant. Another of my novels, the two-volume Coming of Age, revolves around two main characters—a construction executive, John Praxis, and the lawyer who sues him in the opening chapter, Antigone Wells—both of whom suffer life-threatening illnesses and receive cellular-regeneration treatments that cumulatively will extend their lives for another century. In essence, this is a story that follows an ensemble cast of their family members, where dominant characters arise, play their part, and fade away in each section. By cutting between one character’s point of view and another’s in any of this action, I can break the movement of these stories into as many pieces as needed.

Begin with a Prologue

One way to break up the action is to start the reader with a scene or group of scenes that falls outside of the main story. This fragment of action can provide a trigger or inciting cause that leads to the story in the book, as the action against Hoover Dam sets up the federal project to build a solar power plant in orbit in Sunflowers.

The prologue should not only establish the need for the story but give the reader a dash of action and suspense that may be missing in the book’s first couple of chapters. In getting the general action going, the author usually must provide background, character definition, and place setting—activities that are not always conducive to exciting action. In this case, the prologue is a promise that, bear with me, this thing will pick up speed and snap your head pretty soon.

Or the prologue may preview the denouement, the final outcome or unraveling, of the story’s main plot. I am attempting something like this in the sequel to Children, another novel of time travel tentatively titled The House at the Crossroads. The prologue is actually a kind of epilogue, the last piece of the action. But since this is time travel, and one character’s point of view is always going to be either behind or ahead of another’s, I can get away with it.

Besides, tricking the reader with a logical or interpretive puzzle—at least in science fiction—is no sin, or not in my technique. I believe that the sort of readers who enter here want to be challenged, pushed up on their toes or rocked back on their heels, and have their mettle tested. Everyone else can go read a Harlequin romance.

Reveal a New Problem

Finally, as an alternative to breaking up a single piece of business, such as the story of an attack on an engineering office, into its separate and partially resolved stages, the author can use one piece of completed action to reveal and foreshadow a new problem that the characters must deal with in the next part of the book. This is not so much a new technique as just the basic premise of good storytelling, where one action leads organically to another.

I am having fun with this approach right now in The House at the Crossroads. This story traces the history of the time portal built into the Carrefour House hotel in London’s Seven Dials district in the first book. One set of characters comprises the young couple who will travel back in time from the eighth millennium to plant the seed that opens the time portal in an English country inn outside medieval London. Another group—the Jongleur Coel Rydin and his mechanical friend Cinquemain from the first book, The Children of Possibility—works to stop their venture and destroy the portal before it can be set in place. In each case, the action of one group precipitates a response and causes a new series of actions by the other. For example, just when the keepers of the house are prepared to travel to the fifteenth century, the destruction of their pathway into the past places them five hundred years earlier and on the other side of the continent. And so it goes, back and forth—or so I hope.

The point is that any story told in straightforward, linear order, from one point of view, with everything described and explained to the reader’s complete satisfaction the moment it occurs … is boring. The action may be exciting. The characters may be appealing. But nothing teases the reader’s imagination. Nothing is left to the reader’s interpretation and speculation, with the possibility of greater satisfaction or disappointment.

The best way to keep the reader turning pages is to make him or her wonder what’s going to happen next.

1. Too many television screenplays do this, and the breaks are always timed right before a commercial, to lure the audience back to the action. This kind of obvious sequencing gets tiresome.

2. This technique is, frankly, based on modern cinematic usage. I suppose that much of the way stories are now written could not have evolved if it were not for creative uses of the camera in film. Unlike stage plays—or the early Stanley Kubrick movies, with their extreme wide-angle shots—where all the actors move and interact at once in an open-sided room or space, the camera in a film can now follow one character and then another through the action. In its best uses, the camera can adopt the character’s internal viewpoint, cutting back and forth between the actor in the process of observing, and the action he or she is seeing, hearing, and—along with the audience—beginning to understand.

Sunday, October 2, 2016

Science Fantasy

For decades—oh, up until the mid-1980s or early ’90s, perhaps—there was a single genre in literature called “science fiction.”1 Also called “speculative fiction,” its stories dealt with themes and developments in the hard sciences: physics and chemistry, astronomy, geology, biology and evolution, and medicine, tempered with some of the softer areas of study like economics, political science, and anthropology. This literature dealt with the issues that human beings will have to face as we progress in the vast cycle of research, invention, development, and commercialization which began with Newton, Descartes, and the other practical scientists of the 17th century in the period called “the Enlightenment” and will continue as long as there is a functioning Western Civilization.

Science fiction as a separate genre actually started with authors in the late 19th century like Jules Verne and H. G. Wells, who wrote about new technologies that were just coming over their horizon. Verne described travels and adventures under the sea by submarine, across Africa by aerial balloon, and under the skin of the Earth by exploring extinct volcanoes and subterranean passages. Wells examined biological experiments upon the human form and potential encounters with extraterrestrials, both by our going to visit them on the Moon and by their coming to visit us from Mars.

While science fiction was maturing and getting its popular legs in the mid-20th century, another form of speculative fiction was growing out of more natural or organic roots in folklore and legend. Authors like J. R. R. Tolkien, C. S. Lewis, and E. R. Eddison were telling stories of power and conflict based on magical premises and involving creatures that had human form but were different from human—elves, kobold dwarves, witches, and other figments of imagination. Their heroes and heroines were familiar with and often used magic.2 Fantasy became its own literary genre in which the unknown and unknowable power of myth and magic has greater influence than the known and definable power of science and technology.

A third genre, horror, has also grown out of roots in both of these fictional streams. Horror focuses on the negative effects, the willful opening of one’s eyes to the ugly side of science and legend—and then squeezing them shut again. The science roots of this genre go back to Mary Shelley’s Frankenstein, and predate the more hopeful stories of Verne and Wells. She describes what happens when a scientist ventures too far into the realm of pure science.3 On the fantasy side, horror probably starts with Bram Stoker’s Dracula, where a mythical and ageless creature doesn’t just coexist with human beings but feeds on them.

But put aside horror for the moment—a genre I sometimes read but never tried to write. Put aside, also, the realm of formal fantasy. I once was invited to collaborate on a book about wizards but declined, because I really know nothing about magic. It always seems to me to be too much like wishful thinking.

My head and my heart have always been with “hard” science fiction, primarily because I’m curious about the mechanics behind what we find in the world. That’s just one effect of being the son of a mechanical engineer. My talent as a technical writer was always to take a complicated process, explore it within my own imagination, and break it down into steps that anyone could follow. My talent as an employee communicator was to take the science and technology behind my current company’s products, find examples and analogies from everyday life, and explain these mysteries so that everyone from the accountants to the janitors could find in them something interesting.

My first published novel4 was The Doomsday Effect, about a micro black hole orbiting around and through the Earth, what it was doing to the planet, how it was discovered, and what a team of scientists and engineers could do to stop it. It was a story of pure science—well, except for the end, where I played fast and loose with a vial of antimatter. That stuff is more of a science fiction meme than an actual material you can put in a bottle. But readers accept it as something we can manipulate, especially when the plot needs an awesome explosion or a powerful starship drive.

My second novel, First Citizen, was hardly science fiction at all—except that it portrayed an alternate history of the United States, including a major war in Central America, the collapse of the federal government through a rogue nuclear attack, and the rise of feuding despots in the manner of the civil wars of the late Roman Republic. I still managed to inject a healthy dose of technology, including the possibilities of mining municipal solid waste and advanced techniques of alternative warfare.

It was my third novel, ME: A Novel of Self-Discovery, that launched me into what I’ve since come to regard as a new genre, “science fantasy.” The premise of the book—and of its sequel, ME, Too: Loose in the Network—is that a piece of computer software written in a version of the Lisp programming language can be both small and large at the same time: small and agile enough to port from one computer operating system to another as a viral infiltrator and spy; large and complex enough to become a self-aware artificial intelligence with understanding, aspirations, and the possibility of a soul. This was not just bending the rules of science, like putting antimatter in a bottle, but throwing them out the window and using the trappings of computer technology to tell stories about a kind of sprite or a wood elf.

More recently, I’ve done it again with The Children of Possibility, and with its prequel (now in production) tentatively called The House at the Crossroads. Both deal with time travel—and not just a solitary inventor who creates a machine of gears and wheels that pushes itself forward through time, but two competing systems, alternate theories of mechanics and mathematics, and the societies that grow up using them for competing purposes. Since time travel, like antimatter or artificial intelligence, is the stuff of imagination based in physics and mathematics but not the occasion of everyday reality, like separating garbage into fuels and metals or fighting a war with remotely piloted drones, these stories are pure fantasy.

But—important disclaimer—I am neither a scientist nor an engineer. My formal training was in English literature, the old and dusty kind, and the origins of story going back to Homer and the ancient Greek playwrights. My family upbringing and my early jobs as an editor, first at a publisher of railroad histories and then at an engineering and construction company, nurtured in me a fascination with science and technology. But I never got the training to go deep into the weeds of mathematics and physics. So I never learned to be precise and pedantic about the limits between what is real and what must be imaginary.

In this I am not alone. Writers of “hard” science fiction have been skating on the edges of reality since the beginning. Verne’s Journey to the Center of the Earth imaginatively followed an adventurer through realms deep inside the planet, where even the geology of the author’s time would have suggested the pressures are too great to support vast caverns filled with alien biology. And Wells’s The Time Machine, without the benefit of modern mathematical theories and quantum mechanics, takes a visitor through the dimension of time—but not those of space—by purely mechanical means. Both writers were creating fantasies draped in no more than loose garlands of scientific terminology.

I would also consider any stories about faster-than-light travel to be science fantasy—although the jury is still out on the subject of warp drives, wormholes, and whether space is pure emptiness or a structure that can be bent and pierced. Without hyper-light travel, all notions of interstellar trade, warfare, and empire recede into the realm of fantasy, a tale of medieval or Renaissance politics played out among the stars between vast duchies and provinces.

I still have a few stories that are based on hard science. They are classed as science fiction only because they haven’t happened yet. One such was my novel about building a solar power plant in orbit, Sunflowers, which is so mundane that I classify it on my author’s website as general fiction. Another is the two-volume Coming of Age, which follows two people from this decade who get to live remarkably extended lives through cellular regeneration technologies which are being developed right now.

As a writer who tries to be neat and precise—as well as honest in my dealings with the reader—I try to keep these two genres separate in my mind: science fiction for what is proven to be technically possible; science fantasy for what is impossible, or only slightly plausible, but great fun. It’s not always easy. Homer, in creating the first saga of Western Civilization, struggled with this impulse, too. And he kept stepping over the line, dealing with the gods as living characters and having his characters journey into the afterlife.

I guess that only means fantasy exists all around us and is part of the human condition.

1. I’m using “literature” here in its proper sense, the art of telling stories through the printed word. This includes everyone from Geoffrey Chaucer and Miguel de Cervantes to Dashiell Hammett and Ernest Hemingway and everyone who makes a living, or tries to, by writing books, short stories, and various species of poetry. The word literature has gotten a bad reputation these days, coming to mean the sort of dry, dusty, and obscure books that one is forced to read in English class, cannot understand, and so despises. There is also a genre of its own, “literary fiction,” which tries to emulate those dusty old books by taking out all the dangerous, daft, fun stuff and inserting long passages of introspection where nothing much happens. That’s not the kind of literature I mean here.
       However, I have tried my hand at literary fiction—see The Judge’s Daughter and its sequel, The Professor’s Mistress. These are stories about people in a certain place and time in the past, not the future. Although the books have a certain amount of insight and introspection, I promise that a lot still happens and some of it’s fun.

2. Yes, of course, as Arthur C. Clarke noted, “Any sufficiently advanced technology is indistinguishable from magic.” But in these stories the characters themselves usually remain uncertain about the origins of their magical power and stand in awe of its effects.

3. Yes, and there’s one of Clarke’s other laws: “The only way of discovering the limits of the possible is to venture a little way past them into the impossible.”

4. I wrote two other complete novels, worked on a number of fragmentary manuscripts, and dreamed up even more discarded ideas before I finally published this book. Every author has to learn and practice the art before going public. When you see a “first novel” that’s really good and makes headway with the critics, you can bet the author has two or three unpublishable experiments that predate it.

Sunday, September 18, 2016

A World Without Borders

It sounds lovely. A world without borders—those imaginary lines which we also think of as barriers. A world where people can travel without the intrusions of government interfering with a person’s right to go where he or she wants and do whatever suits his or her needs—and for “intrusions” read pieces of paper, arranged and signed ahead of time, inspected by men with guns, and the inevitable waiting period, usually in a room with a door that locks. If we could only eliminate borders and barriers, then we could eliminate the paperwork, the men with guns, and the locks. Then people would be free to pursue their dreams.

But you would then also necessarily have a world in which no one would be able to hold onto a fixed address. It would be—or rapidly become—a world with no stable form of government, no organized rules of commerce, no property rights, no right of ownership to anything that can’t be carried in your pockets. It would be a world with no assurance that people and their families who have stayed in one place for generations would not become homeless by next week.

Why do I say all this? Because a world where anyone can go anywhere without restrictions, where a person who just walked across an imaginary line is just as good—that is, has just as many rights and just as much say in outcomes—as the people who have lived there for generations and invested their time, effort, and wealth into building up the infrastructures, trading patterns, and cultural values of the place, is a world without the natural and obvious distinctions between builders and drifters. In such a world, people will tend to form ad hoc, temporary associations to gain power and possessions for their group’s members at the expense of anyone else who gets in their way. Ultimately, the boldest and the bravest—usually those most willing to give up what they have always known in their search for something better—will prevail.

In short, you would return to the world of the homeless wanderers that existed before about 3,500 BC, before people began settling in the Indus, Tigris, Euphrates, and Nile river valleys, staking out fields for plowing and sowing, arranging ditches and gates for irrigation, learning to read and write, starting up a civilization, and learning to be comfortable under its rules and restrictions. You would return to the world where the biggest gang, the collection of the toughest and most aggressive individuals, gets to sit down wherever it wants, sleep in armed camps wherever its leader chooses, and eat whatever comes to hand.

A world without borders is a world without a functional civilization. It is a world without citizenship, of people who owe allegiance to no nationality or culture—except some vague and unresponsive “brotherhood of man,” or perhaps to a distant and squabbling, self-proclaimed organization like the United Nations or the European Union, which is full of good ideas for how everyone else should live but with very little practical experience on the ground.

I know that’s not the ideal. The vision of a world without borders was created in the 20th century after two horrendous wars that seemed to be virtually without borders themselves. First came the League of Nations in 1920, arising out of the Paris Peace Conference that ended the 1914-18 European war. And when that organization proved ineffective, the United Nations arose to prevent another conflict like the 1939-45 war, which engulfed pretty much the whole world. The idea was that if all the nations of the Earth could come together peacefully to discuss and resolve their differences, humankind could eliminate the need for war.

But the question was always one of sovereignty. For such an organization to be effective, to enable it to establish and enforce its mandates, the member states must give up some—if not all—of their separate rights and responsibilities, just as single human beings give up some of their personal rights and responsibilities in order to claim the protection of a state or nation. The proposition for creating such a worldwide government is that nations are analogous to—and not essentially different from—individual people, just on a larger scale. But is this true?

A person—at least one whom society considers to be in possession of his or her faculties—has a single identity and the ability to form fixed intentions and follow through on them. By the time they are adults, people as individuals can have fully formed ideas that they are unlikely to change as they age further. A person can decide to be—and remain, for the rest of his or her natural life—a good citizen, a reliable father or mother, a hard-working employee, a steady church goer, or a loyal party member. Circumstances may change, calling for the individual to try out new ideas and test new values. But the person usually remains true to one set of ideas, values, and commitments for most of his or her working life. Exceptions exist, of course, but they do not disprove the general nature of human psychology.

Nations are not necessarily like this. Neither is any large group of people who have come together over time. Putting aside issues of “national character,” which are usually impressions and stereotypes about a culture gained by outsiders—Italians are excitable, Germans are sober and dour, Frenchmen are passionate—the nature of any group is fluid. People as individuals may have relatively fixed ideas and values, but as groups they tend to discuss, disagree, and influence one another, and they can form only a slippery and temporary consensus. That consensus may represent a government in power, making laws and creating institutions that have the appearance of an individual’s fixed commitments. But governments fall from power and their commitments change over time—relatively slowly and peaceably in the representative democracies, suddenly and harshly in oligarchies and dictatorships.

A single person might commit to join a nation and live under its laws. But a nation—the collective and changeable will of a large group of individuals—cannot make such promises.

The notion that, over time, a world government will emerge and the individual states which currently meet and debate in the United Nations or European Union will wither away from useless redundancy is fanciful. The idea is just as fanciful as the notion in economic Marxism that, once communism has been firmly established and people are peaceably trading their personal labor for goods and services, the revolutionary state which established this condition will simply wither away from having nothing else to do. But that is not human nature. History doesn’t come to an end, and people don’t give up their personal interests and political advantages, just because they can’t think of what to do next.

Perhaps, with our current world’s continuing developments in technology, with global and instant communications, with a fairer distribution of natural resources and the fruits of education and science, and with institutions and infrastructures which will equitably provide goods and services to all the peoples of the Earth … perhaps then we will see national distinctions fall away and people on every continent become citizens of the world. It’s a nice idea, but I doubt it will happen. In just one area—global communications, represented both by networked broadcast services, which send common ideas and values out to the mass of people, and by networked social media, which allow people to exchange and discuss their reactions to those ideas among themselves—the proposed unity has not developed. Instead, social media have allowed new groups of like-minded people to form. These groups may no longer be bound by geography and personal acquaintance, but they still coalesce around shared real-world experiences. It’s a nice idea that people will think globally, but they will continue to act locally and in relation to what they know, think, and believe.1

I can imagine only two conditions that would support a global association of humanity under a single world government. The first would be when our descendants form colonies and associations on other planets in the Sol system and out among the stars. This is well described in James S. A. Corey’s Expanse series. When Luna, Mars, and the Belt all have their own governments, shared cultures, developing languages, and genetic drift, Earth will, in response, come together as a single political entity. The second condition would be our discovery that human beings are not alone in the universe. This is a standard theme in science fiction: when the aliens come down from the stars, either as peaceful traders and teachers or as ravaging conquerors and usurpers, the distinctions among human-type people will fall away and we will, in response, become one culture, one civilization.

In the meantime, who benefits from promoting a “world without borders”? I mean, apart from the naïve idealists who live with their heads in the 23rd century?2

The first kind would be the people who have their own borders well secured, thank you very much, but would like a stake in the land behind yours. Back during the Cold War, when the Soviet Union and the People’s Republic of China both imagined that their chosen system of government had the destiny of conquering the world, various sympathizers and fellow travelers promoted the ideal of a world without borders. They looked forward to world peace at the price of submission to an ideology that came from beyond the borders of Western civilization.

The second kind would be people who despise the civilization they see around them—perhaps because they have not been successful in it, perhaps because they want to shortcut the political process—and want to see it gobbled up by a wave of foreign invaders. They imagine these invaders will provide the instability and the political liquidity to dislodge the power structure which they despise but their fellow citizens are too dull and stupid to throw off themselves. They want a revolution but lack the guns, the organization, and the numbers to bring one about. So they believe that encouraging immigration en masse will create better conditions for their purposes.

Neither of these motives is one that I admire or subscribe to. At heart, I am something of a libertarian, believing people should be allowed to go where they want and do what they need. But I also know that human beings, like all the great apes, are social beings. We need and want to find our own place, among people with whom we can find agreement and common cause. We want to build something that we can preserve, protect, and pass along to our children. And we value those children as the product of our own genes and extension of our own lives, rather than as vaguely deserving “citizens of the world.”

Maybe one day—with enough enlightenment, technology, and freely available goods and services provided through unlimited energy resources and automation—we can walk across those invisible lines and settle anywhere we find that’s pleasant and accommodating. But that day has not arrived. And it may not until the 23rd century.

1. In this, I am reminded of two more examples. The first is the drift of languages, as described in John McWhorter’s The Power of Babel: A Natural History of Language. Over time, human languages tend to develop differences among speaker groups. A powerful political influence, like the Roman and British empires, can temporarily bring people together to speak a common language. But when that influence falters or wanes, as did both Rome and Great Britain, people will go back to creating their own local dialects, word usages, and shared colloquial meanings. McWhorter points out that Italian, French, and Spanish are nothing but Latin that has been left alone in different and relatively isolated places for speakers to develop their own idiom. And each of the languages that we think of as a unified whole—again, Italian, French, and Spanish—is actually a collection of local dialects, like langue d’oc and langue d’oïl in medieval France, which over time will develop into separate languages of their own.
       In similar fashion, large human associations with developed trade routes and easy movement among disparate cultures, like the Mongol Empire, can mix up the gene pool and over time create a heterogeneous population. See, for example, the wide spread of Genghis Khan’s Y chromosome. But let that empire fall and the associations wither, and people will go back to socializing with their near neighbors and second cousins. And then individual variations, like red-haired Scots and blond Scandinavians, will emerge from the mix.

2. A place I’ve been known to go and visit from time to time.

Sunday, September 11, 2016

Cruel Fate

It has long been observed that evolution is no respecter of individuals. The process has given us quadrupeds both as graceful as horses and as sturdy as elephants; invented flight numerous times with species as different as dragonflies, hawks, and bats; brought lungfish to walk on the land and whales and dolphins to swim in the ocean. Evolution in one way or another has created everything you can see on this planet, including the contents and colors of the sea and the sky. Evolution is the engine of creation, but it cares not a bit for the individual of any species. It works on the flow of genes but it lacks direction—except toward what works and survives. And along the way, that flow will discard a hundred, a thousand, a million failed attempts—all of them individual beings with otherwise developed potential.

This is a hard thing for most people to understand. We instinctively want a creator which—or, in most minds, Who—cares about us. Not just the human species as a curious kind of experiment along the way from monkeys to post-apocalyptic apes, but humans as some peak of attainment. We want to see our kind as rising toward a level of genius, awareness, independence of thought, and freedom of action that was prophesied back when the first microbial cell divided and differentiated in the primordial, comet-fed seas. This yearning for attention is part of our mammalian and human heritage, based on our being born as helpless, half-formed embryos with enlarged primate skulls too wide to gestate fully in the womb and then pass through the narrow primate birth canal. We are totally dependent in our earliest years on loving parents to feed, protect, and teach us. And when mother and father themselves prove to be all too human and fallible, we look to the sky for a loving bringer of order and to the earth for a nurturing presence.1

Moreover, we look for a creator that had us, ourselves, our own personhood, in mind when we were born. With our random gift of self-awareness, we humans each believe that we, as individuals, as the person following the particular life course we’ve chosen, with the dreams of childhood behind us and the ambitions of adulthood still driving us forward, have a unique place in the creator’s purpose. We want to be loved. We want a force stronger than ourselves to tell us that we will win the race, achieve our goals, obtain the love and respect of our family and peers, that we matter in this life.

The notion of that creator as the engine of evolution, which randomly hands out helpful or harmful genetic mutations before we are even born, which often dumps us into environments that may be both physically and—for us now, with our bigger brains and calculating self-awareness—psychologically productive and sustainable, or not, and which dooms a large fraction of our peers to random accidents, diseases, and death … we find that notion hateful. The universe is not supposed to work this way. Our mothers assured us we would be safe. Our fathers fought to make us safe.

The truth is even worse than that. Life itself and the history in which we place such intellectual store are both crapshoots.

Think of our great philosophers and teachers—Pythagoras, Aristotle, Buddha, Christ. Think of our brightest minds—Newton, Goethe, Einstein. Think of our history-changing leaders—Moses, Caesar, Genghis Khan, Napoleon. All of them are accidents in the sense that they happened to be born with the right kind of intellectual or emotional capacity, placed in the right familial and societal environments, either encouraged or simply allowed to develop and exercise those personal gifts, and survived long enough to begin applying them successfully.

Certainly, in an army as large as Napoleon’s—more than half a million men in the Grande Armée at its peak—there must have been four or five other men with the native charisma to inspire those around them, the organizational capacity and memory to know the quality and current status of hundreds of fighting units both during the march and on the battlefield, and the imagination and vision to conjure up campaigns for them to pursue and strategies that would enable them to win their battles. But those four or five other men, who will never be named, were either born into humble homes and never attained an officer’s rank and training, or they died of wounds—or more likely dysentery—in their first or second campaign. History has given us a Napoleon and a Wellington. There might well have been other and better men. We will never know.

Something of this kind actually happened in the late 17th century, when the English physicist Sir Isaac Newton and the German mathematician Gottfried Leibniz independently came up with the mathematical methods that became our modern system of calculus.2 They began working on their ideas at different times, and although each man left manuscripts or published papers at different dates, there is no clear indication that one copied his work from the other.

Interestingly, a third mathematical system was created much earlier but only survived in a Byzantine manuscript that copied out the works of Archimedes of Syracuse—a parchment that was later scraped down and overwritten with Christian theological texts. Once scholars recovered the faint writing underneath, the manuscript showed that the ancient Greek had come up with a “method” for solving physics problems for which we now would use calculus. Who knows what advances human engineering and technology might have achieved far ahead of their time if this method of calculation had spread and been used so much earlier in history. The fact that the Archimedes method was written down and survived only once, and then not rediscovered until the early years of the 20th century, is simply a matter of cruel fate.

If human intellectual and creative development is a crapshoot, so is the history in which it operates and which we take to be somehow foreordained and immutable.

We like to believe that the great turning points, the decisive battles upon which the course of history swung like a bank-vault door on a jeweled point, were destined to come out that way. Think of Hastings and the success of the Norman invasion of England, which brought French language and manners to England, and involved the English royal family in French affairs for half a millennium. Think of Waterloo and the success of the English and German armies in stopping a resurgent Napoleon from the reconquest of Europe, and subsequently establishing a hundred years of relative peace on the continent. Think of Gettysburg and the success of the Army of the Potomac in stopping a Confederate march on Washington, DC, from the north, which might have forced an end to the American Civil War favorable to the South and its secession.

From my experience of fighting some of these battles in both tabletop miniature and board games (see War by Other Means …)—the different days of Gettysburg at least four times, and Waterloo and other Napoleonic battles at least once—I can attest that the course of history was not so obvious. In the hands of different generals, and with even slightly different tactical approaches, these battles could have gone the other way. There are no sure things in history.

As a science fiction writer, I wrestle with these ideas. What does evolution look like on other planets? And how might our own planet have developed differently? What great minds might a cruel fate have subtracted from the human past—or added to it—to change the path of our intellectual, emotional, and political development? What turning points, which we see so clearly in our telling of history, might never have occurred, or resulted differently, to redraw the political map of the world?

It’s a fascinating imaginative playground—and one that I explored briefly in the novel The Children of Possibility and am now re-entering with its sequel, due out sometime next year, tentatively titled The House at the Crossroads. The only difficulty, for a writer, is that if one concatenates too many changes onto history, the human experience tends to become unrecognizable for the reader.

But like evolution and fate, a writer’s imagination can sometimes be cruel.

1. As I’ve noted before, the human conception of creation and deity would be very different if our species had arisen from the line of, say, sea turtles instead of the great apes. Hatched in the dry sand, with their first act destined to be a crazed dash toward the surf and the light of the full moon, being picked off twenty or a hundred to one by the waiting seabirds and then by the snapping fish in shallow waters, the surviving turtles would have a much darker notion of the creator’s purpose.

2. Calculus, for those who are as innumerate as I am, is a method for studying changes in a system, such as the area bounded by a continuous, smooth curve, or the effects of changing rates of acceleration on motion. The word is from the Latin for a small pebble used for counting.
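       By way of a toy illustration—with numbers chosen purely for convenience, not drawn from any of the works mentioned above—suppose a car’s position in meters after t seconds is given by the first expression below. In the notation of calculus,

              s(t) = 5t^2, \qquad v(t) = \frac{ds}{dt} = 10t, \qquad \int_0^3 10t \, dt = \bigl[\, 5t^2 \,\bigr]_0^3 = 45,

       where the derivative v(t) is the car’s speed at every instant, and the integral—the area under that speed curve—recovers the 45 meters covered in the first three seconds. Differentiating and integrating, in miniature, are the two operations that Newton, Leibniz, and Archimedes were each working out.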

Sunday, September 4, 2016

My GO! Button

It seems as if all my life I’ve been pushing on a button in my brain—perhaps linked to the startle response, perhaps to the adrenal glands and their release of adrenaline—which sets me up to do the things I must. From twelve years in school, and then through college, followed by forty more years in the business world, I have been responding to the needs of the outside world, to the commitments I’ve made to meet them, and to my own demands upon myself.

The alarm rings at four o’clock in the morning—push the GO! button to rouse myself, get out of bed, stumble through my get-ready routine, and sit at the typewriter for an hour or so to work on the novel I am trying to write before leaving for school or work.

When the clock edges up on seven—stop what I’m doing, push the GO! button again, and prepare for the morning commute. For the last ten years of my working life that meant putting on my riding gear, wiping off and wheeling out the motorcycle, and driving through traffic on the two worst commute-hour corridors in the country: westbound I-80 toward the Bay Bridge, then southbound I-880 through Oakland, San Leandro, and Hayward, with the San Mateo Bridge toll plaza jam-up and then its seven-mile, arrow-straight slog still ahead of me.

When I arrive at work, pour my first cup of coffee, sit down to log in at the computer, and while the machine is churning—push the GO! button to deal with an unknown number of voicemail messages behind the phone’s blinking light. These will have collected overnight because this is a global company with people calling or returning my voice messages from the East Coast, England, and Singapore. Ten minutes later, with every voicemail either answered or logged for subsequent action, push the GO! button again to enter the slipstream of overnight emails and deal with every new alert, request, and problem each one brings. Forty minutes later, I can take my second sip of cold coffee and begin the planned part of my day.

As the hour of each scheduled meeting or interview appointment approaches—push the GO! button to prepare my mind, psychologically and emotionally, for the meeting agenda, for the new information and directions the session will likely bring, and the pitfalls it will probably hide, or—equally stressful—for the questions I must ask my interview subject, the amount of blind probing I must do, and the unique personality I must deal with in order to get information for the next article I must write.

If the meeting is the quarterly internal business review with all employees, push that GO! button dozens of times in the weeks beforehand as I prepare slides for the various speakers, make arrangements for the meeting space and video connections, send out companywide announcements and reminders, and remember to order an assortment of refreshments—tempting but not too rich and costly—for the estimated number of attendees. And then one big push as the hour of the meeting approaches and the hall starts to fill.

If the interview subject is a company officer, press the GO! button a couple of extra times to deal with schedule changes, session interruptions, and the Shadow Kabuki–like play of his or her political and personal sensitivities. Even if the officer is known to me from past associations, and even if our past discussions have been cordial and even friendly, the subject itself will be new, and a whole kaleidoscope of novel implications will overlie the results from our previous dealings.

Then, when my schedule opens up and it’s time to write the next article, or prepare the next set of speaker slides, or pull together the next issue of the newsletter or the next refresh of the internal website—press the GO! button to steel my mind for diving into this set of details, driving toward this overarching message, and bending the arc so this story finds a strong, logical, and credible resolution in the reader’s or viewer’s mind.

Finally, when the five o’clock hour, or six o’clock, or sometimes seven or eight, comes around—press the GO! button once more to prepare myself, physically and emotionally, to swing my leg over the motorcycle again and face the reverse commute over that bridge and through those commute corridors from hell. Riding a motorcycle is usually exhilarating, and doubly so when I’m headed home and know there’s no scheduled arrival time for which I must push the travel envelope. Motorcycles in California are automatically allowed in the carpool lanes, and if traffic in all lanes grinds to a stop, I can still split them to get through the jam—although that involves its own repeated pushes on the GO! button: look ahead, figure the available width, divide it for the size of my bike and clearances, watch out for that car wobbling in its lane, keep an eye on that semi crowding the line, and so on for mile after mile of jangling alerts.

If it’s raining that day, and I’ve chosen to drive the car rather than wrestle with my rain gear and deal with the stresses of wet tires on grooved pavement, the commute adds the dimension of sitting in stalled traffic, where I’m safe, dry, warm, and have the radio or a CD to listen to, but also trapped, staring at the bumper of the car ahead of me, counting the minutes as the flow creeps forward, brake lights winking, making excuses in my head for the meeting I’m going to miss on the work-bound commute, or the apologies I’ll have to make on the home-bound route.

Twenty times a day, a hundred times a week, for year after year, my brain has taken that shot of psychic energy and adrenaline.

It wears you down.

Now that I’m retired and working on the sort of writing I used to do at four o’clock in the morning, I have to push the GO! button a lot less often. I might have a doctor or dentist appointment to go to during the day, or a lunch with friends for which I don’t want to be late. Some Saturdays I might have a war game scheduled (see War by Other Means …), and the house of the gamer who’s hosting the event might be as far away as my old commute, but the traffic on Saturdays is usually light and the motorcycle ride is fun rather than nerve-racking. I still get emails every day, but they are usually chats from friends or commercial messages that I can safely ignore. I still get occasional unsolicited phone calls and voice messages, but they are easily screened.

Curiously, one of the stressors I once experienced at work, plunging into the details of the next article or speech that would have to be completed on deadline and then sent for review with both the subject matter expert and other approvers—which usually entailed its own set of stressors returning through the voicemail or email stream—does not carry over into my fiction writing. Although I try to maintain a schedule with my writing, working to an outline, in order to bring out a new novel every year or so, the pace is at my discretion. And, unlike an article where the objective, the points to cover, and the details not to be missed are directed by somebody else, my own writing is under my control and that of my subconscious mind (see Working With the Subconscious from September 20, 2012). When I sit down to write fiction, it is because the story has been percolating through my brain, the pieces have started coming together, I’ve just thought of the opening line of dialogue, or incident, or sensory image to start the scene—and the keyboard draws me to it like an old friend. I don’t have to push any internal buttons because my mind is already flowing in that direction, eager to get these new ideas down in specific words, images, and plot structures, creating an experience that will feel real and concrete in the reader’s mind, where before there was only a blank page.

After all those years in school and then in the working world, I can wake up when the birds start singing and the dawn light shows in my bedroom window. I move through my morning routine out of unforced habit, taking a few extra minutes here and there if I want. I do my karate exercises (see Isshinryu Karate) before breakfast because the workout makes me feel better and lighter during the rest of the day. I read the newspaper with as much attention as I want while I eat, because I’m interested and not because it’s an assignment. Then I turn on the computer, pour my coffee, and see if my subconscious has sent me more of the novel to salt away as finished scenes and chapters. And if not, I can go sit in my chair and read a book. Or I can get on my motorcycle in the middle of the day and ride out across the countryside, picking my own route, enjoying the sun and wind, and not minding a schedule.

This is good because, after all those years of pushing, pushing, pushing, my GO! button is broken. My life is in my own hands at last.