Sunday, February 23, 2020

Ancient Computers

Perpetual calendar

About twenty years ago, I used to keep on my desk—partly as ornament, partly paperweight, and somewhat as a useful device when I was writing fiction about the near future—a perpetual calendar like the one pictured here. It was a simple device: You align the month on the inner dial with the intended year on the outer dial, then read off the dates for the days of the week in the window at bottom. It took a minute to set, being mindful of leap years, and gave accurate readings over a span of fifty years.

This was a form of computer with no electronics and only one moving part. It was the sort of thing we all once used to find specific information, back before we could carry a deck-of-cards-sized computer in our pockets—and now wear a matchbox-sized one on our wrists.
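The whole function of that one-moving-part computer can be captured in a few lines today. Here is a minimal sketch using Python's standard library; the weekday names and the sample date are my own illustration:

```python
# A perpetual calendar dial encodes the day-of-week function.
# This sketch does with the standard library what the dial does
# mechanically, leap years included.
import calendar

def weekday_name(year: int, month: int, day: int) -> str:
    """Return the weekday name for a given date."""
    names = ["Monday", "Tuesday", "Wednesday", "Thursday",
             "Friday", "Saturday", "Sunday"]
    return names[calendar.weekday(year, month, day)]

print(weekday_name(2020, 2, 23))  # the date of this post: Sunday
```

The dial performs the same lookup by aligning the month ring against the year ring, with the leap-year correction built into the markings.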

I am of two minds about this, because I have always loved small devices with screens and keyboards. That love goes back to the first toy that my father made for me when I was about three years old. It was a box with a series of toggle switches and a line of small lights with red, green, yellow, and blue lenses. When you threw the switches, the lights would come on in different orders. It did nothing useful, except fascinate a small child. But it fixed my mind in a pattern that endures to this day.

Ever since the dawn of the Microprocessor Age, I have been chasing the ultimate handheld computer. It started with the first “personal digital assistants,” or PDAs, usually made in Japan and with crippled keyboards that required you to hold down three not-all-that-obvious keys to get a capital letter or a punctuation mark. Being a book editor and a stickler for form, I laboriously worked to get the right spelling and punctuation in my entries, so using the thing productively took forever. I then adopted, in rapid succession, a Palm Pilot—where you spelled everything out with a stylus or your fingertip—and then a variety of Hewlett-Packard calculators and tiny computers, chasing that holy grail.

My first cell phone had the traditional arrangement of rotary-dial numbers as a limited form of keyboard. That is, the digits 2 to 9 were each accompanied by three letters in sequence from the alphabet.1 You could store people’s names and numbers in the phone’s memory by “typing” them in using the keypad: Press 2 once for A, twice for B, three times for C, and wait a bit for the phone to sort out the right code and show your desired letter. And, of course, there were no lower-case letters or punctuation. It was really easier to keep your phone list separately, in a booklet or on a piece of paper, except that then your friends wouldn’t be on speed dial. But I digress …
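That multi-tap scheme is simple enough to sketch in code. A toy version follows, with the key groupings taken from the footnote; the function name and output format are my own:

```python
# Multi-tap text entry on a phone keypad: press a key repeatedly to
# cycle through its letters. Note the footnote's groupings: 7 carries
# PQRS and 9 carries WXYZ. No lower case, no punctuation.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}

def taps_for(word: str) -> list[tuple[str, int]]:
    """Return (key, presses) pairs needed to spell a word."""
    out = []
    for ch in word.upper():
        for key, letters in KEYPAD.items():
            if ch in letters:
                out.append((key, letters.index(ch) + 1))
                break
    return out

print(taps_for("CAB"))  # [('2', 3), ('2', 1), ('2', 2)]
```

Six presses, with pauses, just to spell "CAB"; the tedium of entering a whole phone list this way is easy to appreciate.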

Before we had computers at our fingertips, we had all sorts of handy ways to work out useful information.

The oldest is probably the Antikythera mechanism, a device of brass with geared wheels, now encrusted with corrosion and coral, discovered in a shipwreck off the Greek island of Antikythera in 1901. It has since been dated to roughly the second century BC, and x-rays of the gears and a reconstruction of their turning suggest that the mechanism was used to calculate astronomical positions and possibly to predict eclipses. Its latter-day counterpart is the mechanical orrery, which dates to the early eighteenth century and shows the positions of the sun and planets at any particular point in their continuously revolving orbits.

But mechanical representations of physical conditions are not the only form of ancient computer.

When I was compiling engineering resumes at the construction company, I came across a man whose work responsibilities included compiling “nomographs.” At first, I thought this was a typo and that he must actually be writing monographs—a literary pursuit, but an odd one for an engineer. Further checking revealed that, no, he really did make nomographs, also called nomograms. These are two-dimensional diagrams representing a range of variables associated with a mathematical function, usually shown as number sets along two or three parallel lines. Rather than solve the function mathematically, all an inquiring engineer had to do was lay a straightedge across the known values on two of the scales and read off the answer where it crossed the third.
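The geometry behind a parallel-scale nomogram is easy to simulate. Here is a toy example for the textbook case w = u + v, assuming three equally spaced vertical scales; the scale positions and labels are my own construction, not taken from any particular engineering chart:

```python
# A toy parallel-scale nomogram for w = u + v: three parallel vertical
# scales; a straight line through the known values on the outer scales
# crosses the answer on the middle one.
def nomogram_sum(u: float, v: float) -> float:
    """Outer scales sit at x=0 and x=2, marked y = value. The straight
    line through (0, u) and (2, v) crosses the middle scale (x=1) at
    y = (u + v) / 2, so the middle scale is labeled w = 2*y."""
    y_mid = (u + v) / 2   # height where the straightedge crosses x=1
    return 2 * y_mid      # the middle scale's labeling reads off the sum

print(nomogram_sum(3.0, 5.0))  # 8.0
```

Real nomograms used the same trick with cleverly spaced and labeled scales to solve far messier functions, trading one line of arithmetic for one stroke of a straightedge.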

As a form of computer, the nomograph is just a little more complex than a table of common logarithms2 or a telephone book—closer to a database than a calculation. And if we’re going to call a computer any device that gives you accurate astronomical readings, then a sailor’s sextant for “shooting the sun” at noon to determine latitude—and before that the astrolabe for calculating the angle between the horizon and the North Star—are also in the running as early “computers.”

But the point of this meditation is not to show how clever ancient peoples were but how much we are losing in the digital age. Orreries and sextants are now mechanical curiosities and decorative artifacts—the one lost to telescopes and observational satellites tied into much more sophisticated computer modeling, the other lost to satellite-based global positioning systems (GPS). Nobody draws nomographs anymore. The phone company doesn’t even publish the Yellow Pages anymore, or not on paper. Everything is online. And I can answer texts from friends by drawing letters with my fingertip—in both upper and lower case, with punctuation—on the crystal face of my Apple Watch, which doesn’t even need a keyboard.

The knowledge of the entire world along with real-time information, like GPS positioning and footstep counting with conversion to calories, is in our pockets and on our wrists. And that’s a wonderful thing.

But when the batteries die—or when future archeologists dig my Apple Watch out of a shipwreck, corroded with salt and perfectly nonfunctional—we will be left with lumps of silicon and dozens of questions. Who will draw the nomographs then?

1. Except for the 7, which picked up P, Q, R, and S, and the 9, which had W, X, Y, and Z. Presumably Q, X, and Z weren’t expected to get much use.

2. And a little less complex than a slide rule.

Sunday, February 16, 2020

Flying Cars

Taylor Aerocar

The Taylor Aerocar from the 1950s

So it’s now 2020 and the refrain I hear from all sides—including once on these pages—is, “Where’s my flying car?” We were promised in the tabloids and the Sunday supplements that our cars would fly by now. So where are they?

But let’s think about this a bit. First, what do you mean by “car”? Second, what do you mean by “fly”?

If a car is a vehicle that takes a driver and a number of passengers and their personal luggage on a flying trip of several hundred miles, then we had such a vehicle in my childhood. The Taylor Aerocar (pictured nearby) was available in 1954. Perhaps this vehicle fell short of the “flying car” definition because it couldn’t just take off from the street. The owner towed the two wings and tail section with its pusher propeller behind the vehicle on the road and then assembled the flight and control surfaces after arriving at the airport.

For a complete flying vehicle, we’ve long had small airplanes like the Cessna 172 Skyhawk, which can carry four people and their baggage about 736 miles at a top speed of 143 miles per hour. That’s a convenient flying distance and time, but the plane has to start and stop the trip at an airport or prepared landing strip.1 Also, the pilot needs a course of special instruction, must be licensed, and has to file a flight plan before each trip.

Both vehicles can actually fly, but neither can park in your driveway, roll into the street, and take off from there without FAA authorization and clearance.

As to the question of flight, a hovercraft would technically qualify as “flying,” although it seldom gets more than a few inches to a foot off the ground surface. So while you can travel with this vehicle across country and even over smooth water, the dream of taking your flying car up into the air, well above traffic, and over the rooftops cannot be satisfied with a hovercraft.

No, when we think of a “flying car,” we mean the sort of compact, wingless vehicle we all saw in movies like Blade Runner or The Fifth Element. There the cars occasionally might touch down and roll along the ground, but they mostly lift into the air and fly over and between buildings. We want our everyday parkable sedan but, you know … flying.

I have seen several designs and claimed pre-production models of such vehicles. Most use some assortment of ducted fans to generate lift and then, once aloft, forward motion. The ubiquitous aerial drone2 is a model for this sort of propulsion, using computers to control its four to six rotors and maintain stability and direction. Having a computer keep a flying car in level flight would go a long way toward removing one barrier to this concept: the requirement that the driver hold the vehicle in level flight and control it through all maneuvers and all conditions of wind and turbulence, the way the pilot of a fixed-wing aircraft must constantly monitor the flight envelope. An extension of this computer control would let the proposed flying car maintain altitude separation, avoid collisions, and make protected takeoffs and soft, on-target landings. Indeed, the pilot/driver would only have to pick a destination and route, then sit back and become an interested observer of the passing countryside.

But the block to these cars becoming practical has more to do with energy than aerodynamics. It takes more energy to lift a body and maintain it aloft with a directed airstream like a ducted fan than to propel it forward through the air using an airfoil or wing to provide the passive lift. Even a helicopter provides its lift with an airfoil: those large rotor blades, which are so unwieldy in a parking lot. But small-diameter fans of the sort lifting any flying car we can envision will provide much less lift and so require more power.
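That intuition can be put into rough numbers with ideal momentum (actuator-disk) theory, in which the power to hover grows as thrust to the three-halves power and shrinks with the square root of rotor area. The vehicle mass and rotor sizes below are assumptions chosen for illustration only:

```python
# Why small ducted fans are power-hungry: ideal momentum theory gives
# induced hover power P = sqrt(T^3 / (2 * rho * A)) for thrust T and
# total disk area A. All figures here are illustrative assumptions.
import math

RHO = 1.225            # sea-level air density, kg/m^3
MASS = 900.0           # assumed flying-car mass, kg
THRUST = MASS * 9.81   # newtons of thrust needed just to hover

def hover_power_kw(disk_area_m2: float) -> float:
    """Ideal induced power to hover, in kilowatts (no duct or drive losses)."""
    return math.sqrt(THRUST**3 / (2 * RHO * disk_area_m2)) / 1000.0

helicopter_rotor = math.pi * 5.0**2      # one 10-meter-diameter rotor
four_small_fans = 4 * math.pi * 0.5**2   # four 1-meter-diameter fans

print(round(hover_power_kw(helicopter_rotor)))  # → 60 (kW, ideal)
print(round(hover_power_kw(four_small_fans)))   # → 299 (kW, ideal)
```

For the same total thrust, the four small fans here need five times the ideal power of the single large rotor, before counting duct, motor, and transmission losses. The airfoil's passive lift in forward flight is cheaper still, which is why the winged Aerocar could get by on one modest engine.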

Right now, four small but powerful gas engines driving the fans would consume about four times the fuel of an old-style Aerocar. And they would weigh more than the car’s performance parameters would probably allow.3 Electric motors would be far more efficient and infinitely more controllable, as well as quieter, but the battery weight and performance would again be outside the vehicle’s desired parameters. A flying car powered by internal combustion or electricity might have enough fuel capacity or battery charge to take you to the grocery store and back at low altitude, but it wouldn’t be much more than a rich man’s toy, like the very first automobiles: more trouble than they’re worth and bought only for the excitement and display value. Such a car would not be able to take you to the mountains or to Las Vegas and back for the weekend.

This is not to say that flying cars are impossible dreams, or that their development and practical use is more than a century in the future, if they are possible at all. But like so many other technologies we see in the movies, they wait upon developments in basic physics and in energy production, storage, and release that are still years away—even though some of our best academic minds, backyard inventors, and dreamers are working on the problem all the time.

In the meantime, as the link above shows, somebody’s got a working Aerocar for sale. All you need is a trailer.

1. And we won’t get into the questions of initial cost, regular service and maintenance, special storage requirements, and inspection intervals—all of which are much more intensive than for a regular family vehicle that would qualify as a “car.”

2. The lightweight kind with four small propellers and a camera on board, not an unmanned aerial vehicle (UAV) like the MQ-9 Reaper, which observes our enemies remotely and then rains missiles down on their heads.

3. Not to mention quadrupling service costs and time.

Sunday, January 26, 2020

Living With One Hand

Hand in cast

About a month ago, on Christmas Eve to be exact, I stumbled and fell on the sidewalk, landing on the side of my face and across the back of my left hand. Among other bangs and bruises, which have since healed, I broke three bones in that hand and wrist. Between the splint put on in the emergency room and the cast fashioned by the hand specialist—luckily, there was no need for corrective surgery—I now have limited use of those four fingers and the barest pinch between thumb and forefinger. Luckily also, I am right-handed. According to the hand doctor, the bones are not shifting and they are knitting well. But still, I may be another month or so in the cast and then a removable splint.

I try to treat everything as a learning experience. So what have I learned from this, other than to be more careful and watch my step? Well, for one thing, I have more understanding for those who, by accident or illness, have lost the use of their fingers or their whole hand. Here are the limitations—temporary limitations, I remind myself—so far encountered.

Typing: I am by training—self-taught from a typewriter manual one winter break during high school—and by nature a two-hand, ten-finger touch typist. No hunt-and-peck for me. So much so that being able to sit down and share my brain through my fingers is now part of my writing process.1 I can still type by spider-walking my right hand across the keyboard, using my left thumb or forefinger to hold down the occasional shift or control key. The result is barely faster than keying into the search function on your DVD player using the remote’s arrow pad. And I still hit the wrong key, or a combination of right and wrong keys, ten percent of the time. So writing is hard.2

Pockets: For fifty-odd years the distribution of daily necessities in my pants pockets has been: knife, comb (when I carried one), and handkerchief in right front; keys, coins, and sundry tokens in left front; wallet in left rear; nothing, or a washcloth for use as an emergency towel, in right rear. With my left hand encumbered by the cast, this time-honored practice goes out the window, and everything, stripped down to minimum, goes on the right side. I’m still fumbling around with this.

Child-proof caps: These devices of the devil, approved for medications taken by sick and feeble people, are impossible to work one-handed, even with the support of a few incompetent and weak fingers and a thumb on the other hand. Once I finally get a cap off, I leave it sitting lopsided on top of the bottle until I need to take another pill.

Zip-lock bags: They’re ingenious—until you realize they’re designed for two working hands with full grip strength. No combination of two fingers on one side, two on the other, can break the seal. The only way I can get them open is to pin one side against the countertop with my left thumb and grip and pull up the other side with my right hand. The contents only occasionally spill out all over the counter.

Buttons: Another ingenious device that really takes two hands. I can use my left thumb working against my cast to hold the buttonhole steady, if not partway open, then use all the fingers and thumb of my right hand to work the bottom edge of the button through the hole and try to anchor the top edge before the bottom slips out again. I can work the big buttons on a sweater, but I give up on all the tiny buttons on a dress shirt. So I wear mostly polo shirts these days. And sleeve buttons, right or left—forget it!

Jackets: Any garment, really, that does not have an expandable sleeve opening, like a winter coat with a knit cuff, blocks passage and traps the bulky cast. I’m too cold to wear my coat draped over the left shoulder, like a hussar’s pelisse, and too frugal to cut open the seams of the left sleeve on a good coat for a temporary situation like this.

Knots: Unless I can use my teeth, knots are impossible. Shoelaces are hopeless. So I wear mostly slip-on sandals. And socks are hopeless, too. I can do a sloppy granny knot with something as big and thick as a bathrobe belt, but it doesn’t hold for long.

Zippers: I can work a zipper if it’s anchored at one end, like the fly in men’s trousers. But the open-ended version, like on a jacket, is impossible. I just can’t get the pin on one side inserted into the box on the other, manage to hold them both steady, and still pull up on the slider tab. So my jacket flies loose in the wind.

Manual transmission: Yeah, I’m driving a stick again, now that I’m not commuting 35 miles a day each way through heavy traffic. I traveled by Uber for the first couple of days after the fall, when my whole arm was in a splint. My cast now lets me take a light grip on the steering wheel while shifting; so I can still drive—but carefully. And sharp turns require a big over-and-under maneuver with my right hand; so I must plan ahead to avoid shifting while turning. The clutch lever on the motorcycle is hopeless, however, and I have no strength to hold onto the left grip, much less steer; so the bike is on a battery tender for a couple of months. Anyway, I can’t get my left arm into my leather riding jacket.

Personal grooming: Washing my face one-handed feels weird and incomplete. I can shower using a big rubber-and-vinyl glove that slips on over the cast, but then the cleaning process is mostly feeling around one-handed with the soap or washcloth, and my right side gets much less thorough attention than my left. Shaving and toothbrushing are no problem with modern electronic devices, but putting deodorant on my right armpit using only my right hand is an exercise in gymnastics. Luckily, I’m still flexible enough.

Heavy lifting: The hand specialist warned against this. Any use of my left thumb or fingers other than to steady a load in my right hand is perilous. Anything that tugs or pulls on those fingers—like holding the dog’s leash while I lock or unlock the door—risks moving the bones that are trying to knit, giving rise to an ominous ache. I still use the left hand in a limited fashion, but carefully and with occasional pain.

Living with one hand and a few weak fingers, everything is harder, it takes longer, and if I put any pressure on the broken bones or the wrist, it hurts. So life goes on—but at a much slower pace.3

1. I’ve tried the Dragon Naturally Speaking dictation software through a number of its upgrades. Ninety-five percent accurate is still one goofball error every twenty words, and the voice-correction protocol—required so that the software can learn from its mistakes—is painfully slow and irritating. Also, my mind just does not compose through the spoken word.

2. I once bought an ingenious little device called a Frogpad, a one-handed keyboard, but never learned to use it comfortably. The company now seems to be out of business. Pity—I could use it this time.

3. Given how difficult it is to type, and that the last few blogs were already written and in the can before the accident, this may be the last posting I can make for another month or so. I have to take these things one at a time.

Sunday, January 19, 2020

The Writer’s Job

Midnight writer

I have been pursuing this profession—writer and sometimes editor—for almost sixty years now. I first got the inkling1 when I was about twelve years old and attempted my first novel.

That was a fragment, not even a complete story, about a steam locomotive and passenger cars in the Old West that pull into the station with everyone on board dead. It was a fine setup for a mystery, except that I didn’t understand at the time that first you have to know what happened, then you wind it back to what the reader—and presumably the detective, of which I had none—first learns. So I had a setup with no premise.2 But it was a start. I wrote out what I had on an old Royal portable typewriter that was in the family, created a cardboard-and-crayon cover, and stitched it together with yarn. It was a rude start, but I was on my way.

What drew me to writing, when I knew nothing about it, was that a writer—specifically a fiction writer, specifically a novelist—could apparently work for himself from home, rather than for somebody else in an office, and could count his time as his own, rather than keeping to somebody else’s schedule. Well, it was a dream. But it fit my bookish and solitary nature. Besides, it was clean, literary, intellectual work, and you didn’t have to hurt anybody.3

My second novel, written at age sixteen, was a much grander affair: with a first draft in fountain pen on white, lined tablets; second draft typed double spaced, with margins, two copies using carbon paper on my grandfather’s upright Underwood, just like the publishers wanted; and running 472 pages, or about 60,000 words, all told. It was a dreadful space opera about a renegade university professor and rebel leader against an interstellar empire, with a romantic subplot. It had a beginning, middle, and ending—and I knew even as I finished it that the damn thing was unpublishable.4 But the effort was what counted, and it got me fixed on my present course.

My novel career paused when I went to college and studied English literature. I had no ideas for another book—having been emotionally drained by the space opera—and was too busy anyway with my studies and the mountains of reading they required. But that reading gave me perspective on literature and the language. And all along I had thought that, when I graduated, I would immediately write another novel and make my name with my first published book. I had dreamed that I would support myself with fiction writing.

But about three months before graduation I took mental stock and realized I still had no usable ideas, nothing to say. This is not surprising, because few people in their early twenties have much to say to the adult world—which was my preferred venue—and the market for Young Adult literature is limited. So I was suddenly faced with the realization that I needed a “day job” to support my imminent debut into the real world.5 And what I was best qualified for was work as an editor. Through the graces of one of my professors, I got a junior position at the Pennsylvania State University Press. It was eight hours a day on my butt with a blue pencil, correcting spelling and grammar, untangling sentence structure, and marking copy for typesetting, all according to the Chicago Manual. But I loved it. After the university press, I went to a trade-book publisher—where I learned about that railroad tragedy, and much else about the West and my newly adopted state of California—and from there to technical editing of engineering and construction reports and proposals.

My third unpublishable novel came about in my late twenties, while I was working for the engineering company. Based on the time-honored mantra of “write what you know,”6 I tried to write a business novel based on the scramble of a second-tier construction company to answer a request for proposal from a major client for a mega-million-dollar mining development for a Third World client.7 That book progressed as far as a rough first draft, although I never sought a publisher.

In the meantime, I went from engineering and construction, to a public utility providing electricity and gas, to an international pharmaceutical company, to a pioneering maker of genetic analysis instruments, with stop-offs working as a temp in administration at two local oil refineries. In each case, I worked first as a technical writer—learning the secrets of the company’s respective industry—and then moved into internal communications—explaining the company’s business to its employees. And in every case, I was building myself an understanding of and intimacy with the business world and its technological basis, an understanding that I have been mining ever since as background for most of my novels.

So … what is the writer’s job in all of this? It is the same, pretty much, whether the task is editing another writer’s work, or creating and editing technical documents, writing newsletter articles and press releases, or writing a full-blown novel, whether a historical fiction or far-future adventure.

First—and especially in fiction—take the reader somewhere new, show the reader the unique side of an everyday life situation, or of a product or technology, something that he or she has never considered before. There are two ways to approach a story: you can come at the topic head-on and flat out, or from an oblique angle and on the bounce. Think of the latter as putting “English” on the ball, making it spin. That slightly-to-one-side approach puts the reader’s natural defenses off guard and simultaneously raises his or her curiosity. This also works for a new product description or policy analysis—although not in a tightly prescribed document format like a pharmaceutical batch record.

Second—especially in technical writing, communications, and non-fiction editing—make the obscure explicit and the confusing understandable. It is an article of faith with me that nothing is so complex that it cannot be made intelligible to a bright ten-year-old child. But you have to use simple words, work in everyday analogies, and take some extra steps and make some supporting linkages in your reasoning. And you have to use that bounce thing described above to make the reader care in the first place.

Third—and this applies to all writing and editing—be rigorous in your logic and sequence, and honest in your evidence and conclusions. You are invading the reader’s mind, and this is hallowed ground. You can play with the reader’s perceptions and trick his or her understanding in the same way that a magician’s sleight of hand arouses an audience’s awe and wonder. But you can’t lie to the reader or offend his or her senses of fairness, right and wrong, or proportion. And you can never disrespect the reader. For you are playing in another person’s sandbox and, if you offend, will be asked to go home with a slamming of the book.

Fourth—and this applies to almost all types of writing, except perhaps for instruction manuals—paint pictures and tell stories. The human mind is not exactly immune to bare facts, but we have a hard time understanding them and fixing them in memory without a context. This is why storytelling and visual representation have been so powerful in almost all human cultures. This is why religious groups and political parties create a “narrative” to support their core truths. Your job is to create in the reader’s mind a structure made of words, mental images, and associations that carries your message.

To be a writer is to be, effectively, inside somebody’s head by invitation. Play nice. And have fun.

1. What a writerly word! You can almost smell the printer’s ink and hear the presses hum.

2. Curiously, I was foreshadowing one of the tragedies of early railroading, when the trains of the Central Pacific Railroad had to navigate miles of mountain tunnels and snow sheds in the Sierra Nevada, and the accumulated coal smoke asphyxiated their engine crews. From this was born the generation of oil-fired, cab-forward steam engine designs, which worked that route for years.

3. Except, of course, your characters—which I also didn’t understand at the time. But they are only made of smoke and dreams.

4. Anytime you hear about anyone writing a brilliant first novel, count on it being their second or third completed manuscript. Even Harper Lee had to go through this process.

5. Back in my teens, when I was working on the space opera, I wrote to one of my favorite authors, Ray Bradbury, asking if he would read my novel. He politely declined, which was a blessing. But he then suggested that, to become a fiction writer, I should not go to college and instead get a menial job as a dishwasher and just write, write, write, and submit everything to publishers. That course for me, as a teenager, would have been a disaster. Given my subsequent history of real-world, practical experience, I don’t think it would have worked out any better for me as a college graduate.

6. Which is a trap. The command should be “write what excites you, that you know a little something about, but you want to know much, much more.” If your current life is dull—that future as a dishwasher toward which I had been urged—it shouldn’t limit your scope and imagination. And these days, with all of the online resources available, research is easy, right down to getting street views of any city and most towns around the globe.

7. Well, not every idea is a good one. However, some of that story—and so much more—found its way into Medea’s Daughter forty-odd years later.

Sunday, January 12, 2020

Memory and Imagination

One hand

As I think through the way I go about constructing the elements of a story line for a novel and then determining the next actions, images, details, and anecdotes that will support the plot, I have made an interesting discovery: these things pop into my mind—presumably from the subconscious1—in the same way that random memories come to mind.

Remembering in this way is different from sitting down, at least mentally, and asking myself what I can recall from a past experience: my graduation day, my wedding day, or any memorable event that is supposed to “stick in the mind.” No, this is the sort of sense, image, or snatch of conversation that surprises me when I’m doing something completely different: a flash of the roadway where I was driving on a trip three years ago, or the image of a house I once visited, or a fragment of what someone once said to me. These things come “unbidden” and when they are least expected.

The difference with the ideas, images, and dialog fragments that come to me for my work in progress is, first, these imagined things did not happen, have never happened, and usually have no relationship to—are not an echo of—anything that has ever happened to me. Second, and unlike the act of forced recall, I find myself unable to sit down, mentally, and ask myself about what should happen after the starting point of a story, or what image or sound or sensory perception would best illustrate the next action scene, or what the characters would say in the situation. I can pose to myself the question of what should happen next, but then I have to go off and take a walk or bike ride, or get my morning shower (warm water hitting my right shoulder and the side of my neck seems to be especially conducive to creative thoughts), and otherwise let my subconscious mull the problem. And then later, out of nowhere, like an unbidden memory, I know what the next action, image, or dialog bit should be.

So it’s apparent that both memory and imagination—at least in my case, and in dealing with a developing story—come from the same place, the subconscious. But the one is a reflection of something that actually happened, while the other is a bit of imagery or dialog with no context other than the mental structure of the novel I’m writing at the time.2

I have noted for many years, going back to my time as a teenager, something I call “the Return.” This is an instance of my hearing a piece of music and then later—two to four days, usually—having it pop into my mind, again unbidden. I will find myself singing or humming the melody or, more often, just “hearing” it in my head. I have also noted that the speed of the Return is linked to my mental and physical health. When I am in good spirits and feeling well, the music will come back in two days—but seldom less. When I am feeling poorly, the return might take four days or not happen at all.

For comparison, the return of a random memory—a place image or a conversation—that occurs while I’m doing something else might take two to a dozen years, while the subconscious production of a scene, action, image, or dialog bit that I have posed to myself for consideration generally takes less than two days. So the two processes—memory and imagination—while alike, are not identical.

Interestingly, none of this has anything to do with dreaming, which apparently comes from another part of the brain. While I have vivid, busy, and sometimes frustrating dreams—but seldom what you would call nightmares—few of them have to do with specific memories, although some are linked to places in my memory, specifically two of the houses we lived in when I was a child. No dream has yet produced anything related to one of my books—not a usable plot point, image, or bit of dialog. However, I occasionally wake up, after dreaming whatever dreams I had, and suddenly get an idea that might help the current book along. Only once in all my experience did I dream of a character in one of my books, and that was my first novel, written as a teenager, and the dream occurred about two years after I finished writing it.

All of this might mean something to a psychologist, or maybe a neurologist, but to me this is just the way my mind works. And the most important conclusion—at least for me—is that I cannot force the construction of a plot or of any particular part of a story. I have no secret list of plot outlines,3 no mechanism for character generation, nor other out-of-the-box novel-writing aids. The story comes from the dark—as in hidden or unilluminated, rather than foreboding or evil—places of my mind.

The story, characters, inciting incidents, and resulting actions have to come from someplace else—outside me, inside but hidden … the gods … the stars—in order for me to believe in them as real people doing real things that have meaning. Yes, I know that they are made up—but I have to forget that part for the story to live inside me. And once I have the outline—that composite of structure and belief—as well as a notion of the associated actions, sense images, and dialog, I can let the magic happen.

Magic? That’s when I sit down at the keyboard, put my mind on its invisible track, forget myself, and let the story flow through me.

1. See Working With the Subconscious from September 30, 2012.

2. Curiously, works that I wrote in the past, stopped writing, and put an end to, do not generate new imagery or dialog. I might suddenly wake up and recall a word, image, or dialog line that I once put on the page and now know to be an error, or lacking, or incompletely linked to something else in the novel—and usually this is still related to the work in progress. But I don’t generate new ideas and directions for the old books. My internal mental process seems to know when what’s done is done.

3. I’ve heard it said that there are only seven plots in all of storytelling. If so, I’d like to know what they are. Whenever I hear this, the examples given are so general (“Well, there’s boy-meets-girl …” or “the hero’s journey …”) as to be practically useless. Once you’ve picked that type of story, then the real work of generating setting, character, incident, and action begins.

Sunday, January 5, 2020

Twenty-Twenty

Crystal ball

The first time I had occasion to write the calendar date “2020” on anything was a few weeks ago, in preparing the draft of the January-February edition of our local NAMI chapter’s newsletter. And as I keyed in those digits, a chill went through me. It was as if I knew, in that sleepless but dormant part of my brain that reads the tea leaves and maps them onto the stars, that this year would be significant.

Oh, I know, we’re going to have a watershed election. And given the political emotions that are running in this country, the campaigning, the election day, and the aftermath are going to be traumatic for one group or another. But that was not the reason for my chill. This was more a moment of existential dread.

Twenty-twenty marks the start of something, but I don’t know what. Well, yes, in one way, we are all past the dystopian vision of the year 2019 as given in the movie Blade Runner, and we still don’t have Aztec-style corporate pyramids and towers shooting flames into the murky night sky of Los Angeles. (And my car still doesn’t fly, dammit!) So that’s a plus for the new year. But that’s not what concerned me.

I can remember, from thirty-odd years ago, hearing a speaker from The Economist describe how the character of each decade doesn’t change, or become consistent and recognizable, in the year ending in zero but about four years into the new decade. So the decade of the 1950s didn’t really end, and the 1960s start, until 1964. By then we’d been through the Cuban revolution and our ill-fated invasion, the first moves in the war in Vietnam, and the assassination of a beloved president. And then we had the Sixties.

In similar fashion, the Sixties didn’t end in 1970 but in 1974, with the culmination of that war and the impending impeachment and resignation of the president who came in promising to end it. And the economic malaise—recession plus inflation, “stagflation”—that started with the oil embargo and gas crisis of that year marked the Seventies and didn’t really end until about 1984. (Another year whose mention will give you the willies.) And then the computer revolution and economic growth of the Eighties, which started with wider acceptance of small desktop computers and personal software, fueling the “tech stock” market boom, changed the economic structure of the country and continued through the early Nineties.

You can define your own parallels to this theory, in cultural norms, music, clothing styles, and sexual mores, but I think the pattern holds true. Decades don’t change on the zero year but about four years later.

I think something similar happens with centuries. But there the change comes not in the fourth decade but in the third—the years counted in the twenties.

The 18th century, which was marked—at least in the Western European sphere—with the wars between England and France, culminating in revolution in the American colonies and then in France, the rise of Napoleon, and the struggle for all of Europe, extending to the shores of the Nile and the heart of Moscow, did not really end until the middle of the eighteen-teens. Similarly, the breakaway of the United States from England and the finding of this new country’s own character did not really end until the War of 1812. After that decade, the 19th century came into its own.

And then, the Victorian, imperial, colonial über-culture that—at least in Western Europe—took the superiority of one race and one extended culture for granted, did not end until the shattering madness of World War I. And what came out of that war was the new century, with a perhaps more enlightened view about races and cultures—at least for some people—but also a clutch of hardened counter-ideologies and the technological means to pursue them to horrific ends. After that first global war, the 20th century came into its own.

And finally, the 20th century has been with us ever since. The fall of the Soviet Union and putative end of the Cold War in 1989 was a highway marker, but the effects of that lingering aggression and its bunch of little colonial brush wars and invasions (Korea, Vietnam, Grenada, Kuwait, Iraq, Afghanistan) continued along the same lines, although those lines were becoming blurred with the rise of Arab and Muslim nationalism and the flexing of Chinese muscles. And over them all has loomed the technological changes in warfare that started in World War I, with the machine gun and chemical agents, and continued in World War II, with the jet engine, the rocket, and the atomic bomb. The century saw war become less about armies marching against each other on a common battlefield and, because of “the bomb,” more about political and ideological maneuvering, guerrilla techniques, and terrorist tactics.

You can define your own parallels to this theory with the way that political and cultural norms, music, clothing styles, and even sexual mores changed in the 1920s. The theory still holds: centuries don’t change on the zero-zero year but about twenty years later.

So, although we have been living in the 21st century for the past two decades, it still feels like an extension of the 20th century. We have the same international tensions, the same small wars in far-off countries, the same political differences at home. And, aside from the power of my handheld phone and the number of small computers controlling everything from my car’s ignition to my coffeepot, this century doesn’t feel all that different from the one in which I was born and raised. But still, I sense big things coming, and hence the existential dread.

What are these “things”? My internal leaf-reader and star-mapper doesn’t know. But my sense as a science-fiction writer can offer a few guideposts.

First, the technical revolution that brought us the small computer as a workhorse in everyday life will bring us artificial intelligence in the same role, but maybe more aggressive. And unlike the one big IBM 360 that sat in the basement of the computer science lab at Penn State in the 1960s and ran most of the university’s administrative functions on campus, or the one all-pervasive “Skynet” that destroys the world in the Terminator movies, artificial intelligence will take the form of distributed entities, functioning everywhere. As soon as we work out the neural-net (i.e., capable of learning) architecture and marry it to the right programming, these intelligences will proliferate through our technology and our society like chips and software.1

But don’t think of “intelligence” as being a human-type soul or a little man in a software hat or even a Siri or Alexa. You won’t be holding a verbal discussion with your coffeemaker about whether you feel like dark or medium roast this morning. Instead, you will find that your world is going to have an eerie sense of precognition: answers and opportunities are going to come your way almost seamlessly, based on your past behavior and choices. Your phone is going to become your life coach, trainer, technical expert, and conscience. This trend is only going to expand and multiply—and that’s just on the personal level.

On the macro level, business operations and relationships, lawyers and judges, doctors and hospitals, and any sphere you can think of where knowledge and foresight are valuable, will change remarkably. You won’t go into a contract negotiation, a court appearance, or a diagnostic session without the expert system at your elbow as a kind of silicon consigliere. We’ve already seen the Jeopardy-playing IBM Watson prove itself as a master of languages, puzzles and games, and historical, cultural, scientific, and technical references. The company is now selling Watson Analytics to help manage business operations. This trend is only going to expand and multiply.

Second, the biological revolution that brought genetics into the study of medicine—and the sequencing of the human genome was completed on the cusp of this 21st century—will see a complete makeover of the practice. In this century, we will come to know and understand the function of every gene in the human body, which means every protein, which means almost every chemical created in or affecting the human body. That will change our understanding not only of disease but of health itself. Sometime in this century—all of the ethical handwringing aside—we are going to be modifying human genes in vivo as well as in egg, sperm, and embryo. From that will come children with superior capabilities, designer hair and eye color (just for a start), and stronger immune systems, among other things. The definition of “human” will be rewritten. Adults will benefit, too, by short-circuiting disease and regaining strength in old age. This trend is already under way with gene therapies, and it will explode in practice and popularity.

Moreover, the nature of our material world will change. Already, scientists are examining the genetic capabilities of other organisms—for example, Craig Venter’s people sifting the seawater on voyages across the oceans, looking for plankton with unique gene sequences—and adapting them to common bacteria and algae. You want a pond scum that lies in the sun and releases lipids that you can then skim up and refine like oil? That’s in the works. You want to harvest beans that have valuable nutritional proteins, bleed red, and taste like meat? That’s in development, too. You want synthetic spider silk and plastics as strong as—or stronger than—steel? You want carbon graphene sheets and Bucky-ball liquids that have the strength of diamonds, the electrical capacitance of metals, and the lightness and flexibility of silk? Just wait a few years.

As they say in the song, “You ain’t seen nothing yet.”

1. See Gutenberg and Automation from February 20, 2011.

Sunday, December 29, 2019

Writing Ourselves into a Box

Time warp

Maybe … contrary to our theories, the universe is not uniform throughout, or not at all scales. Maybe gravity is not a constant at all scales but has different values at different scales.

We know that the force of gravity appears to be nonexistent at the subatomic scale, apparently playing no part in quantum mechanics. We can measure the force of gravity and use gravity equations at the planetary, solar, and interstellar scales, where it governs the attraction between bodies with which we are familiar. But at the galactic scale, gravity is apparently not strong enough to produce the effects we observe. Stars in a spiral galaxy spin as if they were stuck to a platter instead of floating freely, which suggests each galaxy contains more matter than we can account for by the stars that shine and the planets, brown dwarfs, and black holes that we presume must accompany them.
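
That rotation argument can be made concrete with a little Newtonian arithmetic. In the sketch below (Python, with a purely illustrative “visible” galactic mass and a simple point-mass model—both my assumptions, not astronomical data), orbital speeds should fall off as one over the square root of radius if the luminous matter were all there is, whereas measured rotation curves stay roughly flat:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 1.5e41  # illustrative "visible" mass of a galaxy, kg (assumed)
KPC = 3.086e19      # one kiloparsec in meters

def keplerian_velocity(r_m: float, mass_kg: float) -> float:
    """Circular orbital speed around a central point mass (Newtonian)."""
    return math.sqrt(G * mass_kg / r_m)

# If visible matter were all there is, orbital speed should fall off
# as 1/sqrt(r) beyond the luminous disk ...
for r_kpc in (5, 10, 20, 40):
    v = keplerian_velocity(r_kpc * KPC, M_VISIBLE)
    print(f"r = {r_kpc:>2} kpc -> predicted v = {v / 1000:.0f} km/s")

# ... but measured rotation curves stay roughly flat out to large radii,
# implying extra unseen mass far beyond the visible stars.
```

A real galaxy’s mass is spread through a disk and halo, so this point-mass model is only a caricature; but the qualitative conclusion—flat curves require unseen mass at large radii, or else different gravity—survives the refinements.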

Instead of considering variable gravity, we hypothesize that something else must be at work: a substance called “dark matter,” which does not interact with normal matter or electromagnetism at all but has a measurable gravitational effect—apparently only at very large scales. The effect on galaxies is so pronounced that we credit this “dark matter” with comprising about 85% of the matter in the universe and roughly a quarter of its total energy density. All this for something we cannot see, touch, feel, or detect—only “observe its effects” at very large scales.

Perhaps we are wrong in our assumptions about the gravitational constant being fixed at all scales larger than that of quantum mechanics.

This kind of “wrong” thinking—if it is wrong, and perhaps “dark matter” actually does exist—has happened before. During the 19th century, geologists debated between “catastrophism” (the theory that geological changes in the Earth’s crust were created by sudden and violent events) and “gradualism” (the theory that profound change is the cumulative product of slow but continuous processes). Gradualism would be the slow accumulation of sediments to form rocks like sandstone and shale. Catastrophism would be sudden events like volcanic eruptions, or the carving of the Snake River Canyon when the earth- or ice-dam holding back Lake Bonneville collapsed about 15,000 years ago. What has since been resolved is that either theory on its own is too constricting, too reductive, and that both processes played their part in the formation of the Earth’s surface.

This is probably not the case with gravity. A variable gravitational constant vs. the existence of dark matter is probably an either/or rather than a both/and situation. One kind of physics that explains the observations does not necessarily make room for a second kind to explain them. However, it wouldn’t surprise me if each galaxy contained more normal matter that we can’t see by either the ignition or the reflection of starlight—that is, black holes, brown dwarfs, and rogue planets—than we have habitually counted. And still there may be something going on with the effects of gravity at the very small and the extremely large scales that is out of whack with the gravitational constant we see at the planetary and stellar scale.

Time and distance are variables that we have only been seriously considering over the past hundred years or so—ever since Edwin Hubble exploded the idea that the local stars in the Milky Way galaxy were all that comprised the universe and instead introduced the idea that the fuzzy patches (“nebulae,” or clouds) on photographic plates were actually distant galaxies that are not much different from the Milky Way, and that they all were receding from us. In one conceptual leap, the universe, the cosmos, all of creation, became really, really—add a few exponents here—big.

But the universe did not become correspondingly old. We still had various measurements and computations putting the age of the oldest stars and the universe itself at approximately thirteen billion years. That’s about four generations of stars from the creation of matter itself to the Sun with its disk of gas and dust that contains elements which must have formed in the fusion processes and catastrophic collapse of older stars. By winding back the clock on the expansion, the scientific consensus came up with a universe that started in the explosion of, or expansion from, a single, tiny, incredibly hot, incredibly dense—add more exponents here, with negative signs—point. That singular point contained all the matter we can see in the hundreds of billions of galaxies within our view.
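
That “winding back the clock” is, at its crudest, simple division. In the Hubble relation, recession speed is proportional to distance, so distance divided by speed yields the same timescale for every galaxy—a rough age for the expansion. A minimal sketch in Python, assuming a round-number Hubble constant of 70 km/s per megaparsec:

```python
H0 = 70.0              # Hubble constant, km/s per megaparsec (assumed round number)
MPC_KM = 3.086e19      # one megaparsec in kilometers
SEC_PER_GYR = 3.156e16 # seconds in a billion years

# Naive "wind back the clock": time = distance / speed = 1/H0
hubble_time_s = MPC_KM / H0
print(f"Hubble time: {hubble_time_s / SEC_PER_GYR:.1f} billion years")  # 14.0
```

The consensus figure (about 13.8 billion years) comes from far more careful modeling of how the expansion rate has changed over cosmic history, but the naive division lands surprisingly close.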

Because that view comprises a distance that the farthest galaxies could not have reached in thirteen billion years, even when receding from a common point at the speed of light, a scientific consensus originating with Alan Guth has formed around an “inflationary” universe. At some time, the story goes, at or near the very earliest stages of the expansion, the universe blew up or inflated far faster than lightspeed. Perhaps from a hot, dense thing the size of a proton to a cloud of unformed matter probably the size of a solar system, the whole shebang expanded in a blink—whereas a photon from the Sun’s surface now takes about five and a half hours to reach Pluto.
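
The Pluto figure in the paragraph above checks out with back-of-the-envelope arithmetic, assuming Pluto’s average distance of roughly 39.5 astronomical units:

```python
AU_KM = 1.496e8    # one astronomical unit in kilometers
C_KM_S = 2.998e5   # speed of light, km/s
PLUTO_AU = 39.5    # Pluto's average distance from the Sun, in AU (assumed)

travel_s = PLUTO_AU * AU_KM / C_KM_S  # light travel time in seconds
print(f"Sun to Pluto: {travel_s / 3600:.1f} hours")  # about 5.5 hours
```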

While I don’t exactly challenge the Big Bang and the inflationary universe, I think we are reaching for an easily understandable creation story here. We want to think that, because we know the universe is now expanding, and because our view of physics is that basic principles are the same in every place and at every scale, and that processes continue for all time unless some other observable force or process interferes with them, then the universe must have been expanding from the beginning: one long process, from the hot, dense subatomic pencil point that we can imagine to the vast spray of galaxies that we see—extending even farther in all directions than we can see—today. And with such thinking, we may be writing our imagination and our physics into a box.

The truth may lie somewhere in between. And I don’t think we’ve got our minds around it yet.