Sunday, September 18, 2016

A World Without Borders

It sounds lovely. A world without borders—those imaginary lines which we also think of as barriers. A world where people can travel without the intrusions of government interfering with a person’s right to go where he or she wants and do whatever suits his or her needs—and for “intrusions” read pieces of paper, arranged and signed ahead of time, inspected by men with guns, and the inevitable waiting period, usually in a room with a door that locks. If we could only eliminate borders and barriers, then we could eliminate the paperwork, the men with guns, and the locks. Then people would be free to pursue their dreams.

But you would then also necessarily have a world in which no one would be able to hold onto a fixed address. It would be—or rapidly become—a world with no stable form of government, no organized rules of commerce, no property rights, no right of ownership to anything that can’t be carried in your pockets. It would be a world with no assurance that people and their families who have stayed in one place for generations would not become homeless by next week.

Why do I say all this? Because a world where anyone can go anywhere without restrictions, where a person who just walked across an imaginary line is just as good—that is, has just as many rights and just as much say in outcomes—as the people who have lived there for generations and invested their time, effort, and wealth into building up the infrastructures, trading patterns, and cultural values of the place, is a world without the natural and obvious distinctions between builders and drifters. In such a world, people will tend to form ad hoc, temporary associations to gain power and possessions for their group’s members at the expense of anyone else who gets in their way. Ultimately, the boldest and the bravest—usually those most willing to give up what they have always known in their search for something better—will prevail.

In short, you would return to the world of the homeless wanderers that existed before about 3,500 BC, before people began settling in the Indus, Tigris, Euphrates, and Nile river valleys, staking out fields for plowing and sowing, arranging ditches and gates for irrigation, learning to read and write, starting up a civilization, and learning to be comfortable under its rules and restrictions. You would return to the world where the biggest gang, the collection of the toughest and most aggressive individuals, gets to sit down wherever it wants, sleep in armed camps wherever its leader chooses, and eat whatever comes to hand.

A world without borders is a world without a functional civilization. It is a world without citizenship, of people who owe allegiance to no nationality or culture—except some vague and unresponsive “brotherhood of man,” or perhaps to a distant and squabbling, self-proclaimed organization like the United Nations or the European Union, which is full of good ideas for how everyone else should live but with very little practical experience on the ground.

I know that’s not what the idealists envision. The vision of a world without borders was created in the 20th century after two horrendous wars that seemed to be virtually without borders themselves. First came the League of Nations in 1920, arising out of the Paris Peace Conference that ended the 1914-18 European war. And when that organization proved ineffective, the United Nations arose to prevent another conflict like the 1939-45 war, which engulfed pretty much the whole world. The idea was that if all the nations of the Earth could come together peacefully to discuss and resolve their differences, humankind could eliminate the need for war.

But the question was always one of sovereignty. For such an organization to be effective, to enable it to establish and enforce its mandates, the member states must give up some—if not all—of their separate rights and responsibilities, just as single human beings give up some of their personal rights and responsibilities in order to claim the protection of a state or nation. The proposition for creating such a worldwide government is that nations are analogous to—and not essentially different from—individual people, just on a larger scale. But is this true?

A person—at least one whom society considers to be in possession of his or her faculties—has a single identity and the ability to form fixed intentions and follow through on them. By the time they are adults, people as individuals can have fully formed ideas that they are unlikely to change as they age further. A person can decide to be—and remain, for the rest of his or her natural life—a good citizen, a reliable father or mother, a hard-working employee, a steady church goer, or a loyal party member. Circumstances may change, calling for the individual to try out new ideas and test new values. But the person usually remains true to one set of ideas, values, and commitments for most of his or her working life. Exceptions exist, of course, but they do not disprove the general nature of human psychology.

Nations are not necessarily like this. Neither is any large group of people who have come together over time. Putting aside issues of “national character,” which are usually impressions and stereotypes about a culture gained by outsiders—Italians are excitable, Germans are sober and dour, Frenchmen are passionate—the nature of any group is fluid. People as individuals may have relatively fixed ideas and values, but as groups they tend to discuss, disagree, and influence one another, and they can form only a slippery and temporary consensus. That consensus may represent a government in power, making laws and creating institutions that have the appearance of an individual’s fixed commitments. But governments fall from power and their commitments change over time—relatively slowly and peaceably in the representative democracies, suddenly and harshly in oligarchies and dictatorships.

A single person might commit to join a nation and live under its laws. But a nation—the collective and changeable will of a large group of individuals—cannot make such promises.

The notion that, over time, a world government will emerge and the individual states which currently meet and debate in the United Nations or European Union will wither away from useless redundancy is fanciful. The idea is just as fanciful as the notion in economic Marxism that, once communism has been firmly established and people are peaceably trading their personal labor for goods and services, the revolutionary state which established this condition will simply wither away from having nothing else to do. But that is not human nature. History doesn’t come to an end, and people don’t give up their personal interests and political advantages, just because they can’t think of what to do next.

Perhaps, with our current world’s continuing developments in technology, with global and instant communications, with a fairer distribution of natural resources and the fruits of education and science, and with institutions and infrastructures which will equitably provide goods and services to all the peoples of the Earth … perhaps then we will see national distinctions fall away and people on every continent become citizens of the world. It’s a nice idea, but I doubt it will happen. In just one area—global communications, represented both by networked broadcast services, which send common ideas and values out to the mass of people, and by networked social media, which allow people to exchange and discuss their reactions to those ideas among themselves—the proposed unity has not developed. Instead, social media have allowed new groups of like-minded people to form. These groups may no longer be bound by geography and personal acquaintance, but they still coalesce around shared real-world experiences. It’s a nice idea that people will think globally, but they will continue to act locally and in relation to what they know, think, and believe.1

I can imagine only two conditions that would support a global association of humanity under a single world government. The first would be when our descendants form colonies and associations on other planets in the Sol system and out among the stars. This is well described in James S. A. Corey’s Expanse series. When Luna, Mars, and the Belt all have their own governments, shared cultures, developing languages, and genetic drift, Earth will, in response, come together as a single political entity. The second condition would be our discovery that human beings are not alone in the universe. This is a standard theme in science fiction: when the aliens come down from the stars, either as peaceful traders and teachers or as ravaging conquerors and usurpers, the distinctions among human-type people will fall away and we will, in response, become one culture, one civilization.

In the meantime, who benefits from promoting a “world without borders”? I mean, apart from the naïve idealists who live with their heads in the 23rd century?2

The first kind would be the people who have their own borders well secured, thank you very much, but would like a stake in the land behind yours. Back during the Cold War, when the Soviet Union and the People’s Republic of China both imagined that their chosen system of government had the destiny of conquering the world, various sympathizers and fellow travelers promoted the ideal of a world without borders. They looked forward to world peace at the price of submission to an ideology that came from beyond the borders of Western civilization.

The second kind would be people who despise the civilization they see around them—perhaps because they have not been successful in it, perhaps because they want to shortcut the political process—and want to see it gobbled up by a wave of foreign invaders. They imagine these invaders will provide the instability and the political liquidity to dislodge the power structure which they despise but their fellow citizens are too dull and stupid to throw off themselves. They want a revolution but lack the guns, the organization, and the numbers to bring one about. So they believe that encouraging immigration en masse will create better conditions for their purposes.

Neither of these motives is one that I admire or subscribe to. At heart, I am something of a libertarian, believing people should be allowed to go where they want and do what they need. But I also know that human beings, like all the great apes, are social beings. We need and want to find our own place, among people with whom we can find agreement and common cause. We want to build something that we can preserve, protect, and pass along to our children. And we value those children as the product of our own genes and extension of our own lives, rather than as vaguely deserving “citizens of the world.”

Maybe one day—with enough enlightenment, technology, and freely available goods and services provided through unlimited energy resources and automation—we can walk across those invisible lines and settle anywhere we find that’s pleasant and accommodating. But that day has not arrived. And it may not until the 23rd century.

1. In this, I am reminded of two more examples. The first is the drift of languages, as described in John McWhorter’s The Power of Babel: A Natural History of Language. Over time, human languages tend to develop differences among speaker groups. A powerful political influence, like that of the Roman or British empire, can temporarily bring people together to speak a common language. But when that influence falters or wanes, as did both Rome and Great Britain, people will go back to creating their own local dialects, word usages, and shared colloquial meanings. McWhorter points out that Italian, French, and Spanish are nothing but Latin that has been left alone in different and relatively isolated places for speakers to develop their own idiom. And each of those languages that we think of as unified wholes—again, Italian, French, and Spanish—is actually a collection of local dialects, like langue d’oc and langue d’oïl in medieval France, which over time will develop into separate languages of their own.
       In similar fashion, large human associations with developed trade routes and easy movement among disparate cultures, like the Mongol Empire, can mix up the gene pool and over time create a heterogeneous population. See, for example, the wide spread of Genghis Khan’s Y chromosome. But let that empire fall and the associations wither, and people will go back to socializing with their near neighbors and second cousins. And then individual variations, like red-haired Scots and blond Scandinavians, will emerge from the mix.

2. A place I’ve been known to go and visit from time to time.

Sunday, September 11, 2016

Cruel Fate

It has long been observed that evolution is no respecter of individuals. The process has given us quadrupeds both as graceful as horses and as sturdy as elephants; invented flight numerous times with species as different as dragonflies, hawks, and bats; brought lungfish to walk on the land and whales and dolphins to swim in the ocean. Evolution in one way or another has created everything you can see on this planet, including the contents and colors of the sea and the sky. Evolution is the engine of creation, but it cares not a bit for the individual of any species. It works on the flow of genes but it lacks direction—except toward what works and survives. And along the way, that flow will discard a hundred, a thousand, a million failed attempts—all of them individual beings with otherwise developed potential.

This is a hard thing for most people to understand. We instinctively want a creator which—or, in most minds, Who—cares about us. Not just the human species as a curious kind of experiment along the way from monkeys to post-apocalyptic apes, but humans as some peak of attainment. We want to see our kind as rising toward a level of genius, awareness, independence of thought, and freedom of action that was prophesied back when the first microbial cell divided and differentiated in the primordial, comet-fed seas. This yearning for attention is part of our mammalian and human heritage, based on our being born as helpless, half-formed embryos with enlarged primate skulls too wide to gestate fully in the womb and then pass through the narrow primate birth canal. We are totally dependent in our earliest years on loving parents to feed, protect, and teach us. And when mother and father themselves prove to be all too human and fallible, we look to the sky for a loving bringer of order and to the earth for a nurturing presence.1

Moreover, we look for a creator that had us, ourselves, our own personhood, in mind when we were born. With our random gift of self-awareness, we humans each believe that we, as individuals, as the person following the particular life course we’ve chosen, with the dreams of childhood behind us and the ambitions of adulthood still driving us forward, have a unique place in the creator’s purpose. We want to be loved. We want a force stronger than ourselves to tell us that we will win the race, achieve our goals, obtain the love and respect of our family and peers, that we matter in this life.

The notion of that creator as the engine of evolution, which randomly hands out helpful or harmful genetic mutations before we are even born, which often dumps us into environments that may be both physically and—for us now, with our bigger brains and calculating self-awareness—psychologically productive and sustainable, or not, and which dooms a large fraction of our peers to random accidents, diseases, and death … we find that notion hateful. The universe is not supposed to work this way. Our mothers assured us we would be safe. Our fathers fought to make us safe.

The truth is even worse than that. Life itself and the history in which we place such intellectual store are both crapshoots.

Think of our great philosophers and teachers—Pythagoras, Aristotle, Buddha, Christ. Think of our brightest minds—Newton, Goethe, Einstein. Think of our history-changing leaders—Moses, Caesar, Genghis Khan, Napoleon. All of them are accidents in the sense that they happened to be born with the right kind of intellectual or emotional capacity, placed in the right familial and societal environments, either encouraged or simply allowed to develop and exercise those personal gifts, and survived long enough to begin applying them successfully.

Certainly, in an army as large as Napoleon’s—more than half a million men in the Grande Armée at its peak—there must have been four or five other men with the native charisma to inspire those around them, the organizational capacity and memory to know the quality and current status of hundreds of fighting units both during the march and on the battlefield, and the imagination and vision to conjure up campaigns for them to pursue and strategies that would enable them to win their battles. But those four or five other men, who will never be named, either were born into humble homes and never attained an officer’s rank and training, or they died of wounds—or more likely dysentery—in their first or second campaign. History has given us a Napoleon and a Wellington. There might well have been other and better men. We will never know.

Something of this kind actually happened in the late 17th century, when the English physicist Sir Isaac Newton and the German mathematician Gottfried Leibniz independently came up with mathematical systems that became our modern system of calculus.2 They began working on their ideas at different times and, although each man left manuscripts or published papers at different dates, there is no convincing evidence that one copied from the other.

Interestingly, a third mathematical system was created much earlier but only survived in a Byzantine manuscript that copied out the works of Archimedes of Syracuse—a parchment that was later overwritten with Christian theological texts. Once scholars recovered the text underneath the religious material, it showed that the ancient Greek had come up with a “method” for solving physics problems for which we now would use calculus. Who knows what advances human engineering and technology might have achieved far ahead of their time if this method of calculation had spread and been used so much earlier in history. The fact that the Archimedes method was written down and survived only once, and then not rediscovered until the early years of the 20th century, is simply a matter of cruel fate.

If human intellectual and creative development is a crapshoot, so is the history in which it operates and which we take to be somehow foreordained and immutable.

We like to believe that the great turning points, the decisive battles upon which the course of history swung like a bank-vault door on a jeweled point, were destined to come out that way. Think of Hastings and the success of the Norman invasion of England, which brought French language and manners to England, and involved the English royal family in French affairs for half a millennium. Think of Waterloo and the success of the English and German armies in stopping a resurgent Napoleon from the reconquest of Europe, and subsequently establishing a hundred years of relative peace on the continent. Think of Gettysburg and the success of the Army of the Potomac in stopping a Confederate march on Washington, DC, from the north, which might have forced an end to the American Civil War favorable to the South and its secession.

From my experience of fighting some of these battles in both tabletop miniature and board games (see War by Other Means …)—the different days of Gettysburg at least four times, and Waterloo and other Napoleonic battles at least once—I can attest that the course of history was not so obvious. In the hands of different generals, and with even slightly different tactical approaches, these battles could have gone the other way. There are no sure things in history.

As a science fiction writer, I wrestle with these ideas. What does evolution look like on other planets? And how might our own planet have developed differently? What great minds might a cruel fate have subtracted from the human past—or added to it—to change the path of our intellectual, emotional, and political development? What turning points, which we see so clearly in our telling of history, might never have occurred, or resulted differently, to redraw the political map of the world?

It’s a fascinating imaginative playground—and one that I explored briefly in the novel The Children of Possibility and am now re-entering with its sequel, due out sometime next year, tentatively titled The House at the Crossroads. The only difficulty, for a writer, is that if one concatenates too many changes onto history, the human experience tends to become unrecognizable for the reader.

But like evolution and fate, a writer’s imagination can sometimes be cruel.

1. As I’ve noted before, the human conception of creation and deity would be very different if our species had arisen from the line of, say, sea turtles instead of the great apes. Hatched in the dry sand, with their first act destined to be a crazed dash toward the surf and the light of the full moon, being picked off twenty or a hundred to one by the waiting seabirds and then by the snapping fish in shallow waters, the surviving turtles would have a much darker notion of the creator’s purpose.

2. Calculus, for those who are as innumerate as I am, is a method for studying changes in a system, such as the area bounded by a continuous, smooth curve, or the effects of changing rates of acceleration on motion. The word is from the Latin for a small pebble used for counting.

Sunday, September 4, 2016

My GO! Button

It seems as if all my life I’ve been pushing on a button in my brain—perhaps linked to the startle response, perhaps to the pituitary gland and adrenaline release—which sets me up to do the things I must. From twelve years in school, and then through college, followed by forty more years in the business world, I have been responding to the needs of the outside world, the commitments I’ve made to meeting them, and to my own demands upon myself.

The alarm rings at four o’clock in the morning—push the GO! button to rouse myself, get out of bed, stumble through my get-ready routine, and sit at the typewriter for an hour or so to work on the novel I am trying to write before leaving for school or work.

When the clock edges up on seven—stop what I’m doing, push the GO! button again, and prepare for the morning commute. For the last ten years of my working life that meant putting on my riding gear, wiping off and wheeling out the motorcycle, and riding through traffic on the two worst commute-hour corridors in the country: westbound I-80 toward the Bay Bridge, then southbound I-880 through Oakland, San Leandro, and Hayward, with the San Mateo Bridge toll plaza jam-up and then its seven-mile, arrow-straight slog still ahead of me.

When I arrive at work, pour my first cup of coffee, sit down to log in at the computer, and while the machine is churning—push the GO! button to deal with an unknown number of voicemail messages behind the phone’s blinking light. These will have collected overnight because this is a global company with people calling or returning my voice messages from the East Coast, England, and Singapore. Ten minutes later, with every voicemail either answered or logged for subsequent action, push the GO! button again to enter the slipstream of overnight emails and deal with every new alert, request, and problem each one brings. Forty minutes later, I can take my second sip of cold coffee and begin the planned part of my day.

As the hour of each scheduled meeting or interview appointment approaches—push the GO! button to prepare my mind, psychologically and emotionally, for the meeting agenda, for the new information and directions the session will likely bring, and the pitfalls it will probably hide, or—equally stressful—for the questions I must ask my interview subject, the amount of blind probing I must do, and the unique personality I must deal with in order to get information for the next article I must write.

If the meeting is the quarterly internal business review with all employees, push that GO! button dozens of times in the weeks beforehand as I prepare slides for the various speakers, make arrangements for the meeting space and video connections, send out companywide announcements and reminders, and remember to order an assortment of refreshments—tempting but not too rich and costly—for the estimated number of attendees. And then one big push as the hour of the meeting approaches and the hall starts to fill.

If the interview subject is a company officer, press the GO! button a couple of extra times to deal with schedule changes, session interruptions, and the Shadow Kabuki–like play of his or her political and personal sensitivities. Even if the officer is known to me from past associations, and even if our past discussions have been cordial and even friendly, the subject itself will be new, and a whole kaleidoscope of novel implications will overlie the results from our previous dealings.

Then, when my schedule opens up and it’s time to write the next article, or prepare the next set of speaker slides, or pull together the next issue of the newsletter or the next refresh of the internal website—press the GO! button to steel my mind for diving into this set of details, driving toward this overarching message, and bending the arc so this story finds a strong, logical, and credible resolution in the reader’s or viewer’s mind.

Finally, when the five o’clock hour, or six o’clock, or sometimes seven or eight, comes around—press the GO! button once more to prepare myself, physically and emotionally, to swing my leg over the motorcycle again and face the reverse commute over that bridge and through those commute corridors from hell. Riding a motorcycle is usually exhilarating, and doubly so when I’m headed home and know there’s no scheduled arrival time for which I must push the travel envelope. Motorcycles in California are automatically allowed in the carpool lanes, and if traffic in all lanes grinds to a stop, I can still split them to get through the jam—although that involves its own repeated pushes on the GO! button: look ahead, figure the available width, divide it for the size of my bike and clearances, watch out for that car wobbling in its lane, keep an eye on that semi crowding the line, and so on for mile after mile of jangling alerts.

If it’s raining that day, and I’ve chosen to drive the car rather than wrestle with my rain gear and deal with the stresses of wet tires on grooved pavement, the commute adds the dimension of sitting in stalled traffic, where I’m safe, dry, warm, and have the radio or a CD to listen to, but also trapped, staring at the bumper of the car ahead of me, counting the minutes as the flow creeps forward, brake lights winking, making excuses in my head for the meeting I’m going to miss on the work-bound commute, or the apologies I’ll have to make on the home-bound route.

Twenty times a day, a hundred times a week, for year after year, my brain has taken that shot of psychic energy and adrenaline.

It wears you down.

Now that I’m retired and working on the sort of writing I used to do at four o’clock in the morning, I have to push the GO! button a lot less often. I might have a doctor or dentist appointment to go to during the day, or a lunch with friends for which I don’t want to be late. Some Saturdays I might have a war game scheduled (see War by Other Means …), and the house of the gamer who’s hosting the event might be as far away as my old commute, but the traffic on Saturdays is usually light and the motorcycle ride is fun rather than nerve-racking. I still get emails every day, but they are usually chats from friends or commercial messages that I can safely ignore. I still get the occasional unsolicited phone call or voice message, but they are easily screened.

Curiously, one of the stressors I once experienced at work, plunging into the details of the next article or speech that would have to be completed on deadline and then sent for review with both the subject matter expert and other approvers—which usually entailed its own set of stressors returning through the voicemail or email stream—does not carry over into my fiction writing. Although I try to maintain a schedule with my writing, working to an outline, in order to bring out a new novel every year or so, the pace is at my discretion. And, unlike an article where the objective, the points to cover, and the details not to be missed are directed by somebody else, my own writing is under my control and that of my subconscious mind (see Working With the Subconscious from September 20, 2012). When I sit down to write fiction, it is because the story has been percolating through my brain, the pieces have started coming together, I’ve just thought of the opening line of dialogue, or incident, or sensory image to start the scene—and the keyboard draws me to it like an old friend. I don’t have to push any internal buttons because my mind is already flowing in that direction, eager to get these new ideas down in specific words, images, and plot structures, creating an experience that will feel real and concrete in the reader’s mind, where before there was only a blank page.

After all those years in school and then in the working world, I can wake up when the birds start singing and the dawn light shows in my bedroom window. I move through my morning routine out of unforced habit, taking a few extra minutes here and there if I want. I do my karate exercises (see Isshinryu Karate) before breakfast because the workout makes me feel better and lighter during the rest of the day. I read the newspaper with as much attention as I want while I eat, because I’m interested and not because it’s an assignment. Then I turn on the computer, pour my coffee, and see if my subconscious has sent me more of the novel to salt away as finished scenes and chapters. And if not, I can go sit in my chair and read a book. Or I can get on my motorcycle in the middle of the day and ride out across the countryside, picking my own route, enjoying the sun and wind, and not minding a schedule.

This is good because, after all those years of pushing, pushing, pushing, my GO! button is broken. My life is in my own hands at last.

Sunday, August 28, 2016

Fractured Reality

During the trip back East for my fiftieth high-school reunion this summer, I stopped in Cleveland to visit with one of my cousins. As part of our sightseeing in that lovely and too-often maligned city, we visited Lake View Cemetery, which features—along with the family plot of some early settlers my cousin was researching for a project—the Wade Memorial Chapel, with its Tiffany-designed interior, and the Garfield Mausoleum. At the latter, I learned about our twentieth president, a native son of Ohio, a scholar who raised himself out of poverty through education and strong values, and a beloved politician who was shot by a delusional office seeker just three months after his inauguration. The story of that assassination—along with much about the medicine, technology, and politics of the time—is told in Candice Millard’s Destiny of the Republic.

This current election year is a time of unusual and unbelievable surprises. One party has nominated a man with no political background whose id seems to be directly connected to his mouth. The other nominated a woman with a political history so shadowed that people casually dismiss emails suggesting she connived at rigging the primary election and sold face time at the State Department for donations to her family’s charitable foundation. Nobody much likes either of them, but adherents cling to their party’s candidate, either out of loyalty to one side or hatred and fear of the other. In the throes of this hot mess, I found the Millard book a refreshing and surprising return to an earlier, perhaps more innocent time. It was an age very different from our own.

First, the scars of the Civil War, which had ended just sixteen years earlier, had gone very deep. Half the country, laboring under an onerous Reconstruction that rivaled the reparations to which the Allied Powers had subjected Germany after World War I, had reason to distrust and fear the other half. James A. Garfield was not anyone’s first choice for president on the Republican ticket. In fact, his mission at the convention in Chicago had been to nominate a fellow Ohioan, John Sherman, former secretary of the treasury and brother of General William T. Sherman. But Garfield—who had spent his early years as a teacher, then fought as a general himself in the war, and finally served as a quietly respected U.S. congressman and senator-elect—was such a polished speaker that people began to think of him as a candidate. With a rich field of other candidates, however—chief among them former president Ulysses S. Grant, backed by Roscoe Conkling, the most powerful and corrupt patronage wielder in the party—the balloting went on for days. When Garfield was finally nominated, the convention balanced the ticket by giving the vice presidency to Conkling’s lackluster lieutenant, Chester A. Arthur, and the country rallied to its new candidate. Garfield’s message—when he chose to give it, which was seldom, because he did not think it proper to campaign actively—was full of reconciliation, good sense, and fair dealing for all citizens, including former Confederates and the recently emancipated black population.

And then, when Garfield was shot by Charles Guiteau, a man suffering from psychotic, disorganized, and delusional thinking who today would be diagnosed as schizophrenic, the nation reacted with grief and horror. When this disappointed office seeker let it be known that he supported Chester Arthur and his corrupt political backer, Roscoe Conkling, the pair had to go into seclusion to avoid being lynched. Even in a country as divided and confused as America was in the wake of bitter internal war and a failed peace effort, people could come together in a common feeling of disdain and outrage.

The bullet wound in Garfield’s back, which had avoided the spine and major organs, was not immediately fatal. In fact, many soldiers during the war had taken similar wounds and lived on, carrying the bullets or shrapnel inside their bodies for years. However, President Garfield received the best medical attention of the time, which included his physicians’ determination to find and remove the bullet. Since American doctors did not yet subscribe to the germ theory of disease and the antiseptic practices of British surgeon Joseph Lister, the president’s attendants probed his wound repeatedly with metal and ceramic rods and even with their own fingers, none of which they washed first, let alone sterilized. The infection returned again and again as Garfield lingered, flat on his back in bed, for almost three months. In an interesting sidelight, telephone inventor Alexander Graham Bell tried to develop and use a device, originally created for neutralizing electrical interference on phone lines, as a means of locating the bullet, but this early form of magnetic scanning was not successful.1

As the president lay dying in the White House, then recovering, then dying again, the doctors gave out bulletins several times a day describing his progress. These were transmitted to major cities and towns and posted outside telegraph and newspaper offices, where citizens gathered by the thousands to follow the news of Garfield’s hoped-for recovery. Those who lived near Washington camped out in the park across from the White House awaiting developments.

Why do I find this story so interesting? Because it makes such a marked contrast with our own turbulent times.

In 1880 this was a country, less than twenty years removed from political secession and all-out war, that could come together both politically and emotionally. First, it nominated and elected a relatively unknown senator to become president, responding to his good nature, positive qualities, and inspiring personal story. Then, when he was struck down, it united with even stronger feelings of outrage and grief.

Today, we have endured in our recent history no break or disjunction so great as the Civil War. In fact, we have much to celebrate and many forces that should unify us: the end of the Cold War within most people’s living memory; unprecedented growth in scientific discoveries and inventions, and the spread of their application into people’s daily lives; and a robust economy that—despite a series of booms and busts in the past fifty years—continues to grow and expand, providing a standard of living for the average American that’s still the envy of the world. Given these good times, with no obvious, existential threats to our way of life and national security,2 one would think we might look for the gentlest, wisest, most inspiring people to lead us. Instead, we fight and tear down, and claw at the eyes of those who would try to lead—until only the toughest, most armored, eyeless creatures will choose to compete … not for the dignity of leadership, but for the opportunities of pure power.

In 1880 this was a country with a relatively primitive communications system. True, the railroads and the telegraph had been operating and opening the countryside for several decades, and the telephone had just been invented and was rapidly spreading in and between the urban centers. But people still got their account of events and absorbed their political opinions from their local daily newspapers, which had seen nothing of the amalgamation into news empires that would take place over the coming decades and into the twentieth century. And yet the story of the president’s condition could disseminate rapidly and relatively uniformly from the doctors at his bedside to the posters and placards outside the telegraph office in every community. What’s surprising is that, while people might speculate about Guiteau’s motives and connections, and while Garfield’s advancing and retreating waves of infection whipsawed their hopes and fears, the stream of information seemed relatively unburdened by wild fantasies.

Today, with so much consolidation of our news feeds—but also with the dispersion of public opinion through alternative news sources and directly through social media—unfettered rumors and swirling conspiracy theories would tend to overlay such an event. A president who lingered for months at the point of death would, like the cat in Schrödinger’s box, be both alive and dead, as well as sighted simultaneously golfing on Martha’s Vineyard and whooping it up at the Bunny Ranch in Nevada. The president’s assassin would be identified with three different personalities and six different co-conspirators, all discussed as verifiable truth in the media, and he would attract flocks of supporters who would idolize him as a saint and detractors who would see in him the Antichrist. And every one of 320 million Americans, along with several billion more active viewers and listeners around the world, would have their own understanding of and opinions about the facts of the case. Everyone has a camera, an imagination, expertise with Photoshop, and access to the worldwide web. Everyone makes up their own story.

Back in 1880, when people had to ride the rails for days to go from one side of the country to the other, and had to sit down and read page-long stories in the print media to remain informed, the country seems to have been a lot more unified. Today, with the ability to fly from breakfast in New York to lunch in San Francisco—well, a late lunch, considering the travel time plus time zone differences—and with information freely flowing in spoken words, images, and seven varieties of text through the airwaves and different networks, the country is fragmented and our reality fractured.

As a science fiction writer, I try to understand all this. Anyone looking forward from 1880, and surmising about—or even actually being told about—the advances in media technology that were coming in the twenty-first century, would predict greater public cohesion, more uniformity of thought, and greater access to and reliance on provable facts.3 Instead, we have just the opposite. Not a thousand points of light, but a thousand points of different and irreconcilable worldview.

This tells me that, with the way our technology is advancing exponentially, and despite the best analytical imagination I can supply, life and the nature of our political, economic, moral, and spiritual reality at the dawn of the twenty-second century will be essentially unknowable.

Damn! And I thought I was just getting good at this prediction thing.

1. With modern medicine, of course, the bullet would have been found by x-ray and removed in a sterile surgical procedure that first afternoon. For comparison, consider how quickly President Reagan recovered from a similar attack.

2. Yes, we have troubles: weak allies in Europe who are floundering under a broken political and economic system; a resurgent Russia flexing its muscles to the east of them and looking to resurrect the old Soviet hegemony; a resurgent China seeking to expand into its ancient cultural hegemony over all of Asia; economic stagnation in Central and South America driving waves of job-seeking immigrants across our southern border; and political chaos in Africa and the Middle East driving religious and political refugees into the modern, developed states of Europe and perhaps eventually into this country. But all this is business as usual, because the world has been in flux and turmoil ever since the rise of nation-states four thousand years ago, except for those brief periods when Rome ruled the Mediterranean or Britain governed a patchwork of colonies across the globe. None of the current turmoil spells an imminent threat to our continued existence as a nation or a society—not unless we let it.

3. Oh, for the days when a photograph was accepted as visual proof of a single, solid reality.

Sunday, August 21, 2016

Utopia and Dystopia

I grew up with the dystopian1 novels 1984 by George Orwell and Brave New World by Aldous Huxley, and in college I studied the utopian works The Republic of Plato2 and Looking Backward by Edward Bellamy, and the dystopian We by Yevgeny Zamyatin, among others. More recently, I’ve read Suzanne Collins’s dystopian Hunger Games trilogy and seen its movie adaptations, and I’ve watched the movies of Veronica Roth’s dystopian Divergent series, although I have not yet read the books.

I understand why an author would write either a utopia or a dystopia. The former is to test out ideas about how the human condition and human society might be made better if not perfect. The latter demonstrates how, in the presence of some human failing or absent some leavening force, society might become very much worse. All of this is the proper sphere of science fiction: to envision an alternate future for humanity and the mechanisms that might drive us toward its attainment. I understand the purpose of writing a utopia or dystopia, but I’ve never tried to write one myself.

One might also see in much of our current politics, especially among American Progressives and European Socialists, a real-life attempt to erect utopia here on Earth, marked by universal and equal access to education, health care, job opportunities, plentiful goods and services, and proportionate personal wealth. In similar fashion, the goal of the early Marxists was to create a workers’ paradise of individual labor freed from the strictures of market forces, return on capital, authoritarian bosses, religious coercion, and government interference. This was also the theme of the John Lennon song “Imagine”: no religion, no countries, no possessions, “… nothing to kill or die for … a brotherhood of man.” And it was the drive of every nineteenth-century utopian commune in America that was run along socialist lines to create a separate space of equality, friendship, love, and shared physical labor.

I can understand the motive for writing such stories and dreaming such dreams. But, for me, the books and the political schemes they represent simply do not work. I don’t think I could ever write a thoroughgoing utopia or dystopia.

These books and political programs are all based on distorting human nature. In utopias, the distortion is toward a good and positive spirit of sacrifice and selflessness, which is indeed found in pure form in some human beings, but is by no means the dominant characteristic of our species. In dystopias, the distortion is usually toward a spirit of gullibility and passivity on the part of society as a whole, and callousness, manipulation, and greed among its leaders—which again marks some humans to a high degree but is not the nature of all or even most human beings. The result is that most of these stories present societies that revolve around only one political premise, one positive or negative cultural value. And most of the characters in them are mere caricatures or cartoons, reacting to that premise, sustained or squashed by that value, and not real people with complex emotions and motivations. Paintings done in two dimensions with a limited palette are boring. Music played on a simple scale without sharps or flats, or the rudiments of harmonics, is boring.

Probably the most true-to-life—and therefore most frightening—of the dystopias is 1984. Its depiction of an all-powerful state with an ever-watchful leader feels real when compared with Nazi Germany, Soviet Russia, or Communist North Korea. The protagonist Winston Smith and his paramour Julia are small, powerless, feeble in their attempted resistance, and ultimately left naked in their defeat. The mass of society is obedient to a fault, innocently accepting when people and events are written out of the history books and sent down the “memory hole.” They try eagerly to comply with the state-sponsored changes in language which make productive thought harder and harder. There is an apparently active underground, but it turns out to be only a state-sponsored fiction created to cement state control.

You read the book and despair—until you remember that even the most draconian states, like the Nazis and Soviets, had their internal resistance groups which were not state approved. Real people have a conscience and a working memory. Yes, you can imitate public allegiance through the regulation of civic action, like forced attendance at parades and regimented salutes, and the manipulation of popular speech, like substituting “Heil Hitler” for “Good morning.” Yes, you can turn some people into frightened sheep and others into social-climbing wolves. But in the privacy of the home, in the intimacy of the family, the leavening of human nature will react with doubt and scorn. The Soviets, for all their propaganda machine, oiled with the likes of the dreary newspaper Pravda and the even drearier humor magazine Krokodil, reaped a harvest of popular samizdat, or underground publications and Western music laboriously copied out or rerecorded manually and passed along from hand to hand.

Utopias and dystopias both seem to founder on questions of scale and issues of absolutism. In Ray Bradbury’s Fahrenheit 451, the prohibition on books and reading matter was so complete that, at least in the movie adaptation, people were looking at newspapers printed as comic-book imagery with empty word balloons, and they seemed to enjoy television plays with empty drama about who gets to sleep in the Blue Room. Real people don’t put up with such nonsense.3 Presumably, then, statutes and public announcements, and technical documents such as instruction manuals and operating procedures, would all be transmitted through verbal recitations and video demonstrations rather than written text.4

The matter of absolutes comes home in the idea of the perfect society having “no possessions.” I can understand how that would mean one person not owning, as his or her sole fiefdom, a chain of factories with monopoly power over some national good or service, employing hundreds of thousands of people who respond to the owner’s instructions as if they were government edicts, and paying him or her billions of dollars in profits. I can also imagine this deprecation of the human impulse toward acquisition as applying to the farm that a family works to both provide a private food source for themselves and generate a cash crop, or the corner store that a family runs to sell food and sundries to the local neighborhood while generating work for themselves and a personal income. “No possessions” might also apply to the house a person lives in or the car he or she drives, as communal housing and public transportation can be thought more equitable and efficient. But do I get to own the pants that I wear today, wash myself tonight, and then put on again tomorrow? How about my shoes or my jacket? Or the toy my child plays with and loves to distraction? Where does one draw the line when decreeing “no possessions”?

In 1984, the telescreen in every room not only played out endless exhortations and propaganda but also watched as the occupant went about his or her private business and eavesdropped on every conversation. The slightest deviation from party discipline could presumably be detected and punished. This sort of continuous surveillance was science fiction when George Orwell wrote the novel. However, as many people discovered when the Patriot Act was approved in 2001, powerful computers and the transition of our telephone system from analog copper wires to networks switching around digital packets meant that the National Security Agency and other government bodies could sample every conversation flowing through the system, pick out key words and subversive ideas, and build a case against any citizen. This was the absolute control of the 1984 telescreen brought terrifyingly to life.

Except … it’s not. Even if a network of supercomputers could analyze, weigh, and flag subversive speech among the billions of words that 320 million Americans—and about as many more foreign nationals—speak into the telephone system every day, the government prosecutors with their limited—though still impressive—resources would go crazy trying to track down and take action on every lead. Even to become a “person of interest,” a potential assassin or saboteur needs to do more than speak a few predesignated words into a telephone. And, as we’ve seen from the news analysis of recent terrorist acts, even people of interest with proof against them tend to be vetted and dismissed into a sea of suspicious but not indictable characters. Supercomputers may do the flagging, but human beings with their limited imaginations, faulty attention spans, and imperfect understanding of every situation will still do the follow-up interviews.5

The world is neither wholly good nor bad; instead, it blends both characteristics in equal measure. Human beings are neither wholly self-sacrificing and subservient, nor selfish and grasping; instead, they are a mixture of both, in different measures at different times. Emphasizing one aspect of the world, society, or human nature might make a strong point in an interesting study, but it does not make for a good story. Real people make for good stories because they are not caricatures, not entirely predictable, and they follow a story arc that the reader or viewer can only bet on but never know for certain in advance.

1. Thomas More coined the word utopia from Greek roots meaning “no place” for the title of his 1516 book.

2. Although The Republic is commonly described as a utopia, I wouldn’t want to live there. Like Thomas More’s description of an ideal island full of selfless people farming the land and being rationally distributed, and when necessary redistributed, around the countryside in groups to maintain an unnatural state of balance, Plato’s ideal city-state treats its people more like puppets than citizens: deprived of family life and private property, subjected to an educational system strong on physical conditioning and mathematics, forbidden to read poetry and fiction, and drenched in public-spirited martial music. In either situation, I would be planning my escape … and I suppose that’s the point.

3. Even American television programs—or at least those that survive past the cancellation point of their first season—have some content or features to which a conscious, self-aware, adult human being can relate.

4. When I worked as a documentation specialist in the pharmaceutical industry, the question arose about using photos and illustrations in our operating procedures—and presumably this would extend to reliance on training videos. The U.S. Food and Drug Administration regulations require written procedures, which can be cited and specifically enforced, rather than imagery and demonstrations, which are subject to the viewer’s attention span and interpretation. The force of law will still be codified in words.

5. But with all of this, I cannot account for North Korea. There the population lives in primitive darkness—see the nighttime satellite photos of a blacked-out country—under gulag conditions, and on the edge of starvation. The Kim family and their military supporters seem to have achieved total, interpersonal, locked-down control of the country in the sixty-odd years since the Korean War ended. Perhaps you cannot breed humans into self-sacrificing sheep, but in three generations of unrelenting surveillance and punishment you can shape them into mice that hide from the daylight.

Sunday, August 14, 2016

Rational Thoughts on Suicide

Suicide—the taking of one’s own life or allowing oneself to die with or without a fight—is not always or by itself an irrational act. As a novelist, I can think of many situations where a calm and rational person might be willing to face certain death in order that others may live. This is the wounded soldier, found in many stories, who stays behind to hold off the approaching enemy while the rest of the company escapes. Or the Sydney Carton1 who offers himself in the place of another, better man.

But rational suicide might not always involve self-sacrifice. A person faced with an inevitable and painful death, such as burning alive or succumbing to a ravaging disease, might choose to accept a quicker, less painful way out of life. This is not an irrational act, although it might be a desperate and despairing one.

Our species could not be fully self-aware, or even fully human, if we could not rationally contemplate our own personal destruction, the end of a time that we must know is finite. Indeed, I have always favored the definition Robert A. Heinlein gives for an adult: someone who knows he is going to die. Once a person has come to terms with the inevitable, he or she knows what is possible, understands the value of his or her own life, and can decide how best to spend it. That is, how to weigh the potential achievements of the remaining years against the goal that is on offer now. Someone who does not know that those years are already numbered, no matter how many they may be, and that death is inevitable, whenever it comes—such a person remains a child with the fancies and illusions of a child.

As adults, we want our lives to mean something: to serve some purpose greater than ourselves. Even if that purpose is one we have chosen for ourselves and serves some internal ideal—such as painting a beautiful picture or writing a thoughtful novel, something only we ourselves can judge and appreciate—it is still a greater purpose than satisfying our personal wants or gratifying our senses. In the same way, we want our deaths, the last act of our lives, to mean something as well, to serve a purpose greater than demonstrating our own foolish choices and carelessness.

As human beings, we strive for purpose in a world, and in an ecological niche, that does not automatically provide sense and meaning to our lives. Yes, we have the commandment, written into our genes as in our Bible,2 to go forth, be fruitful, and multiply. But this is not an individual mandate. Being one link in a chain that stretches backward to the first one-celled microbes and forward to whatever comes next in the evolution of life on this planet is simply a biological necessity. And indeed, to fail to reproduce is an act of cellular suicide all in itself. But merely having children—for most people, males especially—does not satisfy the rational part of the brain that celebrates the individual, the ego, the “I” that is not merely a collection of cells but an autonomous, free-willed being.

Nothing in life can supply the ego’s purpose from the outside. Well, except perhaps for a parent or kindly grandparent who bends the imagination of a young child toward a certain pursuit, amenable to the child’s talents, experience, and capabilities. Such a lucky child may grow up with an ingrained sense of purpose that he or she might think came out of the air, naturally, as a directive from some higher power.

But for the rest of us, we flounder. We must decide for ourselves what our destiny and our fate will be. And many of us never rise to the awareness that such a choice exists at all, that we must put thought and energy into deciding what path our lives will take and what kind of person we will become. For those who never recognize the choice and its importance, life is a matter of drifting on the currents, like a not very interesting character in a not very well written novel. For such people, suicide might come easily.

The wish to continue in life and fulfill that purpose is also a matter of projection, expectations, and the weighing of chances. For those of us who make the arts our personal goal and the focus of our extra-biological attentions—that is, aside from the daily routine of eating, sleeping, bathing, dressing, and other self-maintenance activities—the realization that our own talent may not meet expectations, that a future of study and practice won’t improve our odds of success, and that we will end in obscurity can be a crushing blow. “Ego death,” as one of my wargaming friends describes a total, ignominious defeat.

Yes, we are assured that the effort is the goal, that simply doing the work is its own reward, and that fame and fortune come to but a few. If an artist or a writer can be satisfied with his or her own work, no matter what the critics and the buying public think, then these palliatives will satisfy the demands of ego and purpose. But what happens when the creator looks at the work, the total oeuvre, and sees only trash?3 Then he or she has failed not only the expectations of the public, friends, and family, but also his or her own. And then nothing is left. Ego death for real.

Given this potential for critical self-doubt, perhaps it is better to make the personal goal simply one of offering service to others, in the manner of Mother Teresa. We can make personal meaning out of helping wherever there is a need and we can supply a willing pair of hands or a problem-solving intellect. In these cases, the overall quality of the work and the personal responsibility for the outcome are less important than the will and vigor with which the effort is made. The outcome lies in other hands, the responsibility with the fates or the gods.

And, as to ego death, people go through calamities all the time. Into each life comes the loss of a loved one, alienation from family and friends, disappearance of fortune or reputation, devastation by storm or fire with the loss of a home or property into which the person has put so much of his or her time and effort. The things we value turn to ashes and dust. Our hope for the future, of living out our lives in a time bubble where these perishable things remain forever unchanging, is dashed. And yet into the void created by such losses there sometimes seems to creep—at least for those of us who are lucky in attitude, or have learned from early training, or persist by some cellular vitality—the restless turning to other loves, other goals, other vessels for our sense of self, security, attachment to life, and hope for the future.

The lucky people can bend with misfortune, shift gears, find new roads, and move forward. In fact, they may never look far enough down the road on which they are traveling to ask what happens when it ends. They know that all roads eventually end, but that most roads also branch out, that goals are malleable, and that people—every person, regardless of past history—are capable of remaking themselves into something new. There is always something new that a human being can try. All it takes is bravery and patience.

Life is persistence. And it can be a long time until the candle finally burns out. That is also something every adult knows.4

1. From Charles Dickens’s A Tale of Two Cities.

2. Genesis 1:28 in the King James Version.

3. Public radio personality Ira Glass has an especially apt thought in this regard.

4. After reading all this, Odin asked, “Does he have any idea what’s coming?” And the Three Norns replied in unison, “Nope.”

Sunday, August 7, 2016

Sparkly Shoes

Recently in our condominium garage we came across our neighbors from down the hall, who have a little girl about three years old. She was stomping across the pavement, and with every step her tennis shoes gave off red, blue, and green sparkles. Clearly, she was delighted with the effect, and so were her parents. And that made me think …

When I was growing up, batteries were bulky things—mostly C and D cells—that tended toward fragility and leaked various corrosive liquids. Tiny, powerful, long-lived batteries based on rare minerals like lithium were decades away from commercial use. Back then, too, strain gauges were exotic devices in the hands of NASA and possibly the military. And light-emitting diodes (LEDs) were either unknown or still in deep development in the laboratory. If someone had told me that in my lifetime an entrepreneur would put them all together to make sparkly shoes for toddlers … No, that anyone would think of putting these exotic and expensive devices into shoes for which there is no naturally perceived need, and that parents would buy them just to put a smile on a child’s face—well, I would have marveled at the thought.1

In another amazingly silly use of high technology, we now have millions of people all over this country using their smartphones—which have embedded applications such as timekeeping, photo imaging, global satellite positioning, and software programming—to track down and “capture” mythical Japanese pocket monsters, or “pokémons,” so they can win non-monetary credits or kudos or some kind of recognition, even if it’s only their own self-satisfaction.

Please understand that I’m not against sparkly shoes and pokémons. In fact, as a convinced free-market capitalist, I find this frivolous use of advanced technology absolutely wonderful. We live in a world where whimsy and fun still matter. And smart entrepreneurs can still make a buck inventing clever ways to amuse other people. You might call that buck-making a cynical manipulation of people’s emotions. I call it, in the words of Henry J. Kaiser, “finding an [as yet unspoken] need and filling it.”

A socialist or communist society would never come up with these things. In such societies, the Ministry of Shoes would be dedicated to making sober, sensible, box-toed Oxfords for all the serious, pre-grownup children. And when every last child had at least one pair of regulation shoes—as if the children of America are not actually swimming in shoes—the ministry would turn its attention to other worthy causes, like preserving cattle hides, preventing deforestation, or engaging in Muslim outreach. The Ministry of Shoes would never think to develop, manufacture, and offer sparkly shoes as a secondary and delightful addition to a toddler’s wardrobe. And the Ministry of Communications would never think to put a camera, programming, or GPS function into a telephone in the first place. After all, the sober, sensible bureaucrats in charge of new product development would never let frivolity and fun enter the fixed-market equation while there were still hunger, want, and homelessness somewhere in the world.2

The amazing thing about the rise of sparkly shoes in the marketplace is that the national supply of batteries, strain gauges, and LEDs has not been in any way depleted. Neither has the playing of Pokémon Go cut into the availability of telecommunication or satellite positioning services for the rest of the country. Sure, there are children in Ethiopia and South Sudan who are deprived of their fair share of sparkly shoes—as I am sure the military establishments in those places are also suffering a dearth of batteries, strain gauges, and LEDs. But their lack was not caused by putting sparkly shoes on the feet of American toddlers. And stripping the sparkles from American sneakers would do nothing to put more shoes on the feet or food in the mouths of African children, nor would it improve their local economy or raise their educational prospects.

As I’ve noted elsewhere, the economy is not a pie. Slicing economic rewards thinner for me does not create more wealth for you, or vice versa. Rather, the economy is like a rain forest ecology: the more life exists under its canopy—capturing the energy of sunlight and preserving it as fruits, seeds, sap, edible leaves, insects, birds, beasts, and compostable mulch3—the more niches for life there can be. The more people who are out there in the economy creating sparkly shoes and pokémon games, the more incentive there will be to demand, and more wealth to fund, the next wave of miniaturization in batteries, strain gauges, LEDs, megapixel cameras, computer controls, GPS satellites, and a host of related technologies.

This has been the story of our amazing escalation in technology since the invention of the steam engine as a coal-mine dewatering machine in the early 1700s. Someone thinks of a new application—put the engine in a boat with a paddle wheel, put it in a cart on steel rails—and soon the technology is growing and changing, becoming more ubiquitous. And, with the human capacity for learning, retaining, and sharing experiences and discoveries, the technologies usually become smaller, better, more efficient, and less expensive. If you doubt this, think back to the first cell phones in the 1970s and ’80s: usually mounted in cars, because of their bulk and power requirements, then more portable but still the size of a brick, with a Western Electric–style handset on a cord. A rich man’s toy. Now you can buy a mobile phone for the cost of a good lunch, and in some countries it’s easier to get cellular service than a landline.

Wars have sometimes helped with the development of some of this technology. Certainly, World War I saw an improvement in the mechanization and automation of the battlefield, with benefits drifting over to civilian technology in the form of more robust automobiles and airplanes. World War II saw vast improvements in radio technology, radar, codes and code breaking, the first computing technology—generally associated with code breaking, artillery firing solutions, and development of the atom bomb—and large-scale production and use of aluminum in aircraft manufacturing. These advances then provided a boost to everyday civilian life in the decades that followed.

But television also came along between the wars, served no real military purpose, and advanced just as rapidly in purely civilian usage. And the discovery and development of the transistor—progressing from individual devices that emulated old-style vacuum tubes to integrated circuits that put a huge number of gated operations onto something the size of a postage stamp—were first civilian achievements. Sure, military technology benefited from using integrated circuits, but so did whole civilian industries of electronics applications for entertainment, automotive controls, and mobile computing. Now we are entering the biotech age, and that owes most of its advances to the sequencing of the human genome—a purely civilian project—and almost nothing to work on bioweapons.

Barring the civilizational devastation of a global economic crash, nuclear war, or asteroid strike, this advancement in technology will continue for as far as the eye can see. Some advances are predictable, and as a science fiction writer I try mightily to get ahead of them: like more convenient and personalized communications, new clothing options, transportation modes, and medical procedures, all based on computerized automation, artificial intelligence, and the linkage of systems and technologies that once operated in isolation. Some advances I defy anyone to predict or even imagine: like sparkly shoes and Pokémon Go.

The world of the next twenty years, hundred years … thousand years is going to be unrecognizable to our most modern eyes. I can hardly wait!

1. And then I would have asked, when do I get my Jetsons-style jetpack? Oh, yes, that’s almost here—and here, too.

2. Of course, in a socialist or communist society, where ever-declining government tax revenues must chase ever-increasing economic and social problems—“eventually running out of other people’s money,” in the words of Margaret Thatcher—there would never be any money to spare for frivolity and fun.

3. In this view of economics, the energy from sunlight captured with hydrocarbon compounds in the rain forest is analogous to the energy of human work and imagination captured in goods, services, and the money to pay for them in the marketplace.