Sunday, May 26, 2013

A New God for the Scientific Age

Someone posited recently in a blog, article, or Facebook posting—I forget which—that we need a new god for the scientific age. Presumably this god will replace the Biblical Yahweh or Allah, the sky father, god of shepherds, creator of the world in six days, who sent a variety of prophets to chastise the Hebrew kings and later populations of reluctant believers, including one prophet who was revealed as this god’s son born of woman, and another who was hailed as the last prophet with the final word.

All right, so a story that’s been going around and collecting intellectual barnacles for about three thousand years needs an updating. But what would a totally new god, a god not steeped in parchment, candle wax, and incense, a god not tainted by human mythologizing and anthropomorphizing, actually look like? What would such a god be and do?

To my way of thinking, to truly reflect the scientific age, this god would have to answer for the parts of the story we cannot see from observation and account for by logical analysis. This would have to be the god who stands outside, or exists behind, or comes before the periodic table, the big bang, space-time, dark matter, dark energy, the expansion of the universe, and quantum mechanics.

This would have to be a god who accounts for two aspects of the world which the ancients understood, at best, poorly: change and randomness. The ancient world was a static world made in its exquisite detail only a short time ago. Mountains had always stood in the same place—with occasional landslides; rivers had always flowed in their beds—with periodic flooding; and horses had always run in the field—same kind of horse, same grass, same field. God made the world right the first time. And it was a world where sudden and unpredictable changes were the work of unseen forces, the whim or anger of a god or gods whose intentions humans needed to study and understand. Unpredictability existed only in the minds of men, for God always knew what He was doing.

Deeper study and more information have shown humans that the world is not static. Mountains rise with the movement of tectonic plates and wear away with the weather. Streams change their courses or disappear as the land wrinkles and smooths. Horses evolved from the “dawn horse,” Eohippus, which had toes instead of hooves and was the size of a dog. If the Biblical God made the world in detail, then He’s been fiddling with it over and over for four billion years, like an artist with a nagging case of self-doubt.

Patient observation and a better approach to mathematics have shown that some events truly are random, governed by probability among a range of potential outcomes. Indeed, order and regularity may only apply to large groups and concurrent events, where the vagaries of chance even out into an average and appreciable pattern. Pretending that the Biblical God stands outside probability and guides it to a predictable—to Him—result is not an answer. A probability whose outcome is known at any level ceases to be probability and becomes causality. If the Biblical God is predicting the decay of every heavy atom or the making and breaking of every covalent bond in the universe, then He’s a busy fellow indeed!
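The way chance evens out in large groups can be sketched numerically. Here is a minimal Python simulation (the function name and its parameters are illustrative, not drawn from any physics library): each atom decays independently with a fixed probability per time step, so no single atom is predictable, yet a large ensemble tracks the deterministic average closely.

```python
import random

def fraction_surviving(n_atoms, p_decay, steps, seed=42):
    """Simulate n_atoms, each with an independent chance p_decay
    of decaying at every time step; return the surviving fraction."""
    rng = random.Random(seed)
    alive = n_atoms
    for _ in range(steps):
        alive -= sum(1 for _ in range(alive) if rng.random() < p_decay)
    return alive / n_atoms

# One atom is all-or-nothing; a large ensemble closely tracks
# the deterministic average (1 - p) ** steps.
expected = (1 - 0.1) ** 10              # about 0.349
observed = fraction_surviving(100_000, 0.1, 10)
```

Run it with a single atom and the result is 0 or 1, pure chance; run it with a hundred thousand and the surviving fraction lands within a fraction of a percent of the average, which is the "appreciable pattern" described above.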

Yet the universe, which we now understand extends beyond this planet to encompass billions of stars in billions of galaxies, still has room for a guiding hand. Curiously enough, it’s to be found in these two areas of creation and probability.

Establishing Fundamental Principles, Making Fundamental Choices

Humans imagine—but cannot quite prove—that the universe is a delicate balance between energy and matter. We push our mathematics and our logical analysis to the breaking point trying to establish the dividing line, the point or the vibrating loop of string, where energy solidifies into objects we can touch and manipulate. To make it all work, according to the rules that our minds lay down, the solution requires more spatial dimensions than we can account for in everyday experience—which is to say something seems to be happening outside the world we know from direct observation.

So, for a first principle, what is the conversion point from energy to matter, and how did this god choose it? And once you have that itty-bitty point that might also be a loop of string, how do you build it up? Why does it take three quarks to make a proton (two Up and one Down) or a neutron (two Down and one Up)? Why do atoms place these heavy particles in a cluster at the center of an atom and surround them with the orbiting, detachable fairy wings of barely there, problematically observable, almost nonexistent lighter particles called electrons? Why choose this configuration rather than, say, globular shells of dense particles enclosing a central nothingness? And why not have molecular bonding by mating positive to negative particles scattered across the face of this globule, like magnets studded on meridians around a globe?
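The quark arithmetic in the paragraph above is easy to check: the up quark carries a charge of +2/3 and the down quark −1/3, in units of the elementary charge, so the two combinations named really do yield the familiar charges. A small sketch using exact fractions:

```python
from fractions import Fraction

# Electric charges of the up and down quarks,
# in units of the elementary charge e.
UP = Fraction(2, 3)
DOWN = Fraction(-1, 3)

proton = 2 * UP + DOWN    # two Up and one Down
neutron = 2 * DOWN + UP   # two Down and one Up
```

Two Up and one Down sum to +1, the proton; two Down and one Up sum to 0, the neutron.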

Once matter was solidified, this god would be the mind that set the proportions of mass to distance and so defined gravity and perhaps set the boundaries of space-time. This is the god who established the relations among the atoms of the periodic table so that some joined in covalent and ionic bonding more readily than others, allowing for complex molecules reacting to stable rule sets, enabling the complexity we call life. This would be the Prime Mover, answering questions before they can even be asked.

The universe might have had very different principles, a different configuration, an energy balance or density that entirely eliminated the possibility of life. The universe might have remained a thin, undifferentiated bath of hydrogen atoms all resonating at a uniform heat: proton-and-electron, buzz-buzz … proton-and-electron, buzz-buzz … proton-and-electron, buzz-buzz, and that’s the extent of creation. But where’s the fun in that?

Harnessing the Power of Probability

As noted above, a god who knew the outcome of every probable event would create a static, predetermined, foretold universe. We humans, with our limited vision and short-range imaginations, might not see the ultimate outcome, but it’s out there, waiting to be experienced, plod, plod, plod, until the final bang or whimper.

On the other hand, a universe of total chance, total indeterminacy, where anything might or might not happen, would be the work of a trickster god, a Loki, the cruel, the deceiver, the untrustworthy, the enemy of humans.1 And yet randomness and indeterminacy seem to govern at the quantum level, driving the joining and breaking of bonds, the distribution of energies, the position of a particle here rather than there, moving in this direction rather than that.

It is at the intermediate levels—between the quantum fluctuation of particles and the organized whirl of galaxies under the guiding hand of gravity—that order and predictability emerge in the universe. This is also, perhaps coincidentally and perhaps not, the human scale of action.

The god I’m envisioning would be the god who harnessed possibility and turned it to advantage to make a universe as varied, yet consistent, as the one we see all around us. Who set the scale of that existence, so that the operating level of that chemical complexity we call life would fall exactly at a midpoint, halfway between the ultra-small plane of atoms and particles and the ultra-big plane of stars and galaxies. Who set the endurance and corruptibility of these fragile, chemical bodies in a timeframe long enough to grow and learn and wonder, but not so long as to understand completely, grow bored, and burn out with the stars. This would be the universal intelligence, who mirrors the understanding of humankind without prefiguring or limiting the evolution of that mammalian and eventually monkey-borne intelligence. This would also be a god who could be surprised and delighted by the effects of randomness on the system he set in motion.

An example of randomness and order can be found in the operation of the DNA molecule. It contains and reproduces the chemical formulas for all the proteins needed to make a human or a horse. They are written in a four-letter code, which is interpreted through a three-letter “reading frame” to call for each of a protein’s amino acid components in sequence. The possible code combinations allow for sixty-four different ways to specify just twenty different amino acids; so the genetic code can suffer a certain amount of mutation before the specification actually changes. And yet the specification can change. And then, depending on how a different amino acid affects the protein at that point, and depending on how the operation of that protein fits into the cell’s current environment, the change might be beneficial, enabling a better fit with the environment, or detrimental, disabling the protein, lowering the efficiency of the cell, and hampering or even killing the creature composed of those cells.
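The combinatorics of the reading frame are straightforward to verify. The sketch below counts the sixty-four possible codons and, using a small fragment of the standard genetic code, shows both a "silent" mutation, which leaves the amino acid unchanged, and one that swaps it:

```python
from itertools import product

# The four RNA bases give a four-letter code; a three-letter
# reading frame allows 4 ** 3 = 64 possible codons.
BASES = "ACGU"
codons = ["".join(c) for c in product(BASES, repeat=3)]

# A fragment of the standard genetic code: all four GG_ codons
# specify the same amino acid, glycine; GAU and GAC specify aspartate.
CODE_FRAGMENT = {
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "GAU": "Asp", "GAC": "Asp",
}

# A silent mutation: the third letter changes, the protein does not.
silent = CODE_FRAGMENT["GGU"] == CODE_FRAGMENT["GGC"]
# A second-letter mutation swaps glycine for aspartate.
changed = CODE_FRAGMENT["GGU"] != CODE_FRAGMENT["GAU"]
```

All four codons beginning GG specify glycine, so a mutation in the third letter is absorbed without changing the specification; change the second letter and glycine becomes aspartate, with whatever consequences that has for the protein.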

The process of evolution—random mutations in DNA that positively or negatively affect individual cells and organisms—can be cruel. For every mutation that helps a creature or a species adapt to a changing environment, there are many more that produce either no effect or some detectable, perhaps even mortal, damage.2 So the god who incorporates randomness into his patterns must justify operating pitilessly on an individual level. The one justification I can think of is that, unless individuals were prepared to suffer these small hurts and occasionally offer the supreme sacrifice, the whole species—and perhaps all life on the planet—would eventually decay and die under the pressure of a steadily changing environment. A species is not a unique or wonderful thing to be preserved for its own sake; it is merely the best fit for a particular role in a particular environment.

But rather than death, fear, and terror, humans usually find inspiration and beauty in change and seemingly random events: in the face of a granite mountain sheared off by a glacier, in the facets of a carbon crystal compressed in the explosion of a kimberlitic shaft, in the shore of a lake defined by an old meteor crater or a fold in the land. Of course, fear and terror do accompany the larger events that seem random and chaotic: the explosion of a volcano, the fall of a meteor, the course of a hurricane. But as these apparently random events fall into that in-between place, where chaos succumbs to order, humans are constantly learning the principles needed to predict and react to them.

Perhaps it is the function of a modern god, a god that science can love, to permit random-seeming events as the culmination of quantum chaos operating on established mechanisms in physics and biology. The world made by such a god can surprise and delight us, fascinate us with its innate complexity, and occasionally terrify us with its power. That’s almost a god I can believe in.

1. Indeed, total chaos would be as boring and inimical to life itself as the total predictability of a uniform universe composed of nothing but hydrogen.

2. For more on the mechanics of evolution, see Evolution and Intelligent Design from February 24, 2013.

Sunday, May 19, 2013

Zen and the Artist

I recently posted on Facebook a thought that occurred, like all my good thoughts, in the shower:1 “Writing novels is a Zen experience. Although you may have acquired the adept’s tools, you must approach each project with the beginner’s mind.”

Clearly, the thought has two parts: “the adept’s tools” and “the beginner’s mind,” but the Zen does not exist in either part. Instead, it exists in the unspoken place: the tension between them.

The Adept’s Toolkit

To become competent at any art—whether it’s writing novels or poems, painting canvases, carving statues, or composing music—you must learn a set of basic principles. Then you expand on what you’ve learned to adopt the principles that work for you, in your particular style and voice, and reject or ignore the principles that chafe and constrain your particular vision.

For the writer, these principles stretch from the simplest elements to complex relationships: word usage and the juxtaposition of denotation (what the word means in a dictionary sense) and connotation (what it will suggest to the reader); sentence structure, paragraph structure, the optimum length of such units and the limits of complexity and compression; the flow of ideas and the flow of action; the optimum mix of telling versus showing, the mechanics of point of view, and the mechanics of character creation and revelation; the flow of dialogue and techniques for revealing information and insight—as well as concealing the core structural surprise—all without being obvious about it; story structure, knowing where to start, knowing how to resolve a story thread, and knowing when the story is complete.

All of these elements and relationships go beyond what they can teach you in school about spelling, grammar, punctuation, capitalization, and all the other mechanics of writing that can be written up as rules and that any writer ignores at his or her peril.2

Other artists have their own principles. For a painter, they involve perspective, proportion, color mixing, treatment of light and shadow, level of detail, and a thousand other things at which, as a poor artist myself, I can only guess. For a musician, they involve the musical key, use of chords, transitions and refrains, and other things which, ditto, I ditto. And for either of them there are also basics not to be ignored, like the location of the viewer’s “eye” and angle of incident illumination for the artist,3 or scale and harmonics for the musician.4

The artist can start with the basics taught in school. But the work will not rise to the level of art until the practitioner learns the principles and decides which to adopt and which to ignore. Until that framework is in place, the writer will produce either pleasantly bland or confusing stories, the painter will make pedestrian, paint-by-numbers canvases, and the composer will make, at best, elevator music. Each will attain his or her requisite “voice,” or “eye,” or “ear” only after he or she knows which rules to break and which to emphasize.

This is why I’ve often said that no one can teach you how to write. A teacher can only expose you to different ideas and help you think about the mechanics and the principles. But you must learn what works. You do that, first, by reading what others have written and deciding whether it works for you as a reader. These are what a painter or musician would call “influences,” and the same applies in literature, too. But you must also, second, actually meet these problems in the act of writing, work through them for yourself, try out different approaches, and then decide what works for you as a writer. Until you’ve done the work yourself, you remain a student and an imitator.5

Some very rare people can produce a polished first novel which displays a seasoned understanding of the principles and an adept’s choice of approaches. I couldn’t do it. My first novel—a manuscript written when I was sixteen years old, worked on in the dim hours before dawn, when I got up an hour early before heading off to school—was a slipshod mess. So was my second novel, written in the same circumstances before heading off to work. So was my first attempt at a third novel—although, after having that attempt thoroughly stamped on by a professional editor, my second try came out better, became published, and actually won an award.6 You can bet that anytime you see a highly praised “first novel,” it’s the author’s third or fourth attempted manuscript and only the first one which he or she was willing to show to an agent or publisher.

For most of us, we have to learn to swim in the calm waters of a pool before we try to swim across a lake—or the ocean. Most of us are learning to write all the time, getting better, smoother, more practiced, more certain, more likely to hit the right tone and story arc with the first draft, rather than putting down some any-old rubbish that must then be whacked and battered into something that kind-of, pretty-much works.

But with the adept’s toolkit and the adept’s confidence come the adept’s assumptions. “I know what I’m doing, because I’ve been here before.” “Of course, the story will follow this path and not that. I’ve been this way before.” “These characters will act this way, talk this way, think this way. I know how to create a character.” … And that’s a trap.

It’s all right to work from a set of established assumptions if your goal is to write the same story over and over. That’s the structure of each James Bond novel that Ian Fleming wrote: same character walking into different circumstances with a different villain but using the same skills, charm, and perseverance to find his way back out. The approach also worked for authors like Agatha Christie and Edgar Rice Burroughs. It can create amazingly popular fiction. Readers who like a particular author’s work can feel pleasurable anticipation in comparing the known to the unknown: How will Bond get out of this one? Which of these clues will Poirot discover? What evil mastermind will kidnap Dejah Thoris, and how will John Carter get her back? This sort of storytelling is a bit like honest carpentry. It’s about planning, measurement, and precision. But it involves very little of the aha! element that makes for great literature.

If your goal is to create something new, something that defies expectations, a work of great imagination and deep insight, then you need to pull something more than carpentry out of the adept’s toolkit.

The Beginner’s Mind

The beginner has a fresh mind. He doesn’t always know the difference between what he knows and what he doesn’t. He is still in discovery mode: seeing, sensing, learning, and adapting. Confidence and the complacency that it can bring are the enemies of the beginner’s mind. To begin again as a beginner, you must stop insisting that you know how to do something. Instead, you must go quiet and let your subconscious perceive, suggest, and affirm what is real about your present situation.7

As a writer, you must perceive your story, not as a contrivance of linked actions to arrive at a preconceived ending, but as the experience of real people living and responding to life’s circumstances. You must forget what you know about plotting and structure, about climax, denouement, and surprise.

You must come to know your characters as real people with background stories, unique expectations, desires, fears, and prejudices. You must forget what you know about types like “protagonist,” “antagonist,” “foil,” “ingénue,” or “the hero’s best friend.”

You must hear the dialogue of your characters as actual speech, rather than a contrivance to reveal information to the reader and to pass it from one character to another within the reader’s understanding. You must forget what you know about language structure and composition.

You must see your settings as real places that your characters inhabit and which your reader might want—or fear—to visit. The setting is not simply a contrivance to propel action and plot and to hand the characters convenient tools, clues, and props along the way.

As a painter, you must see the world and the objects in it as they exist, peculiar to themselves, with their own surprising beauty and flaws, and not as symbols or representations or generalizations. As a musician, you must let the melody go where it must and resolve when it will, and not according to some rhythm and tempo imposed by a classical form.

Later, once the story, the characters, their conversations, and their setting have come alive in your mind, you may want to use the toolkit you know to sharpen and enhance, to smooth out flaws or highlight differences. But this is the final polish—not the initial discovery. If you start with your story as a preconceived plot and your characters as types, you may have the skill to create something real for the reader, but the risk is that you will end up with an easily disbelieved wind-up toy peopled by dolls. Worse, if you don’t believe in your story and characters yourself, you may err by adding details that don’t belong and characteristics and personal facts that contradict the character’s true nature. In this I’ve found that my subconscious is a better guide, perceiver, and evaluator than my own conscious mind can ever be. My subconscious is a repository of what “feels right” about a situation, whether in real life or fiction.

To achieve this state, the writer or other artist must forget for a moment that he or she is creating something that is not real, has never existed, and perhaps cannot exist in this universe. This means forgetting more than the learned principles of the toolkit; it means forgetting the very nature of the art. The mind launches forward into the darkness or the daylight carrying only the image and sense of this perceived story, of these accustomed people, of the places they naturally inhabit, and of the things they are likely to say out of their own awareness, perceptions, motives, or fears. The tension lies in playing two parts simultaneously: both as recipient of the story on one side of the screen, living it as the reader, viewer, or listener does; and as creator and coordinator of the story behind the screen, making it up, drawing it out, and cutting it off.

To be an artist is to put your mind in two places at once, holding two thoughts and driving two realities at once, both creating and experiencing, yin and yang. It should be a form of madness … except that the brain is an incredibly complex mechanism. It has operations in both the focused daylight of the conscious mind and the shrouded darkness of the unconscious. And to pursue an art form—any art, at its deepest level—requires the artist to bring the two together at one time, to be both aware and unaware, to be both the adept and the beginner.

1. I think it has something to do with warm water hitting the back and side of my neck, maybe relaxing some key muscles, possibly increasing blood flow to the brain. Whatever the mechanism, I treasure anything that gives me a moment of insight into what, I presume, is my subconscious—although it may also be another dimension.

2. Unless you have a good reason that will be obvious to the reader. For example, Don Marquis’s beloved character Archy wrote stories and poems all in lower case because, as a cockroach, he couldn’t push down the typewriter’s shift key. Russell Hoban’s post-apocalyptic character Riddley Walker wrote in a fractured form of English that was both dismaying and mesmerizing. If it works, do it.

3. Unless, of course, you are Picasso, painting faces and bodies from all angles at once.

4. Unless you’re doing experimental, modern, atonal music and don’t mind offending the audience’s ears.

5. I remember, early in my writing career, hearing an interview with Marilyn Durham about her first novel The Man Who Loved Cat Dancing, where she mentioned the problem of getting a character to walk through a door. I thought, “How hard can that be?” But it’s actually a complex question. Do you have the character open the door, or simply walk through? Does he turn a knob to open the door? Do you describe the kind of door, its paneling and paint, and the shape and composition of the knob? Is it a brass knob or a crystal knob? … What I’ve learned since then is that you are using the reader’s eye and ear like the camera and microphone on a movie set. You only describe the woodwork of the door or shape of the knob if it serves some purpose to focus on it—such as to suggest the age of the house or the taste of the character observing it. You only describe going through the door if you want to say something about the room beyond and how it’s situated and entered. For most of my purposes, the action starts and ends inside the room or other place and no doors are required, coming or going. Pare away to essentials. Only show what should stick in the reader’s mind.

6. And by that time I had already graduated as a baccalaureate with honors in English literature, worked in two different publishing houses as an editor myself, and was making my living as a professional communicator writing press releases and articles for employee newsletters and magazines. Fiction certainly requires good writing skills, but many kinds of writing and editing have nothing to do with fiction. To write a novel, you don’t even start at ground level: you must climb your way up from the third sub-basement.

7. When I use the word “subconscious,” I’m referring to a special understanding of the human mind. We all have our point of view, our attention, our focus, the thing that knits together our present awareness, our perceived sensory experience, the memories that our thoughts and senses trigger, and our expectations about what will happen next. This is the bright line that knits together past, present, and future in our mind and responds to the notion of “I” and “me.” But the mind is vast and complex.
       While I’m sitting here writing, I attend to the computer screen and the words appearing on it. I also attend to my thoughts and to the learned response inside my head that follows a logical sequence of thoughts and turns them into words. My fingertips follow another learned response to spell those words correctly and push down the appropriate keys. At the same time, however, my ears may be hearing things to which I am not consciously attending: key clicks, traffic noises, background music, the dog snoring. My skin may be feeling pressure from the chair seat and tiny pressure changes from air currents in the room. My nose may detect cooking odors—hmm, garlic?—from the apartment down the hall. Similarly, when I ride the bus, my eyes may see words and images on advertising posters of which I’m barely aware, or overhear fragments of conversation to which I do not consciously listen. But other parts of my brain are recording all these sensations, triggering responses and creating sense memories of which that bright line is not completely, if at all, conscious.
       More of the brain, in its various cortices and networked systems, is functioning than may be under control of that bright line. It is from here, in the unconscious part of brain functioning, that I believe new insights, connected thoughts, and inspiration live and develop in darkness, held passively until they announce themselves to the bright line of consciousness. If I could not tap into that subconscious process, I have no idea how I would write fiction—or even, for that matter, nonfiction.

Sunday, May 12, 2013

Money vs. Ideology

A lot of concern in the political world—among the people you run with, and those you’re running against—involves motivation. What does that guy want? What makes her tick? Where’s he coming from? And the biggest question of all: Can I trust him/her/them?

The common perception is that people on the right are driven by money. They want to acquire it, save it, get a good return for it, and avoid spending it foolishly, giving it away, or paying more than they have to—or, in some views, any of it at all—in taxes. These people—some would say—have no personal beliefs or moral grounding that stands higher than the love of money. They will shape their psyches, bend over backwards, and kill their own mothers in pursuit of the demon gold.1

The common perception is that people on the left are driven by ideology. They value a vision of the future, a personal set of values, beliefs, morals, and intentions. They believe in a kind of moral purity, guided—some would say—by what “feels right.” They believe that the heart is the truest expression of the human soul, and that a rational mind and the pursuit of personal advantage come second. They will follow their beliefs and morals no matter what the cost in money or time or personal inconvenience.2

All of us mix these tendencies to some degree. The man who labors for money and strives for wealth and security may certainly have personal morals. The woman who feels compassion and votes with her heart may certainly be careful with her money. But it’s in the name-calling, and one’s dim view of the other party, that these leanings toward money or ideology come out most strongly. People of the left decry the right for its concerns about wealth, taxes, and federal spending. People of the right decry the left for its ideological purity, political correctness, and magical thinking.

As a man of the middle but with rightward leanings, especially in fiscal matters, I tend to trust the monetary motives on the right. I’m more comfortable with people driven by greed and self-interest than with those driven by ideals and vision. That’s a hard thing to say, I know, and will earn scorn among those for whom vision and compassion are the highest ideals. But I’m going to explain myself anyway.

People who act in their own self-interest and value money—and the personal advantages in terms of time, convenience, indulgence, and security that money can bring3—are fairly predictable. You know what drives them. You can also guess the limits of their ambition. Although they may have no upward limit to their desire for stacks of folding money, their sphere of action is going to be limited to the advantages, tricks, and ploys that benefit them and their family and, more rarely, their neighborhood and class.

The damage that a greedy person can do is both limited and self-limiting. His drive to get and spend may ruin his family relations and his reputation in the community. He may indulge in shady practices, cheating his employees, suppliers, and customers. He may cut corners in production and sell shoddy goods as a result. He may dump wastes and pollute the local stream. But the cheat and the producer of low quality are quickly discovered and repudiated by reputable people. Damage to the stream is detectable and traceable back to the polluter.

In a free society, with open communications and speech protections, a supplementary market will grow up to provide reviews of and recommendations for those who provide products and services. Think of the various institutions, like the Better Business Bureau and Consumers Union, or the magazines and websites, like Car and Driver or Rider, and the consumer service sites, like Angie’s List, that test and recommend products or gather and publish people’s personal experience with service providers.

For those whose customers may be unable to judge the quality of their work—for example, the doctor whose patients must trust in his or her knowledge and skill—or whose customers may be at some distance removed from acceptance of the product—for example, the engineer or construction contractor on a bridge, whose faulty workmanship endangers the drivers who cross it—society erects barriers and protections in the form of licensing, inspection, and regulation. The doctor who mistreats her patients will lose her license. The contractor who skimps on bridge construction will be discovered during inspection, forced to repair the work, and lose his performance bond. Government agencies at the county, state, and federal level exist to monitor air and water, issue operating permits, and punish illegal dumping and polluters. The mere existence of such safeguards is usually enough to keep the average professional, contractor, manufacturer, and supplier honest.

In a free society with an open market, consumers are generally able to choose their products and suppliers, and take responsibility for the quality of service or workmanship they will accept and where they will place their trust.

The damage that an ideological person can do knows no such limits. In the private sphere, as a concerned citizen, opportunities for imposing one’s beliefs, vision, morality, and intentions on other citizens are certainly limited. Think of writing a letter to the editor of the local newspaper or putting a political sign up in your front yard. But in a free society, with open communications and speech protections, like-minded people can come together to form parties and pressure groups. And in less-free societies they can operate outside the law, as vigilante groups and secret societies.

My fear is that the ideologue—who may be as blind to the needs and wants of others as the miser and the moneygrubber—will impose his or her vision, values, beliefs, and intentions on those who do not share them. That tendency has always been present in human society. But in earlier societies, with more limited forms of communication and cohesion, like-minded groups had fewer opportunities to clump together, link up, spread, and build consensus. It’s hard to form national parties with specific agendas in a rural society connected only by newspapers printed on a hand press. Building consensus, selecting priorities, and launching campaigns take time—usually measured in years.

But with the growing communications vehicles brought about by technology in the 19th and 20th centuries—the telegraph and telephone, radio and television, and now the internet and email—our age has seen a flowering of political ideologies. Campaigns in favor of certain programs or against certain practices can launch, gain momentum, and take over the public awareness and imagination in a matter of months—sometimes in weeks or days.

Principles and values that once took a generation or more to develop and build now sweep the populace in a year or less. Think of the campaigns against personal choice in the matter of energy use (fossil fuels, incandescent light bulbs), everyday convenience (recycling, composting, plastic shopping bags), and societal values (abortion, same-sex marriage). Peer pressure is now applied on a national and even international level through public and social media. If you happen not to share the values being promoted, you can be criticized and ostracized on both a local and a national level. If you are not with us, you are against us, a selfish wrecker and destroyer of the world, and the weight of public opinion falls hard and fast.

When people break free of the shackles of self-interest, they can dream big, envisioning a world where everyone must participate in a utopian future or risk destroying the very planet itself. Then things can really get out of hand. Ask the Germans. Ask the Russians. Ask the Poles and the former Eastern Bloc. For myself, I’d rather put up with the machinations of a greedy capitalist any day.

1. Of course, another side to the politics of the right supposedly derives from Christian fundamentalism, and this motivation not only dictates how its followers should act but also wants—some would say—to impose its will on all of society in a state of theological fascism. Politics is a messy business, and any drawing of lines is going to wander across such boundaries.

2. In most cases, time and personal convenience are also an expression of money, which is simply a storable and tradable form of human energy. See The Economy as an Ecology from November 14, 2011.

3. Here I’m going to make a distinction between the lust for money as a form of personal wealth and security and the lust for money as a form of power over others. The man who buys and sells corporations, and seeks money in order to pay for these acquisitions, is not going after money per se. He seeks power as the end, for which money is simply a tool. The same lust drives a politician, who seeks money to buy advertising and sway opinions and so obtain votes, or a military officer, who seeks combat honors and praise from his superiors in order to obtain promotion.

Sunday, May 5, 2013

The Dollar Value of Technical Advances

I’ve written several times before that we’re all on an express escalator to the future—and a future that we may not even be able to imagine.1 Technology has a way of building on itself. Whatever one person may invent for one purpose, another may improve upon, adapt for some other purpose, or reinvent using different techniques and processes. This is how swords become plowshares, as anyone who has studied the history of technology since the Second World War will attest. All it takes is free minds prepared with the right education, an open flow of information, and the natural human desire to make something new appear—usually with a monetary reward attached to the invention.

How can we place a value upon this process? Not upon the single invention, but upon the whole chain of creation and re-creation that it may ignite? Well, we can’t—we can only dream.

When Thomas Edison, in competition with other creative thinkers, worked on the electric light2—finding a filament that would glow with electric current inside a glass bulb—he could pretty easily calculate the riches that awaited a successful product. Simply take the nation’s annual investment in whale oil, natural gas, and candles used for lighting purposes, beat the costs of these various technologies, and rake in the difference.

But Edison would have been thinking only of area lighting—for one or more rooms in the home, for the front porch, for the street beyond, possibly as some kind of beam to light the way ahead of a carriage at night, and for searchlights and lighthouses. At the time, he probably never imagined the thousands of other uses to which a switchable light might be put: advertising signs and marquees, projectors of still and moving images, or signaling the state of circuits in complex technical environments like control panels. Some of those applications—like displays and signage, or the kerosene-burning “magic lantern” projector—already existed in other forms, and Edison or some later inventor might have foreseen the possibilities for development and expansion once a switchable light source was perfected. But other applications—like the flashing signal bulbs on a telephone switchboard or in a power plant control room—could not exist until the sealed and undying electric light bulb made them possible.3

The point is, Edison might have predicted fairly accurately the value of electric bulbs in house and street lighting, but he could only guess—and probably guess low—the value of a switchable source in all the other applications it has enabled over the years. Whole industries have grown up around the light bulb and its offshoots in the cathode ray tube and the light-emitting diode.

Consider, then, the potential value of the internet. It originated as a convenience for scientists working on government projects who wanted to share research findings. Rather than waiting to publish their papers in a journal somewhere, they could put their results out on the Advanced Research Projects Agency Network (ARPANet), which was initially funded by the U.S. Department of Defense. That was back in the 1960s, before the invention of the microprocessor,4 when scientists were the only hands-on users of computers. Linking their basements full of computing power to share a manageable amount of data—under some conditions of access control and secrecy—seemed like a good idea.

Anyone working on the protocols for connection, packet switching, transmission control, and other processes required to wire up an undefined number of separate computers and then trade files among them could clearly see the immediate benefits. This was a faster—virtually instantaneous—way to publish large amounts of information among a large population of users than printing the information in periodicals or bound books and maintaining them on the shelves of one or more libraries. The network was like printing one book and making it appear simultaneously on the shelf of every library in the network.

But did anyone, in those days when computers were data-hoarding dragons that lived in the basement, with core memories the size of walk-in closets and disk drives the size of washing machines, foresee the ultimate impact? Could they imagine that one day, when computers were the size of typewriters—or even the size of a bar of soap—their network would put the information resources of practically the entire planet at the fingertips of virtually every man, woman, and child? And more than a library function, making hard facts and electronic books available worldwide, did anyone guess that their network would ultimately support a planetwide commerce in soft goods like music, movies, novels, and newspapers; tickets to entertainment and sporting events; reservations for restaurants, hotels, airlines, and other modes of travel; transactions between banks and investment advisors and their customers, as well as direct trading in shares and commodities; and the advertising, display, sales, payment, and shipping of every other product and commodity? Did they understand that their network would link every person in a massive exchange of messages, confidences, whispers, and rumors? Did they guess that simply managing the flow of all this information and conversation with search engines like Google and social exchanges like Facebook and Twitter would launch some of the largest, most profitable enterprises on the planet? Or that their inventors and progenitors would, in many cases, be undergraduates still in college and working from their dorm rooms?

Did they understand that this network of computers, which started out by trading secret information on defense projects, had the power to change the face of commerce and bring down or force the transformation of old, established businesses? The internet has changed forever the dynamics and business models of telephone companies, television networks, newspaper, magazine, and book publishers, recording studios, banks, stock brokerages, and travel agencies. But aside from this “creative destruction,” could the ARPANet inventors have understood how much of a boost, how much pure acceleration, their infant network and its extension worldwide would give to any of these functions, how many new business opportunities and new jobs—even new types of jobs, like “web designer,” “database engineer,” and “app developer”—would result from creating so much of this free connectivity?

Anyone trying to guess the value of a thing called “the internet” back in the 1960s would, like Edison trying to assess the value of a long-burning light bulb, have started with the world’s libraries and the books contained in them, plus the annual revenues of the world’s publishers and newspapers, come up with an astounding figure, and stopped right there.5 No one would have foreseen the business opportunities available today—nor a rising cottage industry in writing, directing, and filming three-minute movies for YouTube.

More important, at that time, you could not have sold this vision of the future to the average person. Try describing how, with an easy-to-use search engine like Google, anyone could have a personal librarian at his elbow who provided instant access to thousands of libraries, repositories, databases, companies, and other resources. How, with social media like Facebook and Twitter, he could trade opinions, confidences, jokes, pictures, and home movies on the fly with friends, family, and thousands of barely recognized strangers. This level of access is something the average person would never have thought of wanting or paying for, because it’s something no billionaire, king, or president could have obtained even twenty years ago.

Consider the power built into a smart phone, which is both an outgrowth and enabler of the internet. It is actually a computer in the palm of your hand. It wirelessly connects outward to the resources of the whole world through radio-based voice and data systems. It wirelessly connects with peripheral devices in your immediate vicinity, like earphones and keyboards, through another radio-based system called Bluetooth. It contains a television screen, various kinds of software-embedded keyboard and other inputs through a touch screen, plus speakers, microphone, combined still-image and video camera, and GPS-sensing antenna.

With its built-in software applications it becomes an appointment book, messaging center, voice recorder and general secretary, to-do list keeper, stock ticker, weather channel, internet connection, plus a competent photo and video recording studio, a library of your favorite music, movies, books, and periodicals, a repository of your personal thoughts, photos, family videos, and other informational memorabilia, and a navigator both by established roads and cross country. And it also serves its basic function as a telephone—but with your address book and the Yellow Pages embedded on speed dial.

More than that, with the right selection of software applications—usually available for a few dollars each6—this handheld device becomes a video arcade, card game player, pedometer, exercise trainer and diet coach, heart rate monitor,7 health statistician, and any other function that can be seen, known, and displayed. A dozen years ago, you would have paid hundreds of dollars for each of these applications—if they were even available. More, you would have had to carry a dozen different devices to obtain them as single functions. Now, they all fit in your pocket in one slim device. What is that worth? Nothing if you didn’t ask for it. Everything if you think you need it. And no one, even ten years ago, would have foreseen a rising cottage industry in writing software to fling Angry Birds across a telephone’s touch screen.

What is the dollar value of all this connectivity? What is the dollar value of all this potential creativity and invention? A trillion dollars? Two trillion? Ten? It’s growing all the time. And twenty years ago the nature of this technological advance and its impact on our global society could hardly have been imagined, let alone estimated.

This rush of technology that I’ve described is taking place only in two small areas: the interconnection of computers via the internet, and the interconnection of people with new ideas, transactions, and entertainments via the internet and the personal devices that enable it. Consider now that in academic institutions and research centers across the world, scientists, engineers, and inventors are simultaneously at work in areas like chemistry to improve the materials out of which our products and infrastructure are built, physics to improve our use and storage of energy, life sciences to improve our general health and medicine, and data science to improve our use of all those other sciences.

This vast human collaboration which we variously call “science and technology” or “art and science” is changing the world faster than we can imagine.

An ancient Roman, brought forward to about 1750, would have recognized most of the technology in use at the court of Louis XV of France. He would have marveled at various individual inventions, like the stirrups on a saddle and the leaf springs in a carriage. But about the only real surprise would have been the explosive power of gunpowder—and that bit of chemistry could have been explained to him in about an hour.8 But a French philosopher and scientist brought forward a mere two and a half centuries to 2013 would be hopelessly lost. To catch up and truly understand the underlying technologies of our modern age would take a lifetime of study among new concepts and established sciences. And one of today’s scientists, taken forward to about 2050, a mere thirty-seven years, would have to learn disciplines and techniques for which we don’t even have names yet.

No one can put a dollar value on all this. But, I promise you, the potential wealth and power that will be showered upon the average person of the future will be vast. It’s going to be a wonderful ride!

1. See Coming at You from October 24, 2010; The Language of Technology from July 29, 2012; and In the Palm of Your Hand from October 21, 2012.

2. Edison didn’t invent the idea of turning electricity into substitute daylight. Eighty years before Edison’s first successful bulb, the English scientist Humphry Davy created the electric arc by passing a current between carbon electrodes, and the process is still in use today when you need a really powerful light. Twenty years before Edison, another Englishman, Joseph Wilson Swan, made electric lamps with filaments of carbon paper, which burned up too quickly. The prize that Edison chased was a filament that would not burn out and so would provide light indefinitely.

3. Or maybe Edison was dreaming of these and other applications all along. He certainly thought and invented widely across the spectra of physics and chemistry. Who at the time could say how much of our future was locked away in the imaginations of men like Thomas Edison and Nikola Tesla?

4. Microprocessors—“computers on a chip”—were originally conceived as a way to add distributed computing power to complex systems, or to add digital processing to existing electro-mechanical devices like elevator controls and automobile fuel injection. It was bright boys like Nolan Bushnell, Steve Wozniak, and Steve Jobs—our modern-day Edisons—who conceived of attaching these processing chips to memory chips, a cathode ray tube, and a keyboard to make the “personal computer.” And that led eventually to every person on the planet—who wanted one—having the computing power of an IBM 360 on the desktop.

5. In the late 1960s, while still an English major in college but being interested in science fiction, I began writing a story based on a far future library that controlled all of the world’s knowledge with a vast laser storage and retrieval system. In my imagination, it worked something like a planetarium, which somehow wrote data on the interior surface of a great dome and then, in recall mode, could pick facts and folios off the ceiling—rather like a curved, interactive hologram. Along with everyone else at the time, I was thinking of a single huge repository, a vast installation, like a Fort Knox of the world’s information. Nobody in the paying public yet understood that huge size still had limits, that true power lay in networks of distributed and interconnected nodes, and that by chopping the information up and scattering it around—the earliest glimmer of “cloud computing”—humankind could achieve virtually unlimited storage.

6. Remember when software used to cost a couple of hundred dollars? Some still does: the heavyweight, industrial-grade packages for office productivity, graphic studio work, or video editing. But the cost of a single-purpose application has dropped to next-to-nothing, or it’s free.

7. Through an app that uses the camera’s flash to light up a fingertip you place over the camera lens, and then the imaging chip reads color changes in the skin as your heart beats. How cool is that for making clever use of available resources?
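For the technically curious, that trick reduces to simple signal processing. Here is a toy sketch of my own devising (not any actual app’s code), which assumes we already have a series of average red-channel brightness values sampled from the camera: count how often the pulsing signal crosses its own average on the way up—each crossing is one heartbeat—and convert that count to beats per minute.

```python
import math

def estimate_bpm(brightness, fps):
    """Estimate heart rate from a series of average red-channel
    brightness values sampled from the camera at fps frames/sec.
    Counts upward crossings of the signal through its mean value."""
    mean = sum(brightness) / len(brightness)
    centered = [b - mean for b in brightness]
    # An upward crossing (negative sample followed by a non-negative
    # one) marks the start of one pulse cycle, i.e. one heartbeat.
    beats = sum(1 for a, b in zip(centered, centered[1:]) if a < 0 <= b)
    duration_s = len(brightness) / fps
    return beats * 60.0 / duration_s

# Synthetic 10-second capture at 30 fps: a 1.2 Hz pulse (72 bpm)
# riding on a steady baseline brightness of 100.
fps = 30
samples = [100 + 5 * math.sin(2 * math.pi * 1.2 * t / fps + math.pi)
           for t in range(10 * fps)]
print(round(estimate_bpm(samples, fps)))  # → 72
```

A real app would, of course, have to smooth out camera noise and finger movement before counting beats, but the principle is just this.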

8. Well, to really understand explosives, you need to grasp molecular bonding between atoms, which depends upon an entire realm of knowledge and theory in chemistry. But the ancient Chinese who mixed the first gunpowder and the Renaissance Europeans who used it didn’t have that knowledge either. They just knew that when you mixed certain materials in certain ways and then touched them with a spark, interesting things happened. An old Roman could learn that, too.