Sunday, June 30, 2013

Humankind’s Next Step

I was in grade school when Sputnik went up, junior high when Alan Shepard first flew suborbitally, and college when Neil Armstrong stepped out on the Moon. So I grew up believing we would have an actual presence in space—be a spacefaring people. That’s one of the reasons I started reading science fiction and became a science fiction writer.

In the 1970s we retreated from the Moon and instead of long-range rockets built the Space Shuttle, essentially a vastly expensive truck to low earth orbit. So I thought we would be gathering our strength, learning the ways of space, and building an orbital presence. But one space station, poorly funded and still mostly a science experiment, is not much of a presence, and now the Shuttle is gone.

My hope fades, although it still glimmers a bit with the commercial efforts of companies like SpaceX and Virgin Galactic. I have given two young people of my acquaintance who have scientific minds this challenge: “Get us off this fxxxing rock.” Space probes are nice. Space probes are cool. But if we don’t go out there and establish a presence, we’ll never actually use space or be able to defend ourselves. When the next big rock comes along, we’ll be the new dinosaurs. When the first visitors finally come, we’ll just be part of the flora and fauna, a curious form of talking monkey, but ultimately expendable. We have to get up there to establish our right to live in this neighborhood. It’s where we go from here. It’s what intelligent beings are meant to do.

But the next steps will be hard. If all we wanted was more real estate, a place to put our growing population, it would be cheaper and easier to build underground cities or domes on the ocean floor. Orbiting space stations and O’Neill colonies1 are expensive. You have to lift every part of a facility like the International Space Station from the surface of the Earth. You could also get raw materials like silicates for cement and oxygen and metal ores for structural framing by mining the asteroids or the Moon, but in either case you would already need a healthy presence in these places to undertake such projects. Compared to these structures, it would be several orders of magnitude cheaper to build a five-star hotel, complete with Olympic swimming pool and tennis courts, on the summit of Mount Everest. The logistics of construction and supply would be a lot simpler, too.

Logistics, or how to get there. … Right now, everybody who goes above the atmosphere travels by some form of chemical rocket. It may burn solid fuel or a combination of liquid oxygen and liquid hydrogen, but essentially the motor is harnessing a controlled explosion. These vehicles are not like your family car or Flash Gordon’s rocket, where you just fill up the tank and go. It’s more like firing a bullet: a lot of manufacturing to prepare it, a flash and bang to get your boost, and then you’re sitting at your destination with a naked capsule or payload and a litter of spent shell casings stretching out behind you. Sure, the Space Shuttle was sold as being reusable, but it still had to be stripped down and practically rebuilt for every flight.

We have other ideas for spaceship motors: mass drivers, electromagnetic slingshots, ion drives, and fusion reactions, among the things we think we understand; and antigravity, warp drive, and wormholes among the things we can only dream about. But right now, we’re stuck with chemical motors. They will get you up to orbit, and with a boost from there you can visit the Moon or inner planets and, with a lot of planning and a narrow margin for failure, come back. Touring the outer planets is a one-way trip on built-up momentum, as is any excursion beyond the solar system. As far as ship technology goes, we’re in the same situation as a 15th-century sailor who knows about caulked planks and canvas sails. You can tell him about steel hulls, steam turbines, radar, sonar, satellite navigation, hydrofoils, and hovercraft. He’ll certainly want those things, but they will take generations of hard scientific work and the solution of many incidental problems in chemistry and physics to achieve.

Even if we had better drive systems, where would we go? Explorers like da Gama, Columbus, Drake, Shepard, and Armstrong will put up with huge risks and the possibility of failure to discover new lands. Settlers are willing to put up with a certain amount of hardship and uncertainty, but their hope of surviving and prospering must outweigh the troubles and frustrations they leave behind in the mother country. People who are living a good life full of family, friends, familiar places, and favorite pastimes in one country need a powerful incentive to pick up and move overseas, let alone to the Moon or Mars.

The Moon is a rock. We might find water frozen in polar craters and mine silicates for their oxygen. We would have to build underground to avoid being bathed in cosmic rays, solar flares, and ionizing radiation—not to mention that the surface is in total vacuum and alternately oven-bakes when the sun shines and deep-freezes when the sun’s over the horizon. A colonist would never see another tree, unless he planted one under a dome, and would not see much of a horizon unless he suited up and went outside. Until a critical mass of colonists, infrastructure, and self-supporting technology grew up—most of which we can imagine but not begin to specify or build—the first settlers would depend entirely on shipments from Earth. That would put them at the whim of government budgets or commercial contracts under the control of far-off citizens or suppliers. Given the history of funding space exploration and the amount of lip service we regularly pay to humanitarian efforts, that’s a slender line to hang your life on.

Mars is only slightly better. Like the Moon, it’s too small to hold much atmosphere. Opening a window on Mars is like opening one on an aircraft flying at 115,000 feet. In the laboratory, you’d call it a high-grade vacuum. And the composition is 95 percent carbon dioxide, the only gas heavy enough not to bounce around and escape the surface. Mars is also bathed in radiation because it has no magnetosphere, the magnetic field that surrounds the Earth and is generated by the convection of molten iron in our core. Whether Mars’s core is stone-cold dead or just strangely inactive is open to discovery, but at present the planet is geologically and magnetically dead.

Venus is a more likely prospect. It’s about the same size as Earth and, although closer to the Sun, is still within the habitable zone. Venus has a thick atmosphere that’s about 96 percent carbon dioxide and the rest nitrogen. The pressure is ninety times that of Earth’s atmosphere at sea level.2 Unlike Earth, however, where a malleable crust rent by volcanoes and plate tectonics allows energy from the core to escape, Venus has a surface that radar studies suggest is very young, with few asteroid craters. The crust appears to be thick and rigid, to retain heat, and to allow that heat to build up until the whole surface simply melts, subsides, and reforms. Living on Venus might be like living on the downhill slope of an active volcano.

We can dream of ways to “terraform” such planets. Since the problem with the Moon and Mars is lack of mass—and we just don’t have ways to artificially build up gravity yet, if we ever do figure it out—these bodies will be the last to be made like Earth. Venus is a more tractable problem, if we could think of ways to stabilize the heat exchange in the crust, thin out the atmosphere, and pump in something like the eighty-percent-nitrogen, twenty-percent-oxygen mix that we’d rather breathe. Terraforming the moons of Jupiter and Saturn presents similar problems.3

But until we learn a whole lot more in terms of science and engineering—and some of it can only be learned by going there, encountering the problems one at a time, and solving them—the amount of usable real estate in this solar system is limited to our home planet. And as for planets around other stars, so far we’ve seen mostly close-in gas giants and worlds far stranger and less hospitable than those here at home.4

But why am I being so negative, when I said earlier that our next step is off the planet and into space? Because the challenges that await us are formidable. Compared to them, Columbus or Drake only had to deal with leaking planks, brackish drinking water, fearful crews, and hostile natives who were still, at a biological level if not culturally, indistinguishable from themselves. Columbus and Drake and all the others who opened our planet’s horizons simply needed courage and funding. To go into space, we’re going to need courage, funding, and a depth of scientific and technical understanding we don’t yet fully appreciate.

But the alternative is to remain here as part of the flora and fauna, to wait for the next big rock or advanced species to wipe us out. If we don’t go, humanity’s tenure on this planet is not indefinite. We might not even last another ten millennia.

1. Proposed by physicist Gerard K. O’Neill in 1976, these are huge cylinders—five miles in diameter, twenty miles long—which rotate to provide an Earth-like gravity. You live on the inner surface in shirt-sleeves and farm the dirt under sunlight coming in through huge longitudinal windows. The cylinders—actually designed in pairs, linked at the ends, so they counter-rotate for stability—could be placed at Lagrange points in the Earth-Moon system. The details need to be worked out, of course. The cost will not be small.
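The spin rate such a cylinder needs is a one-line exercise in circular motion: centripetal acceleration a = ω²r, so for one Earth gravity ω = √(g/r). A quick sketch using the five-mile diameter quoted above (the calculation is mine, not O’Neill’s published figures):

```python
import math

g = 9.81                                   # target "artificial gravity," m/s^2
diameter_miles = 5
radius_m = diameter_miles * 1609.344 / 2   # about 4,023 m

# For spin gravity, a = omega^2 * r, so omega = sqrt(g / r)
omega = math.sqrt(g / radius_m)            # angular velocity, rad/s
period_s = 2 * math.pi / omega             # seconds per full rotation
rpm = 60 / period_s

print(f"spin rate: {rpm:.2f} rpm (one rotation every {period_s:.0f} seconds)")
```

The answer comes out to roughly half a revolution per minute, slow enough that residents on the inner surface would feel a steady pull rather than a spin.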

2. Early rumors of sulfuric acid rain on Venus turn out to be only half wrong: the clouds really are laced with sulfuric acid, though any such “rain” would evaporate long before it reached the broiling surface. That’s small comfort.

3. If we can one day learn to terraform the planets, how much of a problem will it be to adjust little things like the amount of trace gases in Earth’s atmosphere, adjust the wobble in our rotation, and smooth out any temperature flux not due to changes in the Sun itself? If you’re going to tackle living on the Moon or Mars, making a garden spot of the Earth on all of its continents and under its seas is a trivial exercise.

4. Of course, that may be largely due to our methods of detection, which look for a planet’s effects on its star’s motion and luminosity and so tend to favor large planets over small ones like Earth and Venus.

Sunday, June 23, 2013

The Value of Money

We all worry about the value of money these days. Inflation, interest rates, foreign exchange rates, and market valuations all push the value of our dollars—or euros, or yen, or yuan—up or down. Wouldn’t it be nice to have a safe, stable, universal kind of money that was always worth what you paid for it with your hard work and didn’t erode over time? Is there a magic coin that governments couldn’t debase with their manipulations of the money supply, stealing value from the rest of us?
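The erosion described above is simple compound decay: each year of inflation divides the purchasing power of a stored dollar by one plus the inflation rate. A quick illustrative sketch (the three-percent rate is my assumption for the example, not a figure from this essay):

```python
def purchasing_power(dollars, annual_inflation, years):
    """What today's dollars will buy after `years` of compounding inflation."""
    return dollars / (1 + annual_inflation) ** years

# $100 held under the mattress through steady 3% annual inflation:
for years in (10, 20, 30):
    print(f"after {years} years: ${purchasing_power(100, 0.03, years):.2f}")
```

At that modest rate, a dollar loses almost half its buying power in a working generation, which is the quiet theft the paragraph above complains of.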

For a while, it seemed like the bitcoin would be an inflation-proof form of money. This is a “cryptocurrency” or “virtual currency” that apparently—because I’m no expert in its mechanics1—acquires value only because people are willing to trade it. Because it is not backed by any one government nor controlled by a central bank, it supposedly does not suffer from inflation and all the rest. But as people are discovering, its value can increase or decrease unexpectedly due to issues of liquidity: if you can’t find someone to take it, you can’t spend it. And when you convert your bitcoins back to “real world” money—as you must do eventually, unless you can live on bitcoin transactions as if you were trading Linden dollars in Second Life—then you’re back to dealing with inflation, deflation, and foreign exchange rates.

It seems to me one can look at money in two ways: first, what it represents in an economic system; and second, how it is handled within that system. Money as a representation of underlying value, and money as a medium of exchange.

As a representation of value, money is a stand-in for—a kind of frozen and tradable form of—human energy. When it results directly from human mental or physical effort, as wages or the receipt of payment for a product made through that effort, it represents captured human energy. When it earns interest on a loan or a dividend on an investment, it represents a share of the work which that loan or investment enables, such as providing the worker or the enterprise employing him with tools and raw materials. The interest or dividend represents human energy in two ways. First, it represents the sacrifice the person paying the interest or dividend is making, in terms of the value of future work, in order to obtain those tools or materials, or some desired item such as a house or car, before the work has been done and the money is in hand to pay for them. Second, it stands for the effort that the person making the loan or investment puts into delaying his own impulse toward instant gratification in favor of some other purpose, such as increasing social goods or multiplying the value of past efforts. Money in all its aspects represents human energy—whether it be muscle energy or brain energy, or some aspect of human choice—that has been expended over time.2

As a medium of exchange, money needs a certain identity. The king’s shilling must be intricately inscribed so that unlicensed “coiners” cannot easily duplicate it in debased metal. The U.S. dollar or EU euro must be made with special paper and printing so that counterfeiters can’t make more for themselves with a photocopier and pass them out at leisure. The credit card you carry must have some recognized standing—MasterCard, Visa, American Express—and the backing of a recognized bank so that people will know they can get their value out of the exchange. You’d be hard pressed to get anyone to accept Larry’s Credit Card issued by Harry’s Bank. Who are these people? What kind of scam is this?

It used to be that the medium of exchange needed to have scarcity value—coins were made of gold or silver, metals hard to find and extract, and so acquired human-energy value in their own right. Other forms of scarce “money” have been diamonds, bars of precious metal, bits of jewelry, or beads of wampum—that rare purple tint found in an otherwise white shell.3 The idea that money had to have value all on its own went away in the Renaissance, when Italian bankers began using letters of credit and promises to pay in place of moving around bags of gold. You could trust the banker’s correspondent in the faraway city where your goods were being traded: he knew your banker, had the gold secure in his vaults, and supported just as much trade going in the other direction, so it all evened out.

We might look at inflation as a natural phenomenon: the devaluation of past human effort. That is, we prize the work that goes into the things we need now or will need in the immediate future. We place less value on the things we already have that may be old or wearing out. So the money that was paid for them then, representing human energy that is no longer of much value, has decayed. Certainly, when Khufu’s pyramid was new, it was a thing of great value—eternal resting place to meet the needs of a godlike pharaoh, source of prestige and pride for the Egyptian people. Today, the energy that went into making it—the labor of thousands of people 4,500 years ago—has been totally discounted. As an Egyptian national treasure and remnant of world heritage, it is “priceless.” But if the pyramid disappeared tomorrow, no one would pay the millions of dollars required to rebuild it.

In this sense of money as a representation of value, it cannot be inflated. A government cannot immediately devalue the effort that went into earning it. The only way that money as frozen energy can be devalued is if the product of that energy has no value or reduced value. A worker who makes a shoddy product may release it to the market in return for the same amount of money as a quality product, but the product in use reveals its flaws and so reduces the value of the effort put into it. In that case, the particular piece of money—being equal to all the other monetary units received by the workers on the line, whether they make their products well or badly—actually gains in phantom value. The worker has gained money under false pretenses, for time and effort he did not expend or expended incompetently.

It is only when money is divorced from its relation to production and recompense, from its enabling function as capital, from its purchasing power as a loan to be repaid later, that money can be inflated. In a barter system, the value of goods and services remains relatively constant. One might for a while “bid up” the value of certain goods that draw attention and so desire. One might “bid up” the value of some popular or rare human services against others that are mundane or easy to do. But the disjunction between effort and the value of money used to facilitate barter trade never becomes very great.

When government asserts its control, however, then the disjunction can arise and grow. When governments control the coinage, they can debase the metal in the coin or reduce its size, supposedly creating more “money” in the marketplace with the same amount of precious metal. When governments control the printing of money, they can print paper bills—even those supposedly backed by a fixed exchange rate with a certain value in metal or other “hard” currency—in excess of the value supposedly backing them. And when the value of the money in circulation is linked only to the government’s promise to make an exchange,4 or to some abstract value like the exchange rate or the government’s promise to pay future interest, then the value is in free fall, supported only by rumor and reminiscence of past value. That is, the value of money is divorced from effort—except in some oblique way relating to the roll-up of value in the gross domestic product and the import/export ratio.

The only way to make money “inflation proof,” then, would be to tie it back to human effort, time, and product value in some way with which the government cannot interfere. The problem is that all questions of value are ultimately malleable. There is no “gold standard,” no common accounting for a person’s time or effort. Even human life has no fixed value, no uncontested weregeld such as the old Anglo-Saxon and Salic laws tried to establish. If it’s your life or the life of a family member or loved one, the price is beyond counting. If it’s the life of your local shopkeeper or the farmer, miller, or baker who stock his shelves with bread, then you can establish a temporary economic value—in relation to your own needs. If it’s the life of an anonymous peasant in China or India, well then pfft!

Value is a personal, self-centered, temporary thing. We value the man or woman who can write a clever or useful bit of software—but only in terms of coding and design. We value a major league ballplayer—but only in terms of pitching and hitting. No one would pay Mark Zuckerberg $50 million to play in the World Series, or Tim Lincecum to write an iPhone app. The ten thousand stone cutters and strong backs who made Khufu’s pyramid are all dust now and, for all the good it does, the monument itself might as well be dust—except stone is so durable.

The value of money is a fluid and fragile thing, as is the economy in which it circulates. It is based on trust, expectation, hope, and fear, which are all human qualities. Look as hard as you might, you will find no external standard, no ultimate value, nothing that endures beyond tomorrow. Money, like the wink of a king, is in the eye of the beholder.

1. As I understand it, bitcoins are simply encrypted strings of data secured by a digital signature. You acquire them; you lock them up in a digital “wallet”; you transfer them to other people’s wallets. In place of a government, bank, or other financial institution—“a trusted third party”—to keep you from spending the same string of digits in two different places at once, the bitcoin system uses a system of timed identity stamps encoded into the string. You start by creating a virtual wallet that lets you accept a string of data from other users as payment for something you’re selling and then you pass that string along to still other users in exchange for something you’re buying, and each transaction carries a timed code. This is not all that different from the dollar-denominated strings of code you keep in a bank account and pay out with a check or credit card transaction through the bank’s clearing house or card service.
        What I don’t understand is where the first bitcoin came from, or how more bitcoins would be created sui generis as the volume of transactions increases and the value of the system swells. In a centrally supported system, the amount of dollars in circulation is controlled by the Federal Reserve or other central bank, which sells its own securities to financial institutions in the banking system and sets the amount of reserves the local bank must hold against the dollars it has on deposit. Without this ability to authorize the creation of new money, only that first bitcoin would ever be in circulation, passing from hand to hand.
        Supposedly, you can buy fresh bitcoins with conversion of the funds you already have in your local currency. But that just passes the value proposition back to your Federal Reserve or other central bank: if your dollars are eroding through inflation, you will need more of them this month to buy the same amount of bitcoins you bought last month. The value of each bitcoin might grow in terms of the dollars you put in. But the same could be said of converting your dollars to Swiss francs or Japanese yen for a transaction. And you arrive at the other side of the value equation when you cash out your bitcoins. The bitcoin is not really a hedge against inflation.
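The time-stamped chain of transactions described in this footnote can be sketched as a toy hash-linked ledger. This is purely illustrative—the real Bitcoin protocol adds proof-of-work mining, digital signatures, and a distributed block chain, none of which is modeled here, and every name below is my invention:

```python
import hashlib
import json

def make_transaction(prev_hash, payer, payee, amount, timestamp):
    """A toy 'coin' transfer: each record commits to the hash of the one
    before it, so reordering or altering an old record breaks the chain."""
    record = {"prev": prev_hash, "from": payer, "to": payee,
              "amount": amount, "time": timestamp}
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record, digest

def verify_chain(chain):
    """Recompute every hash link; any tampering is detected."""
    prev = "0" * 64  # genesis marker
    for record, digest in chain:
        recomputed = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev or recomputed != digest:
            return False
        prev = digest
    return True

# Build a tiny chain: Alice pays Bob, Bob pays Carol.
chain = []
t1, h1 = make_transaction("0" * 64, "alice", "bob", 5, 1)
chain.append((t1, h1))
t2, h2 = make_transaction(h1, "bob", "carol", 3, 2)
chain.append((t2, h2))
assert verify_chain(chain)

# Tampering with an earlier record invalidates everything after it.
chain[0][0]["amount"] = 500
assert not verify_chain(chain)
```

The point the footnote gropes toward is just this: the “trusted third party” is replaced by the fact that every record mathematically commits to its whole history, so no one can quietly spend the same string twice.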

2. If this looks like a Marxist proposition—that all value derives from human labor—then so be it. Actually, I believe I’m expanding on the Marxists to account for human transactions such as loans and investments. I would say that all value comes from human energy, expressed both as physical labor and as willful intention and purpose. The desire to delay present gratification in order to invest in an enterprise is just as valid a human activity as the necessity of working each day for bread or a paycheck. Certainly, all value derives from human desires and activities. For example, consider that an uncut diamond the size of the Earth might lie at the heart of Jupiter, but since it has nothing to do with human beings—that is, we cannot get to it, touch it, treasure it, or trade it—that diamond has no value in our economic system.

3. Contrast this with a medium that has no human-energy or scarcity value. “Here, let me pay you with this rock.” “No thanks, there are plenty more in a pile over there.”

4. Like a Renaissance Italian banker who has promised to pay out his gold but issues other letters of credit instead and never has to open his vault.

Sunday, June 16, 2013

A Drone in the Hive

Looking back over my career history, I have to admit my function in the economy has been that of a drone in the beehive. I have produced nothing that my society desperately needed and sometimes nothing it even wanted. Yet I’ve been gainfully employed for most of my forty years in the business world, except when I took sabbaticals to write my novels and then lived on my savings. Calmly accepting my dronehood may seem like a harsh view of affairs, but as you near the end of a long run, you become dispassionate: the big picture is bearing down on you.

I always knew I was on economically shaky ground. One of my grandfathers was a civil engineer who built skyscrapers, the other a lawyer and county judge who helped people with disputes, wills, and land sales. My father studied mechanical engineering and pursued a career that included helping develop radar during World War II, then working on early developments in nuclear fuel processing, digital computers, and electronics. My mother studied landscape architecture and became a draftsman during that same war before retiring to become a housewife and mother to two boys. All of them led good productive lives. But I thought I had a talent for writing, and so I studied English literature—and even then it was considered the classic pursuit of the useless dilettante.

When I graduated from the university, I might have become a teacher. English teachers help young people become expressive and imaginative. The world would be a darker place without such people to introduce young minds to the literary and dramatic arts. Or I might have become a journalist. Journalists investigate and inform millions of people, and occasionally they change the course of history. But either career, teacher or journalist, would have required me to engage in specialized study, a different undergraduate major entirely from English literature.

My undergraduate degree pretty much qualified me to get into graduate school. There, with another four years of study and research, I might have earned a PhD that qualified me to teach at the university level. And whom would I teach? Other undergraduate English majors aspiring to become English professors, wear tweeds, smoke a pipe, attain tenure, and be paid a scholarly stipend to find the deeper meaning in the works of Jane Austen and Henry James.1

But when I graduated as a baccalaureate, after sixteen years of continuous schooling since the age of six, I was already burned out on learning, deep reading, and research. Applying to grad school in order to enter a life of same had for me the feeling of walking into the ocean, then swimming, then floating, then drowning.

So I found other things to do with an English degree. I wanted to write novels from the start, but at first you need a day job. With the help of two of my favorite professors, I was accepted as a book editor at the university press. That meant sitting for eight hours a day marking up scholarly manuscripts for prose style, punctuation, and printer’s instructions. That job ended with the state’s next budget crunch,2 and I traveled west to California. There my dad had retired from engineering and was running a custom drapery making and cleaning business; so I hung curtains in high-rise offices—my only non-English-major job since graduation—until I found work as a book editor for a publisher of railroad histories and Western Americana. That meant more hours of marking manuscripts, this time for enthusiastic railroad buffs and amateur historians.

After that job ended, because the owners were winding down the business, I moved on to technical editing—marking up and coordinating the production of engineering reports and proposals; then to public relations—writing press releases and marketing brochures; on to employee communications—writing in-house newsletters, magazine articles, and executive speeches; and finally into manufacturing documentation—writing procedures for scientists and skilled labor making pharmaceuticals and biotech reagents.

Don’t get me wrong. I’ve enjoyed this work and found it personally rewarding. I got to rub elbows with engineers and scientists working on the cutting edge of modern technology: civil and mechanical engineers building hydroelectric dams, steel mills, cement plants, transportation systems, and power plants; utility engineers developing alternative energy projects and building electricity and gas transmission and distribution networks; control systems engineers monitoring processes at an oil refinery; biological scientists developing new medicines; and chemists and engineers inventing new ways of studying the genome. In every one of these jobs I found that, if you ask the right questions and don’t pretend to know more than you do, you can learn a tremendous amount.3

But upon reflection I can’t claim I was ever necessary to any of these industries.

The scholars, railroad buffs, and amateur historians whose manuscripts I edited really weren’t bad writers. I might have rearranged a few of their dependent clauses and unsplit a few infinitives, but it’s not as if I was turning straw into gold. And marking their manuscripts for the typesetter, specifying line widths, paragraph indents, and punctuation marks—for example, making sure each and every dash was designated as an em-dash, as if the typesetter might suddenly go crazy and start inserting en-dashes or hyphens—was sheer fastidiousness in pursuit of absolute clarity.

The engineers and scientists I worked with were all already pretty good at explaining their work. They could make it clear to me; so they could easily have gone ahead and produced their own reports, proposals, and manufacturing documents unassisted. They might occasionally have misnumbered a paragraph or left out a comma or left in an unexplained acronym, but technical understanding would still have been achieved.

The employees for whom I wrote newsletters and magazine articles—and later created topics for the company’s internal website—already knew what was happening inside the firm. Whole batches of our newsletters got dumped in the wastebasket as soon as they arrived at local offices. The magazines went into the trash as fast as they arrived in home mailboxes. The internal website was notoriously unread and unremarked. The company events and meetings I arranged and managed drew perhaps three percent of the employee base, because everyone else got the inside scoop from their supervisors and managers.4

The executives for whom I wrote speeches were all able speakers. They would work over my text for two or three drafts and then, arriving at the podium, just glance at the topic headings and wing it. A capable engineer or executive who knows the business can always speak confidently about the subject—just as a capable politician, general, or attorney can speak well extemporaneously. I might have found them a joke or two to start, arranged a few of the thoughts in the middle part, and given them a logical stopping point. But they didn’t need the talents of a writer in order to use their own words.

In every case, I held these jobs and performed these functions, not because the actual authors, readers, users, and speakers required me, but because someone higher up in the organization’s management felt a need. The executives in charge of funding my position thought those readers and users needed a special functionary—me as editor, writer, and coordinator—in order to backstop the process, prevent the occasional costly mistake, or put across their message. I was a security blanket, a safety net, a spoonful of grease in the gears. My real function was to put the minds of the organization’s chiefs at ease. When I dealt with those chiefs directly,5 I was just a chauffeur to their words—as if they were incapable of driving themselves.

And between these gigs in editing, writing, and documentation, I wrote my novels. I consider that my true calling, although it’s never been very lucrative, only personally rewarding. Writing fiction doesn’t pay the rent. Most months it doesn’t even buy a dinner out.6 It’s not that I write bad novels.7 It’s that I’m competing with a million other English majors who also have a talent for words, a bright idea, a typewriter—now a computer—and a ream of paper.

The point of all this explanation is not to justify a woe-is-me. I’ve done very well, thank you. These jobs rewarded me adequately in my younger days and then very handsomely toward the end of my career. They enabled me to support myself and my family, pay the mortgage, eat out regularly, and buy both the necessities of life and my preferred toys, books, and music. Life has been very good for me.

I can say this as a person who never made an ounce of steel or cement nor designed or built the plants that produce them. I never generated a kilowatt of power nor strung a foot of transmission wire. I never discovered any new medicines nor invented a new technique for exploring the genome. I have written books that help explain the human condition to my satisfaction, but I can’t claim to have given a vast crowd of readers much insight or many aha! moments.

The point of all this explanation is that we have a rich economy—rich beyond the dreams of any conquering Ramses, Alexander, Augustus, or Napoleon. Working through free markets, widely shared scientific principles and technological discoveries, a mobile labor force, and the driving tempo of creative destruction, we have built an empire of vast wealth. Our economy produces enough surplus wealth and productive energy that we have been able to graduate millions of English majors, fine arts majors, psych majors, anthropologists, and other pursuers of philosophy and letters over the years and still find them good-paying jobs as handmaidens to business and industry.

If the economy is an ecology, where the more economic activity you have, the more niches for productive enterprises and aspiring people you create, then I’ve been a drone in the hives of busily pollinating worker bees. I can’t imagine that the barren, dreary economies of places like Soviet Russia and the old Communist China were ever able to find so many happy niches for their own literature, arts, and psychology majors. Or that they were able to contemplate educating and graduating so many drones-to-be in the first place.

Western civilization, with its distinctive market practices and technological revolution, has erected a fountain of wealth and privilege. I’m grateful for the opportunity it gave me to exercise my one best talent and still make a good living.

1. For a while I did wear tweeds and smoke a pipe. I can’t say they did anything for my image.

2. The press was attached to my alma mater, Pennsylvania State University. That job crossed paths with the first of a long string of recessions that have since plagued my working life, this one starting in 1970. It came down to plowing the back roads of Pennsylvania that winter or employing someone like me to edit scholarly publications. The plows won. That was the last government job I ever held—and good riddance.

3. I spent my university career avoiding math and science in favor of literature and the arts. I spent my working career making up for the self-imposed deficiencies in my education.

4. The most meaningful parts of my job, as far as everyone else was concerned, were the annual Halloween haunted house and the Christmas lunch. They were big hits—until another budget crunch came along.

5. This is that “C suite” and “C level” stuff job descriptions keep talking about.

6. I once added up all my advances and royalties from fiction writing over the past quarter-century and figured they amounted to approximately my annual salary from one year in the middle of my business career.

7. Or that’s my opinion.

Sunday, June 9, 2013

The Unriddling of Quantum Bayesianism

I cannot claim to actually understand quantum theory. I’m a writer, trained in the English language, literature, and humanist principles, not a degreed scientist nor any kind of a mathematician. In fact, I have an ingrained distrust of mathematics. I sense that, like the grammar of a language, mathematics is a human-devised system. Mathematics reflects, first, our groping toward a current understanding of the nature of reality, which is constantly subject to revision; and second, the inherent workings and prejudices of the human mind, which has grown out of complexity and cannot always justify itself.1

But I thought I understood—at least in part, without all the equations to solve—the purpose of quantum mechanics in dealing with systems that cannot be directly observed. Observation is a basic tool of science. You might almost say that without observation and analysis, science does not exist. Without observation and reality testing, science becomes conjecture, metaphysics, and mysticism.

The effects of the act of observing upon the outcome of an experiment or observed situation were never much of a problem during most of science’s flowering in the Renaissance and the Enlightenment. Astronomy, the study of the cosmos beyond Earth’s atmosphere, is all about observation without experimentation that might affect the outcome. You look directly at the light from stars and galaxies; you study that light as it reflects off planets and moons. Nothing you can do to that light—refract it, break it, analyze it—can have any effect on the stars and galaxies themselves. Most of the objects under observation are also light-minutes, if not light-years, distant from the observer. A great many of them have already changed position, changed their nature, or even burned out and ceased to exist.

Since then, we have discovered that many objects and events out there in the cosmos either do not shine and so cannot be directly observed—think of brown dwarf stars and dark matter—or they are so far away that they can only be observed indirectly—think of exo-planets whose very presence must be inferred by their orbital and luminary effects on their anchoring suns. Still, all we can do is study, observe, and infer. Until we launch spacecraft and expeditions, we cannot change the objects of study.2

Down here on Earth, under the atmosphere, we have always run the risk of an observation effect. You can study the physics of motion, action, and reaction with billiard balls. But you must always allow that a human hand wields the cue stick, or that humans designed and built the spring-loaded machine that gives the ball its first impetus, or that humans at least milled the slate bed of the billiard table and applied the felt that covers it. Human eyes watching the balls do not affect their paths, but human hands and inputs are present at every phase of the experiment.3 And even the human eyes that see the results and human brains that interpret them are subject to variabilities of perception.

But still, scientists did not have to contend directly with the observation effect until they entered the realm of the very small, the quantum realm of atoms and flying particles. These individual objects are not only too small to see by reflected light in a microscope, but the medium of observation—flying photons in a light-based microscope, flying electrons in an electron microscope—will interfere with the location and direction of motion attributable to an observed proton or neutron. You can know where a flying proton was coming from by capturing it in a sensor. You can know where it once was by hitting it with a smaller particle and studying the latter’s deflection. But you can’t know where a flying particle is and where it’s going at the same time. To observe it is to send it somewhere else.

This is not a problem that can be overcome by using better instruments, manufactured to closer tolerances, more finely tuned, using a more fine-grained medium of observation. Working with atoms and particles and their associated energies, you can only know after the fact: after you have disturbed their original motion and inadvertently changed it. Or you can know something by observing great masses of such particles operating together, cancelling out individual variations, moving in a bunch as, perhaps, thousands or millions of atoms adhering together to become a dust particle that’s big enough to bounce a photon off without sending it somewhere else.

Quantum mechanics governs the study of individual objects at this sub-microscopic level. As I understand quantum mechanics—and remember, I’m an English major standing outside looking in—it accepts this after-the-fact uncertainty and has principles for dealing with it. Quantum mechanics accepts that an electron buzzing around an atomic nucleus has no observable location or path.4 If you want to talk about the electron, you write an equation about the probability of where it might be and where it’s headed. But trying to nail the electron by actually observing it is futile. First, because you can only obtain old, that’s-where-it-was-then data. Second, because you’ve interfered with the electron in the wild and changed it by your very act of observation and experiment.

The story of Schrödinger’s cat is intended to represent this acceptance of uncertainty. Put a live cat in a box with a vial of poison and a mechanism to break the vial only if a random event—the decay of a radioactive atom—occurs. Close the box and wait. As you wait, the cat may still be alive or may already be dead. You can’t know until you open the box. Quantum mechanics says the cat’s fate is suspended in a wave function, which is an equation that describes the probability of the atom decaying and the chances of the cat being alive and of the cat being dead.

Something I learned recently—and I’ll get to where in a minute—is that we can think of probability and statistics in two different ways, as frequentists or as Bayesians. The frequentist calculates a likelihood by studying the frequency of an occurrence among a large number of random events, such as the frequency of heads coming up in a long run of coin tosses. With a lot of data taken under identical circumstances, you can calculate the probability of the toss almost exactly: 50% of the time heads comes up, and 50% tails. For the frequentist, the proposition that a coin toss will have these results establishes a kind of predictable reality. By contrast, the Bayesian—so named after the work of 18th-century mathematician and theologian Thomas Bayes—measures the plausibility of an event when you have incomplete knowledge. The Bayesian knows you can never do enough coin tosses to fix that 50-50 as an exact number. And even if you tried, other factors might affect the tosses, such as air currents in the room, tiny imperfections in the coin’s balance, and microgravities due to plate tectonics. So Bayesians are careful about making assumptions, know their calculations may include subjective elements, and are quick to change or abandon their predictions as new data come to light.
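The contrast above can be made concrete with a small numerical sketch of my own (not from the article): the frequentist takes the raw observed frequency of heads as the answer, while the Bayesian starts from a prior belief and revises it as each toss arrives. Here the Bayesian prior is modeled as a Beta distribution, a standard textbook choice for coin-toss problems; the function names and numbers are mine, for illustration only.

```python
import random

def frequentist_estimate(tosses):
    """Estimate P(heads) as the raw observed frequency."""
    return sum(tosses) / len(tosses)

def bayesian_estimate(tosses, prior_heads=1, prior_tails=1):
    """Update a Beta(prior_heads, prior_tails) prior with the data.

    The posterior mean shades the raw frequency toward the prior,
    and it shifts as each new toss arrives: the Bayesian stands
    ready to revise the estimate in light of new evidence.
    """
    heads = sum(tosses)
    tails = len(tosses) - heads
    return (prior_heads + heads) / (prior_heads + heads + prior_tails + tails)

random.seed(42)
tosses = [random.randint(0, 1) for _ in range(100)]  # 1 = heads, 0 = tails
print(frequentist_estimate(tosses))
print(bayesian_estimate(tosses))
```

With a hundred tosses the two answers nearly coincide; the philosophical difference shows up in how each camp treats the number, not in the arithmetic.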

A frequentist looking at Schrödinger’s box can take what he knows about previous decay events in the triggering atom and write a simple equation showing the likelihood of the cat being either alive or dead. For standard quantum mechanics, that equation becomes the observer’s reality. Until the box is opened, the observer must believe that the cat exists in two states, both alive and dead, an unresolved wave function—which only resolves itself, or “collapses,” when the observer opens the box. In the same way, a physicist can only know about the position and direction of an electron as a probability function. The electron isn’t anywhere in particular nor is it going anywhere else in particular until it’s knocked off its course by observation.
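The frequentist’s “simple equation” for the boxed cat can be sketched in a few lines (my own toy numbers, not from the article): if the triggering atom has a known half-life, the probability that it has not yet decayed after some elapsed time, and hence that the cat is still alive, is just an exponential in that half-life.

```python
def p_cat_alive(minutes, half_life):
    """Probability the atom has NOT decayed after `minutes`,
    given its half-life, and hence that the cat is alive."""
    return 2 ** (-minutes / half_life)

# Assuming a hypothetical one-hour half-life:
print(p_cat_alive(0, 60))    # 1.0  -- box just closed
print(p_cat_alive(60, 60))   # 0.5  -- one half-life elapsed
print(p_cat_alive(120, 60))  # 0.25 -- two half-lives elapsed
```

Nothing in this arithmetic requires the cat to be in two states at once; the number is a statement about what the observer can know before opening the box.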

I had always thought of the wave function as a physicist’s cautionary tale: “You can’t know until you look, and looking changes the results. So when studying individual events at the sub-microscopic scale, you just have to deal with probabilities rather than certainties. The only certainties are in mass effects, taking the average of movements among a large number of randomly moving particles.” After all, that works for billiard balls. Even though the atoms composing the ball’s ivory or resin may individually be oscillating or moving in different directions as the polymer chains writhe and adjust to impact, the ball itself is going in a predictable direction, has a specific location at any one point in time, and is not deflected by the act of our looking at it.

That is, the probabilities described by the wave function are a statement about the limits of human knowledge.

It turns out this quantum caution works on a macro scale, too. We can’t know what every consumer in the nation thinks about a new laundry detergent or how every voter will cast his or her ballot. And the simple act of meeting them all individually and asking can have the effect of changing the data. “Hmm, why did the pollster ask me that? That’s an aspect of the product [or candidate] I’ve never considered before.” Or “The man asking me these questions looks sneaky and untrustworthy. He must be trying to dupe me or make me change my mind in some way. I’d better not tell him what I really think.”

But we believe we can sample enough data to come up with a reliable probability. When you study the average thoughts or opinions of a large enough number of randomly thinking individuals—not the whole population, certainly, but a “statistically meaningful” percentage—we all accept that you can come up with a pretty close approximation of an election’s outcome or a product’s success in the marketplace.5
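That “statistically meaningful” percentage has a simple form, a standard polling formula rather than anything from this post: for a random sample of n respondents, the 95% margin of error on an observed proportion p is roughly 1.96 times the square root of p(1−p)/n. This is why a poll of about a thousand people can call an election to within roughly three points.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed
    proportion p in a random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical 1,000-person poll showing a 52% preference:
print(round(margin_of_error(0.52, 1000), 3))  # 0.031, i.e. +/- 3.1 points
```

Note that quadrupling the sample only halves the margin, which is why pollsters stop near a thousand respondents instead of canvassing the whole electorate.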

As I said, it’s always been obvious to me that Schrödinger’s cat and the wave function represent a cautionary tale. It never occurred to me that physicists might think the cat actually was both alive and dead until the box was opened, or that the electron might not have any location or direction until observed. That is, I never suspected anyone believed that their wave functions, their probability calculations, were real things, representing any kind of ultimate reality. Thinking that can lead to the absurdities—like the cat’s being both alive and dead at the same time—which have plagued quantum mechanics from the beginning. When you think you can write an equation that includes conjectures about probability and yet still represents reality, you are halfway to metaphysics and mysticism.

Call me a natural-born Bayesian.

I’ve just finished reading an article in the June 2013 Scientific American, “Quantum Weirdness? It’s All in Your Mind” by Hans Christian von Baeyer. The upshot is that a new model developed by three physicists dealing in quantum information theory6 is currently resolving the absurdities found in most interpretations of quantum mechanics. They call their work “Quantum Bayesianism,” or QBism for short.7

Quantum Bayesianism applies the Bayesian view of statistics and probability—fluid in its hypotheses, subjective in its approach, driven by direct observation of the data, and mindful of the nature of conjecture—to the observational restrictions of quantum mechanics. According to QBism, the boxed cat may be alive or dead. The conundrum has nothing to do with the cat, which already knows its own fate. The wave function is a construct in the observer’s mind to balance the two probabilities—live cat, dead cat—until the box is opened to confirm one or the other. According to QBism, the wave function of an electron or any other flying particle is a construct in the observer’s mind to capture all of the particle’s possible locations and directions—anywhere inside the nave of our hypothetical cathedral (see Note 4 below), maybe heading toward the altar or heading toward the door, but certainly not out on the Pont Neuf or somewhere in Rheims heading for Belgium. But this equation no more determines the electron’s actual situation than our refracting a spectrum from the light of Arcturus affects the star itself.

This is a quantum mechanics I can believe in and follow, even if I can’t write the wave function equations myself. This is a realization of my supposition about the physicist’s cautionary tale. And that only leaves one question.

This new Bayesian interpretation suggests that earlier physicists working in quantum mechanics have all along been thinking that their equations are not statements about the limits of knowledge but statements about the nature of reality: the electron indeed has no location or direction except in a probability equation; the cat really is both alive and dead. This seems to have been the contention of the orthodox, or Copenhagen, interpretation of quantum physics for the past eighty or so years. If so, then my question is: what other craziness will they believe and/or have they promulgated? It’s one thing to be pure in adhering to your cautionary tales, to state with conviction and absolute belief that you cannot know a thing but only theorize about it. It’s quite another to believe that your theories thereby replace reality.

To me, QBism represents an awakening from a long, dark period of conjecture. We may be back at a ground state of unknowing. But I find that more comforting than for human science to continue climbing the ladder of mathematics and internal logic, past the point at which the rungs actually coexist with observable reality, and then onward, upward, higher, onto rungs that are purely theoretical. At a certain point, you have to open your eyes, look down … and fall.

1. See my earlier postings Fun With Numbers (I) and (II) from September 19 and 26, 2010.

2. However, this premise is changing, now that we can land humans on the Moon and human-made machines on Mars, and litter the space in between with satellites and their detritus. Mission planners make every attempt to sterilize their landers, so that the possibility of carrying an Earth microbe to Mars is greatly reduced. But we can no longer say that the transmission of Earth-borne microbes to Mars and their later, mistaken “discovery” there is an impossibility. We are no longer simply observing Mars with distant telescopes; we are doing hands-on science with all its perils.

3. Similarly, any physics experiment not involving sight and light, such as work with sound waves or seismic waves, depends on the design and manufacturing quality of the detecting device—the microphone or motion detector.

4. And the possible number of locations and paths are huge. I’ve heard an atom described in terms that, if the nucleus of a hydrogen atom were the size of a pea, and you placed it at the center of Notre Dame in Paris, the electron might be anywhere inside the cathedral and headed almost instantaneously anywhere else inside the cathedral.

5. But there still is a residual observation effect. Human minds are far more complex than flying electrons. They are also clever, subtle, self-conscious, and suspicious. The kinds of questions asked in a poll or product survey, the framing of each question, and the hidden bias that might or might not be revealed to an alert or cautious respondent—all work to skew the data. Compared to opinion polling, the vagaries of quantum physics are crystal clear.

6. Carlton M. Caves of the University of New Mexico, Christopher A. Fuchs who is now at the Perimeter Institute in Ontario, and Ruediger Schack of the University of London in a 2002 paper titled “Quantum Probabilities as Bayesian Probabilities.”

7. And if you mentally read that contraction as “cubism,” you’re not alone.

Sunday, June 2, 2013

On Becoming a Writer

I recently posted about the tension between the known and the unknown—between what you get by following the rules and what you have to forget in order to discover something new—in fiction writing or in any art form.1 I also made reference to the sorts of formulaic story structures, fully developed and unchanging series characters, and stock situations you find in popular authors like Ian Fleming, Agatha Christie, and Edgar Rice Burroughs. That leads me to think there are two main approaches to becoming a writer and two motivations for writing.

The first kind—call them Type I, and include the authors named above—has a talent for knocking out words and wants to make money at it. They will write for the market because their first goal is to sell books. That means they will look around at the competition and try to do the same thing only better. They will let their agent propose book subjects and try to write them, or quit on a project as soon as their agent says something like, “That’s going to be a hard sell.” They will follow their editor’s instructions out the window, taking seriously a half-serious suggestion like, “Couldn’t one of your characters be a vampire? … Well, just think about it.”

They will plan and scope their books as a franchise or series, because those build faithful readers and generate their own recognition. But writing for a franchise creates certain limitations. For one thing, you can’t kill off your main character—or even put him at a credible risk for death and maiming—because every reader knows he’s coming back in the sequel. So the plot suspense becomes not “Will he die?” but “How will he get out of it?” Some writers will shrug and say, “Well, I don’t like to kill off my characters anyway.” But death, the ultimate mystery, the final disappearance, the last goodbye, is a powerful dramatic tool. Reduced to a problem to be escaped, it loses some of that power.

This kind of writer will also join the franchises of other writers, either as subsidiary author of an original book, or as author of a book or media tie-in in a collaborative series. The goal of working on other people’s stories, in other people’s universes, perhaps using a stock set of familiar characters, is not so much to create something new as to leverage existing recognition, familiarity, and fame among readers.

Truth in advertising: four of my novels with Baen Books2 were collaborations with senior authors, undertaken at my editor’s suggestion. These were not the sort of collaborations where two authors of equal or near-equal rank think of an idea, kick it around, develop plots, characters, and scenes, and then take turns at writing them. These were commercial ventures, in which a famous author had a “trunk” idea—that is, a story he had thought of, perhaps done a bit of research and plotting, but then abandoned or “put in the trunk”—that he knew he was never going to write. The author had offered this formative material to Jim Baen with the understanding that Baen would find a young author to work on it. The young author—me, in these cases—would then flesh out the story and write the manuscript. The senior author would get first edit and refusal rights to the book. And if the senior author finally approved the manuscript, it would be published under both our names and we would split the royalties.

Recognition is everything in the writing business. You can get it by leveraging something that’s already in the public eye. Mining current events and scandals is what Truman Capote did with In Cold Blood: take a notorious case and write a book about it. It’s what every reporter-turned-book author tries to do. Publishers know they can sell a book whose subject has already caught the public attention, either through building on past sales of a franchise fiction universe, through linking with a famous name, or through explaining or amplifying current events or famous cases.

Type I writers are looking for a sure-fire hit, for access to the mass market, for an inside track to the public imagination, and to become the next big thing. Ten years ago they were kicking themselves for not being the first to think of writing about child witches and wizards at a school for magic. Today they kick themselves for not thinking of writing about a virtuous young woman who falls into sexual bondage to a billionaire. The kinds of books they write have “bestseller” written all over them.

There is a tendency for this kind of writer to think that story and plot are a kind of wind-up toy: give them a subject, and they’ll start banging out character types and story lines. They believe the old saw that there are only seven plots (or five, or thirteen) in all of storytelling. They like to think they are professionals because they can name the tools of the trade, as a carpenter knows his saws, chisels, and hammers.

I have nothing against that kind of writer. They create much of our popular fiction. They also can occasionally create something new. Ian Fleming created a new kind of spy story that mixed glamour and assassination and did away with the disguises, codebooks, and grunt work of traditional espionage. Agatha Christie created the archetypal mystery, where a famous detective identifies the criminal from among a multitude of colorful suspects. And Edgar Rice Burroughs created a new kind of science fiction, blending science and fantasy with outright sexual tension.

I have nothing against that kind of writing—except I cannot do it. I tried, as noted above in my collaborations through Baen Books. But those efforts were never very satisfactory, not for the readers, not for the senior authors involved, and not for me. I discovered that I am a Type II writer.

Type II’s have fallen in love with the inner workings of the imagination and with the writing process itself. For them, writing is a way of finding out what’s real, what they know and think about the world. Stories are not wind-up toys but voyages of discovery. They tend to believe their characters are real people, not types. They tend to follow their story lines as life experiences, not archetypes. Boy wizards and bondage heroines are all very well, and it’s nice that somebody else is writing about them. But the meat and potatoes for the Type II writer live in other places.

These writers create strange books that you as a reader either love or hate or just don’t understand. They become bestsellers only by accident. But they secretly believe they write books that will be remembered for the ages. Type I writers think the II’s are dilettantes and naïve. Most Type II’s struggle in obscurity, but if once they can link a story to the meridians of the human heart and imagination, they can eventually find readers. The old publisher’s mid list—now the world of epublishing—is filled with Type II’s.

If a Type I happened to leave his manuscript on the subway, anyone else could pick it up, take it to an agent or a publisher, and pass it off as his or her own. That’s because, while the writer wants his story to stand out as a reflection of the current public mind and mood, he still wants it to blend into the popular culture and pass smoothly across the reader’s defenses and through the reader’s consciousness. He has been trained—or knows instinctively—that quirky little bits of prose style get in the way of this scientifically selected process.3

If a Type II left a manuscript, it would be so unique that an agent or editor would likely say, “This reads very much like so-and-so. In fact, it is a so-and-so. Where did you get it?” That is, of course, if so-and-so had a publishing history and any popular recognition. If not, then the agent or editor would probably say, “This is … interesting,” and probably be thinking, “Ewww!”

The Type I writer will seldom, if ever, suffer the pangs of wondering if the book he is working on is any good or if it’s just a bog and a mishmash. But at the same time he will seldom feel the joy of discovering that the book is better than expected and that all the parts—in hindsight, miraculously—work. Type II writing is an experiment, a passage into the unknown, a risky business. But its rewards—spiritual if not always monetary—can be vast.

I have a talent for knocking out words. I made money at it by writing and editing engineering proposals, biotech procedures, press releases, annual reports, employee communications, and so on. All of it was work for hire, all formula work. But my fiction is my own. Yes, I wrote some novels to other authors’ outlines while I was with Baen, but before I could write those books I had to delve into the story, explore it, add it to my psychic DNA, and bring it back as something I could see as a book.4 I may never make much money at fiction. But there are plenty of people trying to make money by knocking out the modern equivalent of pulp or dime novels as Type I writers. The publishing world is full of second-rate publishers and third-rate agents who are looking for them to produce “content.” I dream of making stories that will be remembered for the ages.

Which type of writer are you? If you’ve done any writing at all, then you know that without being asked.

If you haven’t done any fiction but feel you have a talent for writing, then all you can do is write. Describe the things you see around you. You may have done some painting or music in the past; now bring that vision or that ear into words. Practice plot structure by writing down a daydream as the outline of a story: What happens first? What happens second? What are the consequences? Practice dialogue by picking two characters—even characters you’ve met in other books or in the movies—and imagining a conversation between them that is focused on some topic: How do we get out of this locked room? Who gets the last bite of the apple? Practice voices by taking a writer you like and trying to write in his or her style.

Write a little bit every day as an exercise. Collect story ideas and put them into computer folders. See if things start coming together around these ideas. This is how novels are born. Don’t bother too much with writing courses or author groups. They may give you some ideas, but they can also fill your head with a lot of stupid rules and prejudices.5 If you like to read books, then you already have buried in your head the essence of story, character, dialogue, and all the rest. Every writer is self-taught. Every writer finds his or her voice by admiring, emulating, and adapting.

I’ve been writing since I was sixteen. But if you haven’t been writing that long, is it too late to start? Well, there’s a story about a woman who reached the age of 100 and was interviewed by a bright young reporter. The reporter asked if the old lady had any regrets. The woman said yes: she regretted not taking up the violin at age sixty, because by now she would have been playing for forty years. We’re all going to live longer, what with healthier lifestyles and advancing medical techniques. If you start now, you’ll have a whole shelf full of books by the time you’re 100.

1. See Zen and the Artist from May 19, 2013.

2. See the complete list in Science Fiction.

3. I believe this is why writing teachers tell students to “kill your darlings.” Eliminate those passages you’ve labored over and feel they really express your idea, because you’ve made those passages stand out from the flow of the text. I also think this is what journalists learn, so that their prose won’t stand out—or stick out as a gaudy bit of color—in the mechanical sameness of the newspaper’s other stories.

4. This process was also amazingly instructive. I learned something about how the great writers gather ideas and build stories. And trying to write with the pace, tone, and voice of these other authors was a writing class in itself.

5. In one author’s group I attended thirty years ago, a particular member was death on “lists.” She always wanted any group of three or more items reduced to one or two, even if the intent was to show variety and richness of detail. (“ ‘They shopped for a long stay at the cabin, buying eggs and cheese, boned ham in a can, a loaf of whole wheat—supplemented with boxes of saltine crackers—two cartons of milk, a case of soda, and a package of those tiny star-shaped cookies the children loved.’ ” “Now that’s a list! Pick two of those things and just mention them.”)