Sunday, June 29, 2014

Science and Computer Modeling

I believe the issue of anthropogenic global warming (AGW) raises an important question about the view our society has of science. Specifically, are we to accept a computer model of a complex system as proof of a hypothesis about that system? I don’t think that’s too hard a question to understand, is it?

Science—at least as I was taught in school—is the business of observing a situation, making a hypothesis about what might be going on, devising an experiment that will prove or disprove that hypothesis, running the experiment, and analyzing and interpreting the results. Generally, it is easier to disprove a hypothesis (“No, your guess was wrong”) than it is to prove it (“Yes, that’s exactly what’s going on”). During the processes of making observations and formulating hypotheses, scientists may use computer models to help them try to isolate effects, generate ideas, and increase understanding. But can they then use those same models to prove the hypothesis? Can they create a model that is so accurate it moves from an illustrative point of view to an actual prediction?

I can well understand why climate scientists rely on computer modeling. It is physically impossible to run meaningful, full-scale experiments with effects that may take years or decades to develop across the entire planet and its sky. Still, the question remains: is modeling an appropriate stand-in for physical experimentation?

Computer modeling is by its very nature the act of paring away. When a system is so large and complex that it appears to be chaotic, the modeler selects one or more variables to study in relative isolation. Other variables are then held in a kind of stasis covered by the Latin phrase ceteris paribus, or “other things being equal.”1 With that selection made, the modeler can try different scenarios, increasing some variables in the model, reducing others. Of course, to run these scenarios, the modeler needs a relationship between the variables under study. That relationship is usually an equation or an algorithm: if this variable increases, then that variable decreases, or increases, or remains unchanged. Without a mathematical expression of these relationships, the model can’t be run on a computer.
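The structure described above can be sketched in a few lines of code. This is a deliberately trivial illustration, not any real climate or economic model: the variable names, the linear equation, and the coefficients are all invented for the purpose, and the "other things" are simply frozen at assumed baseline values.

```python
# A minimal sketch of "ceteris paribus" modeling: many variables exist,
# but only one relationship is encoded as an equation, and everything
# else is held at fixed values. All names and numbers are hypothetical.

def toy_model(x, others=None):
    """Predict an output from the one studied variable x, with all
    'other things' frozen at assumed baseline values."""
    if others is None:
        others = {"a": 2.0, "b": 5.0}   # the 'ceteris paribus' variables
    # The modeler's chosen relationship: here, a simple linear equation.
    return others["a"] * x + others["b"]

# Scenario runs: increase the studied variable, observe the output.
scenarios = [toy_model(x) for x in (0.0, 1.0, 2.0)]  # -> [5.0, 7.0, 9.0]
```

Everything interesting about the real system lives in the two choices made before the first run: which variable becomes `x`, and what equation stands in for the actual relationship.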

So we have two cases where the modeler manipulates the presumed reality of the subject being modeled. First, he or she has selected some variables to study and decided to hold others at fixed values. Second, he or she has reduced the interaction of those variables to a mathematical relationship, an equation or an algorithm, a human construct, that may or may not reflect the actual relationship or account for the action of other variables which, by agreement, have been excluded.

I do not discount modeling as a powerful tool and an aid to understanding. Every economist, weather forecaster, stock picker, and engineer must run some kind of a model in order to study relationships and make predictions.2 The model may be either an actual program running on a computer or a virtual program—the product of a trained intuition—running inside the predictor’s head. But these are still study aids, and the prediction must be presented as a probability, because the system is still chaotic and all those ceteras are not actually “paribus.”

We make models because the actual system is too varied, with too many complex sideshows operating all at once, to track in real time. To begin with, many of the relationships may so far be either under-studied or unknown. But even where the relationships are fairly well understood, by their very nature the outcome must be probabilistic. That is to say, when two air masses collide in a storm, or two buying decisions intersect in a marketplace, or two protons impact in an atom smasher, the results cannot be predicted except as a probability in one direction and a greater, equal, or lesser probability in the other. It's not that the model is inadequate or that a better set of mathematics would clear up the confusion; however thoroughly the situation is studied, the results cannot be stated with absolute certainty.
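The probabilistic point can be made concrete with a toy simulation. The 0.6 bias below is an arbitrary assumption standing in for whatever physics or economics governs a single interaction; the sketch shows that repeated runs recover the probability, while no run predicts any single outcome.

```python
import random

# A sketch of why chaotic interactions yield probabilities, not
# certainties: many repetitions of the same stochastic "collision"
# give a stable frequency, but no single result is predictable.
# The bias p_up=0.6 is an invented, purely illustrative number.

def collide(rng, p_up=0.6):
    """One interaction: the outcome goes 'up' with probability p_up."""
    return 1 if rng.random() < p_up else -1

rng = random.Random(42)          # fixed seed so the run is repeatable
runs = [collide(rng) for _ in range(10_000)]
share_up = runs.count(1) / len(runs)
# The best the model can state is a frequency near 0.6 -- never
# which way any particular interaction will actually go.
```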

Too often, as in the case of a weather front or a marketplace, the interaction of all parts of the system is so complex that the only way to model it correctly would be to create a one-for-one reproduction of the system and play it out in real time. And then the assignment of probabilities to the imponderable interactions would still skew the results.3

Finally, one of the choices a modeler must make is the issue of sensitivity. It’s not enough to say that one thing affects another. You must also say how likely that effect is to occur. In place of those “other things” which can accelerate or retard an effect, and which you have chosen to hold as equal, you must assign a probability to the effect you are trying to model. Altering this element of sensitivity in your algorithm can make the model sluggish and not at all likely to react to the effect you’re modeling, or it can set up a “tipping point,” as if the whole system were set on a pinhead and likely to topple in any direction with the smallest push. The art of model making lies in assigning these sensitivities.
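The sluggish-versus-tipping distinction can be shown with a one-line feedback equation. This is a hypothetical iteration, not drawn from any published model: the same forcing is fed through the same rule, and only the gain parameter changes.

```python
# A sketch of model sensitivity: identical forcing, identical equation,
# and only the gain differs. Below 1.0 the response settles; above 1.0
# every step amplifies the last -- a "tipping point." All numbers are
# invented for illustration.

def respond(gain, forcing=1.0, steps=50):
    """Iterate x <- gain * x + forcing and return the final state."""
    x = 0.0
    for _ in range(steps):
        x = gain * x + forcing
    return x

sluggish = respond(gain=0.1)   # settles near forcing/(1-gain), about 1.11
tippy    = respond(gain=1.1)   # runs away: each step amplifies the last
```

The whole character of the model's behavior, placid or catastrophic, turns on that single chosen number. This is what "the art of model making lies in assigning these sensitivities" means in practice.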

Inevitably, through choice of variables, probabilities, and sensitivities, the modeler emerges with not one model, one representation of the complex system, but many different models, many possible outcomes. And then the modeler must choose which of them is the most likely to reflect reality. I can’t imagine that this would be an unforced choice, free of some bias or preconceived liking for a certain outcome.

To apply all this to the anthropogenic global warming debate, I would offer some observations:

First, the noted rise in temperatures during the last part of the 20th century may have other causes that are not examined in the atmospheric models of greenhouse gases. For example, the most recently completed eleven-year sunspot cycle—number 23 in the count begun after the Maunder Minimum of the 17th century, peaking around the turn of the century—was noticeably stronger than previous cycles. And the cycle that we're currently in—number 24, with a peak we are just now leaving—has seen sunspot activity that is significantly lower than the previous cycle's. Astrophysicists have determined that a spotted Sun is a warmer Sun, and when the spots fade at the bottom of the cycle, the Sun's energy output is measurably lower. This seems to coincide with the cooler recorded temperatures on Earth for most of the 21st century.

Another possible cause of the rising temperatures may well be human-made. Most of the recorded temperatures are taken in and around cities. The urban heat-island effect—where paving and rooftops absorb and re-radiate the Sun's heat—is well documented. But the last half of the 20th century has also seen a huge increase in the use of air conditioning. I intuited its effects while standing outside our hotel in Phoenix, Arizona, two years ago and feeling a blast of heat from the condenser units along the building's rear wall. If you're going to cool a large interior space like a hotel, office complex, or sports arena, you're going to export a comparable amount of heat—plus the energy spent moving it—to the outside environment. If you air-condition the whole city, the air outside has to become hotter.
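The Phoenix intuition has a simple first-law arithmetic behind it. An air conditioner rejects not only the heat it removes from indoors but also the compressor's work on top of it: with a coefficient of performance (COP) defined as heat removed per unit of work input, the heat dumped outside is the heat removed times (1 + 1/COP). The cooling load and COP below are assumed, typical-looking figures, not measurements of any actual building.

```python
# A rough sketch of the heat-export arithmetic. COP = heat removed /
# electrical work input, so by conservation of energy the condensers
# reject heat_removed * (1 + 1/COP) to the outside air.
# The 300 kW load and COP of 3.0 are hypothetical round numbers.

def heat_rejected(heat_removed_kw, cop=3.0):
    """Heat dumped outdoors, in kW, for a given indoor cooling load."""
    work = heat_removed_kw / cop          # electrical input
    return heat_removed_kw + work         # first law: all of it exits

exported = heat_rejected(300.0)  # hypothetical hotel load -> 400.0 kW
```

So a building removing 300 kW of heat from its interior is pouring 400 kW into the street: the indoor heat, plus a third again in compressor work.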

Second, temperature variation over the long haul has not been measured but merely proxied. From the centuries before thermometers were invented and used, we have anecdotes about harvest yields and home heating and clothing options, as well as descriptions of falling snows and freezing rivers. From this source we can infer a Roman Warm Period, a Medieval Warm Period, and a Little Ice Age in Europe. To counter these anecdotes, climate scientists use proxy measurements like the chemical analysis of ice cores from glaciers, pollen counts in sediments, and the width of tree rings. But those are still stand-ins for actual temperature readings, and trees may grow rapidly or slowly, and plants shed more or less pollen, for a variety of reasons, only some of which may be related to absolute temperature.

Third, indications from historic temperatures and amounts of atmospheric carbon dioxide suggest that the two may be related. But the relationship may not reflect cause and effect. An old logical fallacy4 notes that correlation is not necessarily causation. In any event, carbon dioxide is a weak greenhouse gas, and its effects on temperature are usually modeled as a “forcing”—that is, the gas sets up conditions where the effects of other, stronger greenhouse gases such as water vapor and methane are multiplied by its presence through positive feedback. Such studies do not seem to include the possible negative feedbacks, such as higher temperatures and abundant carbon dioxide increasing the growth of green plants and so absorbing greater amounts of carbon dioxide from the atmosphere.
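The forcing-and-feedback bookkeeping can be sketched with a standard steady-state feedback formula. This is a cartoon, not a climate model: the feedback coefficients are invented, and "plant uptake" stands in for whatever negative feedbacks a fuller model might include.

```python
# A sketch of the feedback point above: the same forcing, amplified
# only by positive feedback, gives a larger steady-state response
# than when a negative (damping) term is also included. Solving
# dT = forcing + (pos - neg) * dT gives dT = forcing / (1 - pos + neg),
# valid while pos - neg < 1. All coefficients are invented.

def equilibrium_response(forcing, positive_fb=0.5, negative_fb=0.0):
    """Steady-state response to a forcing under net feedback."""
    return forcing / (1.0 - positive_fb + negative_fb)

amplified = equilibrium_response(1.0)                  # positive only: 2.0
damped    = equilibrium_response(1.0, negative_fb=0.2) # with damping: ~1.43
```

Whether the model includes that second argument at all is exactly the kind of choice—made before any scenario is run—that the earlier paragraphs describe.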

In my view, computer models that predict anthropogenic global warming and sea level rise are not much different from economic models. They emphasize one or two factors while holding others as neutral or steady. Economists cannot account for all the decisions and interactions and their effects in a human economy in the same way that meteorologists cannot account for all influences, interactions, and effects on atmosphere. So to say that a certain percentage of carbon dioxide in the atmosphere at the start of the 21st century will lead to a rise of two or three degrees in composite global temperatures in the 22nd century is like saying that the expansion of the money supply or the dilution of share prices this year will yield a rise of a certain number of points in the Dow Jones Industrial Average a century from now. It’s a good guess—but one that might not even be very good.

The confusion about this kind of scientific prediction seems to be in three parts. The first is confusion as to whether computer models can prove and disprove hypotheses, or simply shed light on what might be going on in the complex system from the narrow analysis of a few factors.

The second confusion is the latent assumption that models running on the fastest new computers, with the most complete algorithms, should by now be so accurate and powerful that they can make predictions about the future that everyone else needs to accept and follow. That is, the models don't just show one possible future among many possible scenarios and outcomes; instead, they show the only possible future—the one that we will experience, the future that must arise.

And the third confusion? That because science has been so successful and powerful in our lives up to this point, its conclusions must necessarily be a true and accurate picture of the systems it studies. And so, by extension, if computer modeling is something that scientists do, then the results of those models must reflect scientific conclusions—that is, proof of a hypothesis.

And my sense of how science works is to say … no.

1. I have also heard this phrase expressed as “all other things being equal,” which is fair enough. But occasionally people translate it as “all things being equal.” That is, of course, absurd. All things cannot be equal. If they were, the system would be locked into immobility.

2. With the exception of the weather forecaster who’s working with real-time satellite imagery. It does not take a conscious or sophisticated computer model to look at a storm sweeping across the country and predict where it will be in the next twelve to twenty-four hours. Instead, you do it by measuring time, speed, and distance. Of course, weather is still chaotic, and storm fronts can sometimes veer or stall, but you can get awfully close a lot of the time.

3. Generations of geniuses have thought they could predict the action of chaotic systems, reducing the outcome to a single, safe bet. Investors try to do this with risk all the time. They hedge their bets with countervailing bets—options, swaptions, collateralized debt obligations—that are supposed to leave them happy no matter what the market does. Interestingly, risk seems to be like a constant pressure in the system, and containing it is like trying to stuff an inflated balloon into a loose basket: it always pokes out somewhere else. For those who think they can tame risk with a better set of mathematics, I have four watchwords: Long-Term Capital Management, the hedge-fund company that used absolute-return strategies and high leverage trying to beat the market—and collapsed in the late 1990s.

4. In Latin, post hoc ergo propter hoc, or “after this therefore because of this.” Things that happen together, either before or after each other, are not necessarily related by cause and effect. This is easy enough to see in a simple system—for example, you drop a dish and then the doorbell rings—but much harder to understand in complex, chaotic systems.

Sunday, June 22, 2014

A Latent Belief in Absolutes

We all believe unconsciously in absolutes. For example, when we wash something, we tend to believe that it’s absolutely clean. When we kill something, that it’s instantly dead and inert. We believe in finished projects, for which nothing more need ever be done—no later repairs or preventive maintenance. We believe in straight lines and perfect circles. We believe in separate classes of things and people—mammals here, marsupials there; a person belonging to this race or that. Even if the lines aren’t exactly straight and the individuals don’t belong, our brain perceives them this way.

Of course, in the real world, outside our heads, any regular washing technique is only partially effective. Surface dirt and visible stains may go away, but loads of bacteria and other invisible contaminants will remain. In the case of our own skin and hair, we probably couldn’t live without them. In the case of other objects, from cars to clothes, a thorough cleaning to “absolute” status would probably destroy the paint or the fabric.

Similarly, “dead” is a relative term in the first minutes after the bullet goes home or the blade comes down. Although the heart has stopped functioning and brain activity is at a minimum, most of the cells are still alive. Apoptosis—the programmed process of releasing intracellular enzymes to break down internal structures—will not start for some hours or days yet. And even then, with the right technology, the remaining DNA fragments might be mechanically copied and the phenotype resurrected for some years afterward. All our concept of death really means is that this body, as a going concern, no longer has much of a future.

The natural world, outside the minds and handiworks of human beings, has no straight lines, no flat planes, no perfect circles, no sense of completion, and no perfectly separate identities. It’s a world of fractals, of shadings, of endlessly evolving consequences, of slippery concepts imperfectly applied. The natural world offers us no Platonic ideals. In the animal kingdom there exists no perfect ideal representing “horse,” of which all actual, living horses, from ponies to Percherons, are merely crude copies.

Even at the atomic level—where scientists once thought lay the realm of absolute indivisibility—we find increasing amounts of fragmentation and fuzziness. Atoms are composed of protons, neutrons, and electrons. In beta decay, free neutrons break down into protons and electrons, along with a fragment of matter called an antineutrino. Protons themselves are composed of three quarks—two of one kind, the third of another—which all have various attributes and flavors. And quarks themselves may eventually be found to be combinations of still smaller scintillae that ultimately resolve into some kind of movement, energy, or nothingness. The more you look, the more you find, and the fuzzier your conceptions become.

Conversely, in the natural world, everything you see can be considered whole and perfect in and of itself. Any horse that has all of the necessary parts and attributes for running across the field, eating grass, passing wastes, and making more little horses is indeed a perfect specimen, a unique individual. It may be an Arabian, a Thoroughbred, a Clydesdale, or a wild pinto. It may be lame in one leg, missing an eye or a testicle, or afflicted with mange. It is still, tautologically, a horse—or zebra, eohippus, or whatever it calls itself—and is its own thing, perfect in its nature.

Everything else that we might attribute to nature—all perceptions, definitions, classifications, and evaluations—lies in the mind of one human being or shared among two or more human beings through the function of language and its corollaries of recording, transmitting, and analyzing symbolic communications. That is, almost anything we can say about the real world is a myth, a supposition … ultimately, just a bright idea.

We tend naturally to think in absolute terms: zero and one, on and off, yes and no, day and night, here and there, alive and dead. Making such distinctions is at the root of the process of discovery and learning that every developing mind follows: the hand is not the foot, a stone is not a nipple, mother is not father, floor is not bed. As language-making and -using creatures, we define our world by differences and distinctions.1

Once we have the concepts of difference and distinction down, we have already started on the road of classification: these things are foods, these are clothes. And soon after that, we start to notice multiplicity: I have two apples; if I take another one from my brother, I'll have three apples. So we enter the realm of whole numbers, a system that is—in the natural world—completely artificial. Yes, all apples look alike, as do most stones and all sheep. But careful examination shows that no two apples weigh the same, have the same color, or represent the same nutritional value. None are so alike that I will trade two large gala apples for two small pippins. I might cut one of the large apples in half and trade it for one of the smaller apples, and so we get into fractions and negatives, and eventually arrive in the realm of irrational and imaginary numbers.

The world we see around us is the product of our minds. Even what we see with our eyes and hear with our ears is a construct. The cones and rods of the retina only react to certain wavelengths of light, not to the entire electromagnetic spectrum. The tiny hairs lining the ear’s cochlea are only sensitive to certain impulses at certain frequencies passing through the air. Either organ produces only raw signals—plus a good deal of noise—that the brain must sift through and assemble into meaning. And that meaning is entirely influenced by what the brain has experienced in the past through an internal system of determination, classification, memory, and recognition.

I am not arguing that there is no real world out there beyond our skulls. If there weren’t, our brains would not have even the signals and the noise with which to work. But what I am suggesting is that each of us lives pretty far down inside a comfortable burrow of our own experience. From day to day, we see the shapes and images we expect to see; we hear the words and sounds we expect to hear. If we encounter something wholly new, not previously experienced, we fall back on association with familiar objects: It was a crash, like thunder. It was a flash, like lightning. It looked like a human being but it had—I don’t know—antennae, like an insect. It wore wings, like a butterfly.

It would help, then, to every so often get out of your own head: out of your usual books, your humdrum job, your familiar surroundings, association with the same people—and go someplace new, see unexpected sights, hear unfamiliar languages and music. It keeps the synapses fresh and crackling. It might even push your mental development back a few steps and make you young again.

1. Only later do we come to the concepts of similarity and likeness: Well, a hand is somewhat like a foot when you’re crawling on all fours—and then a knee is like a foot, too. Both mother and father are similar in being able to provide sustenance, protection, and approval. The floor is like a bed, if you’re tired enough.

Sunday, June 15, 2014

As Immortal as It Gets

People refer to their children as their immortality. If the children live and reproduce, having more children, then a person’s heritage, his or her line, some bit of his or her flesh, will continue into the future. The conscious “I” may not accompany those fragments of DNA and that bit of tissue, but a link will exist. Your existence will continue. There will be some sign that once you existed.

The process works in reverse as well. As I sit here today writing this—as you sit there reading this—you and I share flesh with every bit of life on Earth. Our line goes back, and not just among the humans, those Homo sapiens who developed in Africa and walked out among the neanderthalensis, habilis, and erectus to conquer the planet. But we also go back much farther than that.

We share the developing DNA structure and tissue makeup of all primates, all mammals. We participate in the branching that separated and developed the mammalian line from the reptiles, the reptilian line from the amphibians, and the amphibians from the fish. We go back to the first eukaryotes, the cells that sequestered their DNA inside a nucleus, picked up organelles like the mitochondria from once-independent creatures, and whose descendants learned to differentiate their cell types to support larger, many-celled organisms. We go back to the prokaryotes, the single-celled organisms that lived on their DNA and RNA organizing a soup of proteins and other chemicals inside a cell membrane. We go back to the time before the ancestral lines of plants and fungi separated from the protists that became us.

We are cousins not just to every human being that ever lived, but to every animal that roamed the forests and the plains. We are distant cousins to the dinosaurs, lizards, and mouselike mammals that survived the impact at Chicxulub. We share grandparents with fish and scorpions and spiders. We share great-great-grandmothers with the flowers and the trees. We can honor the myriad kinds of bacteria that inhabit our own bodies as distant brothers.

Just sitting here, we share the original organizing principle, the DNA-RNA-protein regime, the coding system that locally pushed back against entropy and established the first life on this planet. That means that each of us, inside our cells, also carries a pinch of that original, primordial protoplasm. It has been replenished, filled up, and refined a million, a billion times since the protista first cooked it up, but we share the same soup of amino acids and proteins that the first cells enclosed.

And if you follow certain lines of thought, which question how it came to be that all life on this planet uses that single, unique DNA-RNA-protein regime as its coding system, without a hint of any other information-recording chemicals or any of DNA’s possible variations in the evolutionary matrix, then you must begin to wonder if that system even originated on Earth.1 And if not—that is to say, if the cells and their DNA-RNA-protein regime were either seeded here or carelessly left behind in the lining of an astronaut’s glove—then we share much more. We may be distant cousins to the life out among the stars.2

This is not immortality of the self or soul or consciousness. The person who answers to your name, collects all your thoughts and memories, and has your particular hopes and dreams for the future was not and will not be involved. You do not remember being your own grandfather, either. Or a chimp, a mouse, a lizard, or a fish. And neither will the immortality going forward from your cells, through your children and grandchildren, if any, carry your personality into the far future.3 Family lines eventually die out, or they become so mixed that you would not recognize your far-future offspring. And eventually, however slowly, the human race will evolve, too, into something that will look back on you sadly as some kind of shambling ape. That, too, is inevitable.

The true immortality is to embrace the fact of life itself: the promise of the DNA that it will change, that the next generation will evolve to meet the requirements of the environment it finds. If we are all cousins in life, then the best we can do is celebrate its continuing, down through the ages to us, and further down from us to whatever our kind becomes. We will be immortal as long as the Sun shines and the Earth survives. And if we ever master the trick of going out among the stars, we may be as immortal as the universe itself.

That may be as good as it gets.

1. See the thoughts in Communicating with Aliens from July 28, 2013.

2. When you stop to think about it, seeding the stars with microbes may be the only way to colonize distant planets. Microbes are tough, can form spores when ambient conditions become too rough, and have almost no expectations. Why go to all the trouble of trying to terraform a distant world, achieving just the right balance of gravity, atmospheric gases, and pH levels to suit your particular kind of life? That’s a lot of work, takes a lot of planning and foresight, and can lead to disaster if you miscalculate even one tiny factor among hundreds or thousands of needed changes. Instead, why not “planiform” your future life? Send a microbial sample to any star within reach that has a planet in the habitable zone—that is, with liquid water necessary to support your kind of chemistry—and let the microbe and its progeny adapt from there. If the microbe survives and evolves, its succeeding generations will grow up on a planet that has everything they need. They will view that planet as a special, beautiful, gentle, God-given place, even if it remains inimical to their parent’s life form, who once lived on a distant, horribly acidic or alkaline, harshly radiated, or heavy world under a too-bright or too-dim star.

3. Immortality in personal form is a nice dream, but it does not square with reality. We would all like a few extra years of life, even a few hundred. That would give us time to really accomplish something, to develop not just skills but mastery, to focus on perfection in whatever we do—from piano playing to novel writing to beer drinking. But immortality? Life without end? Staying alive and thinking forever and ever? Waiting for the Sun to burn out? For the universe to expand into separate molecules of cold gas that never react with one another? That’s not a promise but a sentence of doom.

Sunday, June 8, 2014

The Rectification of Names

My young friends and family members on Facebook recently shared—and chortled over—an article in the San Francisco Chronicle’s online presence, SFGate, which reveals that General Motors had a list of banned words that employees must not use in referring to safety defects—um, “problems”—and the company’s program of vehicle recalls.1

This is neither surprising nor particularly funny to anyone who has worked in corporate communications, as I have in several positions over the years. What's a bit disconcerting is that GM had to go so far as to formulate and, presumably, publish internally a list of such words.2 But then, I never worked anyplace as large and far-flung as GM, where the corporate communicators have to deal not only with reporters, lawyers, technical report writers, and safety administrators, but also with people who are not conscious of or trained in the use of language. GM must have thousands of vehicle inspectors in the field and quality control technicians in the factory, who each write hundreds of reports that might find their way into the public domain and the courts. And in many cases those reports will be processed by language translators who must be conscious of, and deal appropriately with, the sort of slang people use every day in the industry, as in “Well, I guess I pronged that part!”

Correct use of words, as a matter of image and legal concerns in public statements, is common to every corporate communications department. I’ve always had my writing scrutinized by both technical experts and the Legal department to make sure that I’m presenting a fair case in every article I write and not using defamatory, judgmental, hostile, or legally damaging or actionable language. No place I’ve ever worked went so far as to publish a list of “banned” words, because the people I associated with were all professionals and already sensitive to language issues.

I doubt that anyone would have had to tell the communications team at GM not to refer to a car or its problems as a “deathtrap” or “widow maker.” In the same vein, the people at PG&E addressing the licensing and operational issues at Diablo Canyon Nuclear Power Plant simply knew to avoid terms like “meltdown” or “blast radius” when discussing the plant’s potential effects on the environment. Similarly, doctors and therapists learn in their first year as interns, if not before that in med school, not to throw around terms like “crippled for life” or “freak of nature.” The article is right: such language is hyperbolic, judgmental, and not helpful to a rational discussion of difficult issues.

Part of surviving in public relations and corporate communications is thinking through the connotations of words like “defect” and “error.” Both suggest a failure that can be attributed to someone’s personal negligence—which practically begs for a witch hunt to find that person and punish him or her. And that’s not helpful, either. Too often, matters that bring on a recall arise from issues of foresight or unforeseen circumstances. For example, a part like the ignition switches in the GM recall might have been properly designed in the first place but installed in a space with too-tight tolerances, at the wrong angle, or otherwise used in an application which the original designer had not anticipated. Whose fault is that? And how should it be punished?

Or take a real example from my own experience. On one of my BMW motorcycles, the semicircular caps at the bottom of each fork leg that hold the axle—like the bolted-on clamps on the end of a connecting rod—were just a couple of millimeters too thin to bear the weight of the bike under all loading conditions over its service life. I never experienced a problem and never heard of a catastrophic failure, but some number of these caps must have been found to have cracked when the owners brought their bikes in for service, such as during a tire change. That generates an obvious safety recall. Is the engineer who originally designed the caps at fault because he tried to make a lightweight, just-right part, rather than specifying everything twice as thick as it needed to be, so nothing could ever go wrong? I don’t think so. I’m just glad BMW Motorrad USA pays attention to these issues, isn’t afraid to issue a recall, and fixes the problem promptly, not only on bikes in the field but also in all future production. That’s why you look at the Consumers Union report on any car and buy a brand that you trust.

In such cases, rather than “defect” or “error,” it would be better—less judgmental, less prone to witch hunts and punishments—to use a word like “problem.” Anyone can understand that problems arise without blame. Problems can occur in any set of complex circumstances beyond the knowledge and foresight of a single human being or review committee. And the word suggests that a solution can be found and implemented going forward.

In ancient China they had a term for one aspect of the process of changing regimes and dynasties: “rectification of names.” It meant calling things by their proper or correct name, to establish the proper social and political relationships. Language reflects mindset, and good communications can forestall a myriad of misunderstandings and bad feelings. Wise people, those ancient Chinese.

We see something of the same impulse in the modern phenomenon of political correctness. While I’m not a fan of PC when it goes to extremes, I can understand the thinking. For example, the word “negro”—the proper term for a person of dark skin and usually of African ancestry when I was growing up—had its origins in Spanish or Portuguese and went back to the Latin niger, meaning simply “black.” That became corrupted and vilified in the “n-word,” which I now cannot write or speak in public. In my life, we have gone from “negro” through “black” as the preferred word, to African American, and each change occurred when the older word had picked up pejorative connotations, became a negative epithet, and had to be discarded. The intent is to retire words that have gone from being purely descriptive to judgmental and hurtful, rather than keep them in circulation.

Evolution in language is common. At one time the word “nice,” which we moderns use to mean sweet and inoffensive, actually meant foolish, stupid, senseless, or simple to the point of idiocy. It came from the Latin nescius, meaning “not knowing.” The word morphed through its parallel connotation of meaning “timid” in the late 13th century to become “fussy” or “fastidious” in the late 14th century, and from there, through “dainty” and “delicate,” to have the sense of “careful” and “precise” by Shakespeare’s time—as in “He has a nice appreciation of the political situation.” From that point, it was just short steps to “agreeable” and “delightful,” and from there to “kind” and “thoughtful.” Meanings are slippery things from one generation to the next.3

Where I balk at political correctness is its tendency to run ahead of current word usage and try to deflect the course of language as a political tool. This is the case, for example, when words suggesting any consciousness of racial or sexual differences are banned on the grounds that they promote racial or sexual discrimination. The idea is that if we can change the words people use, we can change their thinking. And that, in my view, is a slippery slope toward Orwellian coercion. People may be invited to a broader, more collegial and humanistic view of their fellows by appealing consciously to the “better angels of their natures.” They cannot be tricked into tolerance by denying them knowledge of the hurtful words. People are clever and will just make up new words with new connotations to carry their meaning. If you want to root out intolerance and evil—and that’s what civilizations try to do—then you have to go to the source, to the spirit, rather than the vocabulary.

Still, words are important. By using words consciously, we can choose to inflame a situation or place it in more useful perspective. “Deathtrap” inflames the emotions. “Safety problem” focuses the mind on solutions. Careful thinkers, speakers, and writers understand the denotations—the dictionary meanings—and the connotations—the emotional weight of past associations—that all words carry. That’s the stuff of clear writing and exciting, vibrant fiction.

1. See GM Recall Investigation Reveals Banned Words from May 17, 2014.

2. And then deal with the embarrassment of that list finding its way into external media and exposing the company to ridicule.

3. Or one region to another. As a child of the New York area, I was transplanted to the suburbs of Boston. All around me, my new playmates were using the word “wicked” as a term of perhaps chagrined admiration, as in “Hey, that was a wicked pitch!” To my New York ears, the word retained only its sense of pure evil, as in the Wicked Witch of the West. Slippery things, words. But I still love them.

Sunday, June 1, 2014

Hard Money, Soft Ideas

What’s the difference between an intellectual and a businessman? In my view, it’s what each one is willing to risk. The intellectual plays with ideas and reputations. The businessman plays with cold, hard cash.

I got to thinking about this the other day when the Dow Jones Industrial Average took a dip of about 150 points. This happens quite often these days, and then the market seems to make up the loss the next day—kind of like a heart in fibrillation.1 And each day, the pundits of the business news will say that investors are responding to one earnings report or another, this housing or jobs report, this or that move by the Federal Reserve, this or that event in Europe or Asia. As if they knew …

But I don’t think so. I believe each investor focuses on the stock or bond or other investment opportunity right before his or her eyes. Is it solid? Will it pay a return? What are the risks? These people don’t vote their hearts, or their notions, or their hopes and fears about the world situation. They are making a hard-money bet on a particular opportunity.

The beauty of the stock exchange and the bond markets is that they synthesize a single movement out of thousands or millions of individual choices, like the collective motion of a flock of birds or school of fish. The flock or the school has no real existence—although you can photograph it, track it, admire it. But it is still an illusion, made up of the individual movements of a thousand single birds or fish. They may be reacting to the same currents in the air or water, or to the motions of their neighbors. None of them is moving in total isolation, like a salmon swimming upstream or a bird flitting from branch to branch. But neither are they responding to the direction of a leader or an explicit instruction—not the way the geese in a vee formation ride the wingtip vortexes in the leader’s wake, while the goose on point chooses the direction and flight path.

In the same way, investors may look at price movements in the exchange—the collective effect of the decisions made by other investors in their buying and selling—but each is making his own decision with his own money. He may bet with a price rise or against it. She may “go long” on a stock, meaning she buys a low-priced stock, betting the price will go up and increase her stake. He may “short” a high-priced stock, meaning he makes a promise to sell shares which he doesn’t yet own, in the expectation that the price will fall, so that he can acquire and deliver those shares much more cheaply. The market offers all kinds of bets, and the savviest professional investors will know and play them all. But there is no stock market leader, no point goose, who dictates the daily index and issues explicit instructions to investors about buying and selling.2
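The arithmetic behind these two bets is worth making concrete. Here is a minimal sketch, with entirely hypothetical prices and share counts, showing that a long position profits when the price rises and a short position profits when the price falls:

```python
def long_profit(buy_price, sell_price, shares):
    """Go long: buy low, hope to sell high. Profit is the price rise times shares."""
    return (sell_price - buy_price) * shares

def short_profit(sell_price, buy_back_price, shares):
    """Short: sell borrowed shares high, then buy them back (hopefully lower)
    to return them. Profit is the price drop times shares."""
    return (sell_price - buy_back_price) * shares

# Going long: buy 100 shares at $10, the price rises to $14.
print(long_profit(10.00, 14.00, 100))   # 400.0

# Shorting: sell 100 borrowed shares at $50, buy them back at $42.
print(short_profit(50.00, 42.00, 100))  # 800.0

# The short loses money if the price rises instead of falling.
print(short_profit(50.00, 55.00, 100))  # -500.0
```

Note the asymmetry: the long investor can lose at most her purchase price, while the short seller’s potential loss is unbounded, since there is no ceiling on how high the buy-back price can climb.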

Investors usually have a limited store of theories and philosophies about their money. They may follow the notion of “always buy cheap and sell dear.” They may have a preference for “value stocks,” which represent established name brands in stable markets. They may have personal rules about how long to hold a stock in an unexpected turn. Some few might have ethical or environmental scruples, which guide their investment choices in companies that deal with disfavored nations, or present too big a carbon footprint, or sin against some form of social justice. But investors who try to mix their politics and religion with their money usually don’t last long. Like a gambler who bets on a horse because he likes the sound of its name, the jockey’s colors, or the number, they’re playing by a null rule set—and they will usually be beaten by investors who bet by the percentages or with a strategy based on the fundamentals of the market.

They bet the market, they bet smart, because after all it’s … money. Money has no conscience. It’s a pure play in the closed arena of the marketplace. In that way, investing is a lot like particle physics. Dollars and stock prices share a lot of similarity with atomic particles, in that they have simple natures, can be observed and tracked, follow finite rules, and react to a limited and predictable set of influences. Neither dollars nor protons possess a conscience, remember where they’ve been, think about where they’re going, respond to a higher power, adhere to the will of the gods or the dictates of fashion, or take notice of good intentions. If an investment and its prospects make sense, it will likely make money. If an electron encounters an atom with a net positive charge, it will fly toward that atom and stick.

Intellectuals seem to have a hard time understanding all this. They believe in the world of ideas. And ideas, like words, can have both their denotations—what they mean by application of their dictionary definitions—as well as their connotations—the realm of remembrance and feelings they evoke by past associations. Ideas can follow the steps of cold, dispassionate logic, or they can arouse the inner vision, the heart, the emotions, the conscience. Some ideas may be purely logical and self-evident but repellent in their net effects. Others may be hard to justify or follow rationally, but they satisfy the emotions or the imagination with their charm, their breadth of vision, or their sense of promise.3

It is easy to play with ideas. If you read the old Greek philosophers, the world seems to be divided between the hard-headed thinkers like Plato and Socrates, who knew how to evaluate a proposition logically, and the woolly sophists, who strove for effects and consequences that the structure of their thought and language simply could not deliver.4 The difficulty is that ideas usually don’t have a definitive payback in terms of consequences. They don’t have to produce a specific amount of return at a particular time.

An intellectual, an academician, a social scientist, or even an economist can develop a theory based upon its internal logic, its charm, its promise of future results, its adherence to this or that trend—and never be called to account. The intellectual can publish his or her thought processes, defend them in the arena of ideas, tinker with and “develop” them over the years, spawn corollaries and new theories—and never have to bet a penny on whether they would work or not in the real world. The intellectual never has to bet his life, his soul, his sacred honor on the consequences of his thought processes—except occasionally, such as during a cataclysm like the French and Russian revolutions. Then another intellectual with a different point of view might feed him to the guillotine or put him up against the wall. But that doesn’t happen often and, with a quick eye and a fast tongue, one can usually avoid that kind of wretched denouement.

Intellectuals don’t have to make bets with their ideas. They don’t have to pay out in cold, hard cash to discover their consequences. Intellectuals almost never go broke, lose their stakes, and have to walk away from the table.5

Investors are like physical scientists. The experiment worked or it didn’t, and the results are there for all to see. If you placed the wrong bet or misinterpreted the data, you get egg on your face right fast. You can go broke intellectually as well as monetarily.6

But that also means you can learn. You have experience of something that doesn’t work—and now you can know it doesn’t work, so you can avoid that proposition or idea or notion and its consequences in the future. This is a useful feature of dealing with facts, elementary particles, or monetary units that have limited charm, present no ethical or visionary appeal, and carry no sense of nostalgia. You can build theories based on hard-learned reality, test them, refine them, and then trust them in moving forward.

I’ll take a man who bets his last dollar or stakes his reputation on an experiment over a man who talks a good game. Every time.

1. And that’s a bad sign for the economy, I believe. I think it means we’re pushing up against some kind of natural, internal limit.

2. Although I’m sure there are people, like Warren Buffett or Carl Icahn, who think they can make and break markets with the power of their money and sometimes with just their opinions.

3. Yes, on the one hand I’m talking about capitalism and, on the other, Marxism, socialism, fascism, and all the other political “isms” that have plagued the past two centuries.

4. And then there was Aristotle, who simply collected things, made observations about them, and thought organically and structurally. He wasn’t always right—for example, he thought heavy objects fell down, because “down” was where they belonged and was part of their natures—but his ideas always made sense within the limits of his knowledge and observations.

5. Unless you’re caught falsifying your data or cribbing your work from someone else. That can get you shamed and ostracized.

6. Unless you’re playing in the shadow theater of quantum mechanics. There, it seems, you can retrieve a faulty idea simply by positing the need for another set of dimensions or a field force that lies just beyond the reach of current observations.