Sunday, June 29, 2014

Science and Computer Modeling

I believe the issue of anthropogenic global warming (AGW) raises an important question about the view our society has of science. Specifically, are we to accept a computer model of a complex system as proof of a hypothesis about that system? I don’t think that’s too hard a question to understand, is it?

Science—at least as I was taught in school—is the business of observing a situation, making a hypothesis about what might be going on, devising an experiment that will prove or disprove that hypothesis, running the experiment, and analyzing and interpreting the results. Generally, it is easier to disprove a hypothesis (“No, your guess was wrong”) than it is to prove it (“Yes, that’s exactly what’s going on”). During the processes of making observations and formulating hypotheses, scientists may use computer models to help them try to isolate effects, generate ideas, and increase understanding. But can they then use those same models to prove the hypothesis? Can they create a model that is so accurate it moves from an illustrative point of view to an actual prediction?

I can well understand why climate scientists rely on computer modeling. It is physically impossible to run meaningful, full-scale experiments with effects that may take years or decades to develop across the entire planet and its sky. Still, the question remains: is modeling an appropriate stand-in for physical experimentation?

Computer modeling is by its very nature the act of paring away. When a system is so large and complex that it appears to be chaotic, the modeler selects one or more variables to study in relative isolation. Other variables are then held in a kind of stasis covered by the Latin phrase ceteris paribus, or “other things being equal.”1 With that selection made, the modeler can try different scenarios, increasing some variables in the model, reducing others. Of course, to run these scenarios, the modeler needs a relationship between the variables under study. That relationship is usually an equation or an algorithm: if this variable increases, then that variable decreases, or increases, or remains unchanged. Without a mathematical expression of these relationships, the model can’t be run on a computer.
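To make that paring-away concrete, here is a minimal Python sketch of such a model. Every constant, every variable name, and the linear relationship itself are invented for illustration; nothing here is real climate physics.

```python
# A toy model: study one variable (co2_ppm) while holding everything
# else -- solar output, cloud cover, ocean currents -- fixed,
# "ceteris paribus." All values and relationships are invented.

SOLAR_OUTPUT_WM2 = 1361.0   # held constant by assumption
CLOUD_COVER = 0.67          # held constant by assumption

def temperature_anomaly(co2_ppm, sensitivity=0.01):
    """The modeler's construct: a linear relationship that may or
    may not reflect the actual one (hypothetical numbers)."""
    baseline_ppm = 280.0
    return sensitivity * (co2_ppm - baseline_ppm)

# Run scenarios: increase the chosen variable, hold the rest still.
for ppm in (280, 400, 560):
    print(f"{ppm} ppm -> {temperature_anomaly(ppm):+.2f} deg C")
```

Everything interesting about the real system has been pared away into two frozen constants and one made-up equation. That is the point.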

So we have two cases where the modeler manipulates the presumed reality of the subject being modeled. First, he or she has selected some variables to study and decided to hold others at fixed values. Second, he or she has reduced the interaction of those variables to a mathematical relationship, an equation or an algorithm, a human construct that may or may not reflect the actual relationship or account for the action of other variables which, by agreement, have been excluded.

I do not discount modeling as a powerful tool and an aid to understanding. Every economist, weather forecaster, stock picker, and engineer must run some kind of a model in order to study relationships and make predictions.2 The model may be either an actual program running on a computer or a virtual program—the product of a trained intuition—running inside the predictor’s head. But these are still study aids, and the prediction must be presented as a probability, because the system is still chaotic and all those ceteras are not actually “paribus.”

We make models because the actual system is too varied, with too many complex sideshows operating all at once, to track in real time. To begin with, many of the relationships may so far be either under-studied or unknown. But even where the relationships are fairly well understood, by their very nature the outcome must be probabilistic. That is to say, when two air masses collide in a storm, or two buying decisions intersect in a marketplace, or two protons impact in an atom smasher, the results cannot be predicted except as a probability in one direction and a greater, equal, or lesser probability in the other. It’s not that the model is inadequate and that a better set of mathematics would clear up the confusion. However thoroughly the situation has been studied, the results cannot be stated with absolute accuracy.
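What “probabilistic” means in practice can be shown with a short Monte Carlo sketch. The toy below is mine, not anyone’s weather model, and the noise distribution is an arbitrary assumption; the point is only that identical inputs yield a spread of outcomes, so the honest prediction is a probability.

```python
import random

def storm_outcome(strength_a, strength_b):
    """Two colliding air masses (toy version): the result is a draw
    from a distribution, not a single determined value."""
    noise = random.gauss(0.0, 1.0)  # the imponderable interactions
    return strength_a - strength_b + noise

# Same inputs every time; a spread of outcomes anyway.
trials = [storm_outcome(5.0, 4.5) for _ in range(10_000)]
p_rain = sum(1 for t in trials if t > 0) / len(trials)
print(f"Chance of 'rain' in this toy model: {p_rain:.0%}")
```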

Too often, as in the case of a weather front or a marketplace, the interaction of all parts of the system is so complex that the only way to model it correctly would be to create a one-for-one reproduction of the system and play it out in real time. And then the assignment of probabilities to the imponderable interactions would still skew the results.3

Finally, one of the choices a modeler must make is the issue of sensitivity. It’s not enough to say that one thing affects another. You must also say how likely that effect is to occur. In place of those “other things” which can accelerate or retard an effect, and which you have chosen to hold as equal, you must assign a probability to the effect you are trying to model. Altering this element of sensitivity in your algorithm can make the model sluggish and not at all likely to react to the effect you’re modeling, or it can set up a “tipping point,” as if the whole system were set on a pinhead and likely to topple in any direction with the smallest push. The art of model making lies in assigning these sensitivities.
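A hedged sketch of what altering that sensitivity does: the feedback loop below is the simplest one I can write, with made-up numbers. A gain below 1 gives the sluggish, damped response; a gain at or above 1 produces exactly the tipping-point behavior, where the smallest push runs away.

```python
def run_model(gain, push=0.1, steps=50):
    """Iterate a bare-bones feedback loop: each step, the system
    responds to the previous anomaly scaled by 'gain' (hypothetical).
    gain < 1  -> damped, sluggish response
    gain >= 1 -> tipping point: the smallest push runs away
    """
    x = push
    for _ in range(steps):
        x = push + gain * x
    return x

for g in (0.3, 0.9, 1.05):
    print(f"gain={g}: anomaly after 50 steps = {run_model(g):,.2f}")
```

The whole model stands or falls on that one chosen number.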

Inevitably, through choice of variables, probabilities, and sensitivities, the modeler emerges with not one model, one representation of the complex system, but many different models, many possible outcomes. And then the modeler must choose which of them is the most likely to reflect reality. I can’t imagine that this would be an unforced choice, free of some bias or preconceived liking for a certain outcome.

To apply all this to the anthropogenic global warming debate, I would offer some observations:

First, the noted rise in temperatures during the last part of the 20th century may have other causes that are not examined in the atmospheric models of greenhouse gases. For example, the most recently completed eleven-year sunspot cycle—number 23 since the Maunder Minimum of the 17th century and peaking in the late 1990s—was noticeably higher than previous cycles. And the cycle that we’re currently in—number 24, with a peak we are just now leaving—has seen sunspot activity that is significantly lower than the previous cycle. Astrophysicists have determined that a spotted Sun is a warmer Sun, and when the spots fade at the bottom of the cycle, the Sun’s energy output is measurably lower. This seems to coincide with the cooler recorded temperatures on Earth for most of the 21st century.

Another possible cause of the rising temperatures may well be human-made. Most of the recorded temperatures are taken in and around cities. The urban heat-island effect—where paving and rooftops absorb and re-radiate heat—is already well documented. But the last half of the 20th century has also seen a huge increase in the use of air conditioning. I intuited its effects while standing outside our hotel in Phoenix, Arizona, two years ago and feeling a blast of heat from the condenser units along the building’s rear wall. If you’re going to cool a large interior space like a hotel, office complex, or sports arena, you’re going to export a comparable amount of heat to the outside environment. If you air-condition the whole city, the air outside has to become hotter.
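A back-of-the-envelope sketch of that heat export: the cooling load and coefficient of performance (COP) below are hypothetical round numbers. The one physical fact used is that an air conditioner rejects the heat it removes plus the work its compressor does.

```python
# Hypothetical hotel: numbers chosen only for illustration.
cooling_load_kw = 1000.0  # heat removed from the interior
cop = 3.0                 # assumed coefficient of performance

compressor_work_kw = cooling_load_kw / cop
heat_rejected_kw = cooling_load_kw + compressor_work_kw  # first law

print(f"{heat_rejected_kw:.0f} kW dumped outdoors "
      f"for every {cooling_load_kw:.0f} kW of cooling")
# -> about 1333 kW out for 1000 kW of cooling: the outside air
#    gains more heat than the inside loses.
```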

Second, temperature variation over the long haul has not been measured but merely proxied. From the centuries before thermometers were invented and used, we have anecdotes about harvest yields and home heating and clothing options, as well as descriptions of falling snows and freezing rivers. From these sources we can infer a Roman Warm Period, a Medieval Warm Period, and a Little Ice Age in Europe. To counter these anecdotes, climate scientists use proxy measurements like the chemical analysis of ice cores from glaciers, pollen counts in sediments, and the width of tree rings. But those are still stand-ins for actual temperature readings, and trees may grow rapidly or slowly, and plants shed more or less pollen, for a variety of reasons, only some of which may be related to absolute temperature.

Third, indications from historic temperatures and amounts of atmospheric carbon dioxide suggest that the two may be related. But the relationship may not reflect cause and effect. An old logical fallacy4 notes that correlation is not necessarily causation. In any event, carbon dioxide is a weak greenhouse gas, and its effects on temperature are usually modeled as a “forcing”—that is, the gas sets up conditions where the effects of other, stronger greenhouse gases such as water vapor and methane are multiplied by its presence through positive feedback. Such studies do not seem to include the possible negative feedbacks, such as higher temperatures and abundant carbon dioxide increasing the growth of green plants and so absorbing greater amounts of carbon dioxide from the atmosphere.
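The arithmetic of that multiplication can be sketched with the textbook feedback multiplier, total warming = direct warming / (1 − f). The feedback fractions below are placeholders, not measured values; the sketch only shows how a positive f amplifies the direct effect and a negative f (such as the plant-growth response) damps it.

```python
def total_warming(direct, feedback_fraction):
    """Textbook feedback multiplier: total = direct / (1 - f).
    f > 0 (water vapor, methane) amplifies the direct effect;
    f < 0 (e.g., faster plant growth absorbing CO2) damps it.
    All values of f here are placeholders, not measurements."""
    return direct / (1.0 - feedback_fraction)

direct = 1.0  # deg C of direct CO2 warming, assumed for illustration
for f in (0.5, 0.0, -0.5):
    print(f"f = {f:+.1f}: total = {total_warming(direct, f):.2f} deg C")
```

Leave out the negative terms, and the model can only amplify.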

In my view, computer models that predict anthropogenic global warming and sea level rise are not much different from economic models. They emphasize one or two factors while holding others as neutral or steady. Economists cannot account for all the decisions and interactions and their effects in a human economy, any more than meteorologists can account for all the influences, interactions, and effects in the atmosphere. So to say that a certain percentage of carbon dioxide in the atmosphere at the start of the 21st century will lead to a rise of two or three degrees in composite global temperatures in the 22nd century is like saying that the expansion of the money supply or the dilution of share prices this year will yield a rise of a certain number of points in the Dow Jones Industrial Average a century from now. It’s a guess—and perhaps not even a very good one.

The confusion about this kind of scientific prediction seems to be in three parts. The first is confusion as to whether computer models can prove or disprove hypotheses, or simply shed light on what might be going on in a complex system through the narrow analysis of a few factors.

The second confusion is the latent assumption that models using the fastest new computers and the most complete algorithms must by now be so accurate and powerful that they can make predictions about the future that everyone else needs to accept and follow. That is, the models don’t just show one possible future among many possible scenarios and outcomes; instead they show the only possible future—the one that we will experience, the future that must arise.

And the third confusion? That because science has been so successful and powerful in our lives up to this point, its conclusions must necessarily be a true and accurate picture of the systems it studies. And so, by extension, if computer modeling is something that scientists do, then the results of those models must reflect scientific conclusions—that is, proof of a hypothesis.

And my sense of how science works is to say … no.

1. I have also heard this phrase expressed as “all other things being equal,” which is fair enough. But occasionally people translate it as “all things being equal.” That is, of course, absurd. All things cannot be equal. If they were, the system would be locked into immobility.

2. With the exception of the weather forecaster who’s working with real-time satellite imagery. It does not take a conscious or sophisticated computer model to look at a storm sweeping across the country and predict where it will be in the next twelve to twenty-four hours. Instead, you do it by measuring time, speed, and distance. Of course, weather is still chaotic, and storm fronts can sometimes veer or stall, but you can get awfully close a lot of the time.

3. Generations of geniuses have thought they could predict the action of chaotic systems, reducing the outcome to a single, safe bet. Investors try to do this with risk all the time. They hedge their bets with countervailing bets—options, swaptions, collateralized debt obligations—that are supposed to leave them happy no matter what the market does. Interestingly, risk seems to act like a constant pressure in the system, and containing it is like trying to stuff an inflated balloon into a loose basket: it always pokes out somewhere else. For those who think they can tame risk with a better set of mathematics, I have four watchwords: Long-Term Capital Management, the hedge-fund company that used absolute-return strategies and high leverage to try to beat the market—and collapsed in the late 1990s.

4. In Latin, post hoc ergo propter hoc, or “after this therefore because of this.” Things that happen together, either before or after each other, are not necessarily related by cause and effect. This is easy enough to see in a simple system—for example, you drop a dish and then the doorbell rings—but much harder to understand in complex, chaotic systems.
