Be warned, this is a rant.1 This is where Crazy Old Uncle Thomas gnashes his dentures, pounds his cane on the floor, and screams things you probably don’t want the children to hear. But I’m going to say it anyway.
First of all, let me say that I love science and technology. Although I never formally majored in any scientific discipline, I am the son of a mechanical engineer, took the basic science and math courses in high school and college, and have worked alongside and reported on the activities of scientists and engineers for most of my professional life. I currently subscribe to a number of science magazines2 and, while I don’t necessarily read every article, I make a point of studying the contents, reading the summaries and articles that interest me, and skimming the rest. I believe the enterprise of science, which humanity has been pursuing diligently since about the 17th century, has made human life immeasurably better in terms of our understanding of the universe, this planet, and ourselves. We have vastly improved our practice of information management, communications, transportation, medicine, and everyday convenience over earlier times. So I’m a fan.
But that doesn’t mean I am a “true believer” in anything and everything. And I’m not an unobservant fool. In the past, oh, twenty years or so, I have noticed a disturbing trend at the leading edge of scientific inquiry that seems almost “postmodern” in its approach. We appear to be in the hands of scientists who have gone over to some kind of scientific fantasy, which replaces observation and fact-based analysis with imagination and mathematical illusion. Here are three examples.
Black holes are predicted by Einstein’s Theory of General Relativity. If you concentrate enough matter in a small enough space—say, by collapsing a massive star in on itself—that mass bends spacetime so much that not even light can travel fast enough to climb out of the gravity well. We have identified stellar objects, such as Cygnus X-1, that appear to have properties consistent with concentrating the masses of tens of suns into a space where no star can be detected. We also have observed effects at the center of our own and other galaxies suggesting that they concentrate the masses of billions of suns in what appears to be empty space.3
Well and good. Something strange is going on, and it would seem to fit with our present and most accepted theory of how time, space, and gravity work. But I have begun to see in the literature suggestions that black holes are not just bottomless garbage bins from which nothing—not even the fastest thing in our universe, the photon, the particle that makes up light and other electromagnetic radiation—can escape. Black holes are now supposedly able to give up energy and radiation, such as when small ones “evaporate” through Stephen Hawking’s mechanism of particle/antiparticle pairs that appear and disappear spontaneously near the event horizon. And lately it has been suggested that matter and information can actually come out of a black hole: supposedly, the information is turned into a two-dimensional hologram that continues to exist on the outer surface of the event horizon and can theoretically be retrieved.4
So black holes don’t really have to be black at all. Doesn’t this smack of “I have a novel idea and I can generate the math to prove it”? A black hole is, after all, a theoretically constructed object for which our observations and analyses are frustratingly distant and indirect. That is, they are less imaginary than a unicorn but also less real, from the standpoint of hands-on study, than a horse. So scientists are now embroidering the edges of a theoretical tapestry. This is not necessarily advancing our understanding of what the universe, in all its strangeness, actually is.
While General Relativity deals with galaxies and stellar-sized masses, quantum mechanics is concerned with particles and forces too small to see with the naked eye—and most of them too small to observe or directly detect using any instrument at all. With its Standard Model, quantum mechanics has generated a menagerie of subatomic particles and their associated fields—that is, forces spread over the surrounding area as a theoretical stand-in for the physical particle and its effects. Most of these particles are in the lower range of size where, if you can detect one at all, you also deflect it. That is, you can know where the particle is, or where it’s going, but not both at the same time.
Most of the particles smaller than the protons, neutrons, electrons, and photons that we’re all familiar with from high-school chemistry have been found in high-energy colliders. These take two beams of common particles traveling at near-light speeds in vacuum and run them together head-on at higher and higher energies. The resulting train wreck gives off fragments traveling at speeds and energies that can be mathematically interpreted as having a given mass. By conducting the experiment over and over and comparing the results—usually in the form of flying pieces which quickly disintegrate into ever smaller pieces—physicists can identify new particles. So far, everything they’ve discovered either fits into, or expands, the Standard Model’s pattern of masses, spins, interactions, and symmetries that include the elementary particles: the leptons such as electrons, positrons, and neutrinos; the bosons such as the photons, gluons, and the Higgs boson, plus the still-hypothetical graviton; and the quarks—in their varieties of “up,” “down,” “charm,” “strange,” “top,” and “bottom”—that make up larger things, the hadrons, such as protons and neutrons. It was by smashing beams together, over and over again, that physicists at CERN’s Large Hadron Collider discovered the disintegration trail of the Higgs boson in 2012.
All well and good. But now quantum mechanics is predicting that some of these particles can become “entangled” over unusually large distances. That is, two electrons or quarks or even large molecules may be separated by distances so great that light or gravity effects would take a measurable amount of time to travel between them, but they can still interact instantaneously. Measure the position, momentum, spin, polarization, or some other characteristic of one member of the pair, and the corresponding characteristic of the other is instantly determined. This would seem to violate the basic principle in relativity that nothing—not information, not energy, not influence, not gravity effects—can move across the universe faster than the speed of light. If the Sun were to suddenly vanish from our system—poof!—it would still take eight minutes for our view of the Sun from Earth to wink out and for our planet to give up its angular momentum and start heading out into interstellar space in a straight line.
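That eight-minute figure is nothing more than the light-travel time over the mean Earth–Sun distance, and a few lines of arithmetic confirm it. Here is a quick sketch (in Python, purely for illustration; the constants are the standard values for the astronomical unit and the speed of light):

```python
# Back-of-the-envelope check of the eight-minute figure:
# light-travel time over the mean Earth-Sun distance.
AU_KM = 149_597_870.7    # one astronomical unit, in kilometers
C_KM_S = 299_792.458     # speed of light, in kilometers per second

seconds = AU_KM / C_KM_S
print(f"{seconds:.0f} seconds, or about {seconds / 60:.1f} minutes")
# prints: 499 seconds, or about 8.3 minutes
```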
Unless, of course, some particles in the Sun and their correspondents on Earth—no saying which ones, of course—were quantumly entangled, and then we would know of the disaster instantly by observing the corresponding particle here on Earth. So the physicists with this bright idea and the math to prove it have found a way to overcome the traditional prohibition on instantaneous action at a distance. Like wormholes and subspace radios—both of which can supposedly shortcut the vast distances of interstellar space—all of this seems a bit wishful and fanciful.
Catastrophic Global Warming
Okay, here’s where Uncle Tom goes nuts. Of course, climate changes. Any decent appreciation of astronomy, geology, evolution, and the other hard sciences confirms that we live under a variable star on a changeable planet. Eleven thousand years ago—when members of H. sapiens had fully attained our current level of mental and physical capabilities—we came out of an ice age that covered most of Eurasia and North America with ice sheets a mile thick and drew the ocean levels down by about four hundred feet to the edges of the continental shelf. In recorded history we have the Norse traveling to “Vinland” in North America a thousand years ago and finding grapevines in Newfoundland, suggesting that there really was a “Medieval Warm Period.” We also have historical observations from the middle of the last millennium suggesting that humankind experienced a “Little Ice Age,” with much colder climate and “frost fairs” held on European rivers that had frozen over, where now they run freely all year round.
We have been tracking sunspot cycles since Galileo first reported seeing spots on the Sun with his new telescope in 1610. Then, between about 1645 and 1715, the Sun went into a quiet period now called the “Maunder minimum,” named for the scientist who first described it.5 Since sunspots increase the star’s release of energy, the number of spots at any given time affects the amount of energy arriving on Earth. From observations over the past four hundred years or so, we have detected within the eleven-year sunspot cycle a larger, four-hundred-year cycle of rising and falling eleven-year peaks. Our last three solar cycles were unusually large in terms of this greater cycle, heading toward a four-hundred-year maximum, while our current cycle that’s just ending, identified as Cycle 24, generated only about half as many sunspots as those previous peaks. Whether we’re heading toward another Maunder minimum or just seeing a freak aberration in this one cycle is not yet apparent. But the 17th-century minimum—and the presumed period of declining spots leading up to it—would seem to correspond to the Little Ice Age, and the recent peaks we’ve experienced would seem to correspond to our recent Industrial Age warming spell.
In 1987, I attended Energy Daily’s annual conference in Washington, DC, which discussed issues related to energy production and use. One of the speakers was James Hansen, then head of the NASA Goddard Institute for Space Studies, who presented on the role of carbon dioxide from our energy and transportation industries in increasing global temperatures. One of the points he made was that rising temperatures would not mean that everywhere on the planet would become uniformly and increasingly hotter, but instead some places would get hotter, and others colder, as fluctuations in the climate’s response worked themselves out. But this does kind of leave exact measurement of the system and the extent of the damage open to question, doesn’t it? Another of James Hansen’s points that I remember vividly was that “the man in the street” would be able to see these temperature changes for himself by “the middle of the next decade”—meaning the mid-1990s. Well, I’ve been living in the San Francisco Bay Area for almost half a century now, and my sense from “the street” is that some years are colder and some warmer; some have more rain and some less; the fog still rolls in each summer, making May and September our hottest months; and we still tend to turn the wall heaters on from December to February. If there’s been an obvious change in our weather patterns, indicating a change in climate, I have yet to see it.
In support of global warming or climate change—and the call of climate scientists to make urgent and drastic changes in our energy production and use—Michael Mann of my alma mater, Penn State, produced the “hockey stick” graph. He used recorded temperature observations for as long as we’ve been taking them—and NASA keeps “adjusting” the raw data of these observations downward for the early to mid 20th century—and for the time before that he measured variations in tree rings—which I always understood responded to changes in ambient moisture rather than temperature. His graph shows the period from about 1000 AD up to current times, but curiously it smooths out the fluctuations of the Medieval Warm Period and Little Ice Age. On his graph, temperatures bump along in neutral for a thousand years until the last hundred years or so, when they start taking off.
Since we cannot study climate as a complete system—hell, we can’t even predict the weather much farther out than next week—and since we can’t experiment with effects that encompass land, sea, and sky all at once, climate scientists instead create models of what they think is going on. Models are mathematical structures that assign variables to different effects like incident sunlight, factors governing land and water absorption and re-radiation of the infrared waves, and atmospheric conditions that govern absorption of the outgoing radiation—the “greenhouse effect.” Carbon dioxide is a weak greenhouse gas, not as good at blocking that re-radiation of heat into space as are, say, water vapor or methane. The climate scientists’ models that predict dire effects in the next century all rely on a positive feedback loop, what they call a “forcing,” in which the carbon dioxide that’s been added to the atmosphere increases the amount of water vapor—and that achieves the predicted greenhouse effect and rising temperatures.
This whole scenario seems problematic to my mind for four reasons. First, models are not testable science. They fall into the realm of “I have a good idea and I can generate the math to prove it.” Since climate involves too many influences and variables to predict accurately, the model makers are forced to choose which ones they will study and which to ignore or hold to a constant value. Second, if your model depends entirely on positive feedbacks, you’re missing something. Feedbacks are generally both positive and negative; for example, more water vapor might mean more greenhouse gas blocking re-radiation from land and sea, but it might also mean more clouds, which block the incident radiation and so result in cooling temperatures. Third, all of these models appear to be acyclical. That is, they assume straight-line effects that continuously build and reinforce each other. Once the carbon-dioxide influence takes off, it is predicted to continue upward forever. But everything we’ve seen about Earth science involves cycles of rising and falling effects—temperatures, rainfall, storms, ice. More carbon dioxide should eventually force an increase in other factors, like promoting an increase in green plants, which would then absorb that excess carbon. You might adjust the set point somewhat, but no effect goes on forever. Fourth and finally, the observed temperature rises seemed to slow down in the early 21st century, and none of the climate models could account for that—nor indeed for variations observed earlier in the 20th century.
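The point about feedbacks can be made with a toy calculation. The sketch below (in Python, purely for illustration; the numbers are invented and this is in no way a climate model) iterates a temperature anomaly under a constant forcing: with a net feedback gain below 1, the anomaly settles at a finite value, while at 1 or above it runs away without limit.

```python
# Toy illustration (not a climate model): a temperature anomaly driven by
# a constant forcing plus a feedback proportional to the last anomaly.
def run(gain, steps=50, forcing=1.0):
    """Iterate t = forcing + gain * t. A net gain below 1 settles toward
    forcing / (1 - gain); a net gain of 1 or more grows without bound."""
    t = 0.0
    for _ in range(steps):
        t = forcing + gain * t
    return t

print(run(0.5))   # damped: settles near forcing / (1 - 0.5) = 2.0
print(run(1.1))   # runaway: already past 1,000 after 50 steps
```

In this framing, a negative feedback such as the cloud effect mentioned above simply lowers the net gain and damps the response, which is why a model built only from positive feedbacks tends toward the runaway case.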
I do not deny that climate does change. I do not doubt that human activity has some effect on the changes. But I doubt that the effects will be as uniformly catastrophic as the models predict. And even if they are, human beings are geniuses at adapting to change. We lived through the Little Ice Age with far less understanding and technological capability than we have today. We’ve expanded our reach over the whole globe—except for Antarctica, where there’s nothing much we need or can live on—and we are now going into space, which is the most hostile climate of all. I think we can move uphill a bit as the sea levels rise over the next hundred years, and we can adapt our buildings, our agriculture, and our lifestyles to an overall increase of a couple of degrees. Besides, as our technology keeps developing and changing, we are bound to see new energy production and usage patterns arise and sweep across the economy faster than a government mandate could ever achieve. Look what smartphones have done to telephone landlines and the recording industry in less than a decade. The pace of technological change and its acceptance will only increase.
Astronomy, physics, and the geosciences have achieved much for humanity, and I have no doubt they will achieve even more in years to come. But that does not mean that every scientist with a nimble imagination and a penchant for writing equations and mathematical models should be granted the mantle of impeccable truth. Human life on Earth is not going to change much, no matter what astronomers predict about black holes, or quantum physicists predict about subatomic particles and their entanglement. And we’re not going to dismantle our modern energy production and use patterns just to head off a rise in temperature of a couple of degrees a hundred years from now.
Here ends the rant. Uncle Tom is now back in his chair, mumbling quietly to himself.
3. I made a personal study of black holes in preparing to write my first novel, The Doomsday Effect, published in 1986.
4. See, for example, “Stephen Hawking has found a way to escape black holes” from Wired, August 25, 2015.
5. I also made a personal study of the Sun and its cycle of spots to write the novel Flare with Roger Zelazny, published in 1992.