A friend recently posted on Facebook an article about how habitable exoplanets are bad news for humanity. The general argument is that if we don’t find life on rocky planets in the habitable zone around otherwise suitable stars, that absence implies the existence of something called the “Great Filter.” This filter, supposedly, is a crisis stage in the development of any intelligent species that creates a barrier to its continued existence: nuclear war, unsustainable growth rate, biotechnological experimentation—take your pick.
The article states, “For 200,000 years humanity has survived supervolcanoes, asteroid impacts, and naturally occurring pandemics. But our track record of survival is limited to just a few decades in the presence of nuclear weaponry. And we have no track record at all of surviving many of the radically novel technologies that are likely to arrive this century.”
As I pointed out in a recent posting, “Where Are They?” from July 6 of this year, we ourselves may have been around for about 200,000 years, but only during the last hundred years or so have we had the means of communicating by radio waves or traveling above our own atmosphere. And any broadcast radio signals we have sent—as opposed to beamcast signals, which are hardier—would be subject to the inverse-square law anyway, which means they diminish to a mere whisper at a distance of a few light-years.1 The universe is quiet because the rise from replicating molecules to space-traveling organisms is difficult and takes a long time—four billion years in our case. So please give humanity a break!
I really hate this implied argument that intelligence is somehow deadly and that intelligent societies are bound to destroy themselves through nuclear war or biotechnology or some other supposedly forbidden technology. The fact that we’ve had nuclear weapons combined with global tensions for seventy years,2 that we have used these weapons in anger only once—and that was right at the start, with two of the first such bombs ever made—and that we have since developed agreements and protocols about their use in the face of worldwide proliferation, has to indicate some sort of triumph for intelligence.
The argument about the “Great Filter” is anti-humanist and feeds into the notion that humanity is some kind of pest or virus on our planet. According to the people who hold this view, our species is an unnatural and uncontrolled expression of life which is inimical to all other life on Earth and should therefore be contained, controlled, or preferably eliminated.
In this view, the more technologically oriented humankind becomes, the more dangerous we become. We might have been all right as bands of hunter-gatherers, who made their living by picking berries, knocking over rabbits with small stones, and sucking the marrow out of antelope bones—and only after those delicate creatures had already been killed by more efficient hunters such as wolves. But let humans develop spear points, throwing sticks, bows and arrows, or drums and fire to stampede the prey into our traps, and we become a blight on the landscape.
This is Luddite thinking that clings to the past. Humans in the natural state—naked and alone against the elements—are supposedly good. Humans with their inventive intelligence and its natural byproducts—meaning our weapons, our machines, our busy cities, and our noisy toys—are bad. Cloth made on a hand loom in a cottage industry is good. Cloth made on a steam loom in a factory, bad. Riding in a carriage pulled by a horse at eight to ten miles per hour, good. Riding in an air-conditioned automobile with crumple zones and airbags at sixty miles per hour, bad. And so on.
This is worse than Luddite. It’s a thinly veiled resurrection of the biblical concept of original sin. With the technologies resulting from human intelligence, our species has eaten the apples of the tree of the knowledge of good and evil. We now know and can do great harm, perhaps even eliminate ourselves and the other creatures on the planet. It then becomes imperative, in the primitive justice system of the 12th century BC, that humans not also eat of the tree of life through biotechnology and so live forever, becoming as gods. Our imminent destruction through nuclear holocaust or biotechnical plague would appear to be the antidote in this case. This is an embarrassingly primitive position for supposedly advanced thinkers who pride themselves on having given up their religion.
Are there dangers in technology? Of course. Madmen, anarchists, and perpetrators on the political fringe have used dynamite—and now plastique—to blow up cityscapes and their infrastructure since Alfred Nobel invented the stuff. Technology creates perils and wastes as well as benefits and riches. It will take time and practice to apply the principles of engineering efficiency, least cost, lowest energy, and best use to its products and processes. And finally, all war is a terrible thing, whether pursued with nuclear or conventional weapons.3 But the world is full of dangers and has been since the first naked humans stepped out on the savannah and encountered lions and tigers, poison oak and prairie fires—along with those supervolcanoes and asteroids the article mentions.
In my book, the only solution to the perils of technology is more thinking, discussion, and negotiation and, usually, a better, more refined technology. Trying to uninvent any technology—whether nuclear bombs or dynamite or the repeating rifle—for the safety of future generations is a hopeless fantasy. What has existed once will be discovered again, or superseded by an even more clever invention. Intelligence is tricky that way. Like life itself, which is the local reversal of entropy implied by the information stored in the DNA/RNA/protein domain, human-scale intelligence and its byproducts build on the past, evolve their uses and designs, and develop ever more subtle, sophisticated, perfected forms.
To wish the creative power of human intelligence back into Pandora’s box is a romantic and fruitless emotional exercise. To hope, or fear, that it will remove itself from the universe is a denial of everything that makes us human.
1. The inverse-square law states that the flux emanating radially from a point source—like a light bulb or a broadcast radio or television signal—loses intensity in inverse proportion to the square of the distance from the source. For example, if you’re standing one mile away from a lighthouse, you see the light at intensity x. If you move two miles away, the apparent intensity is not 1/2x but 1/4x, since four is the square of two. At three miles, the intensity is 1/9x, since nine is the square of three.
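In symbols, as a minimal sketch: assume an idealized point source radiating a total power P evenly in all directions (P is a label introduced here for illustration, not anything from the article). The same energy spreads over a sphere whose surface area grows as the square of its radius, so the flux at distance d is

$$I(d) = \frac{P}{4\pi d^2}, \qquad \frac{I(2\,\text{mi})}{I(1\,\text{mi})} = \frac{1}{4}, \qquad \frac{I(3\,\text{mi})}{I(1\,\text{mi})} = \frac{1}{9}.$$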
2. The article’s point about our only having survived a few decades with nuclear weapons is ingenuous. Of course we’ve only survived a few decades, because that’s our entire history with such weapons. The argument would suggest that nuclear holocaust is some kind of probabilistic event. That is, if we don’t blow ourselves up this year, then it becomes even more probable that we’ll do it next year, or the year after, or a decade from now. Such thinking totally ignores the fact that intelligence possesses the capacity for learning and growing. If we survived the first decades with nuclear weapons held on both sides of a conflict—as they were in the 1950s and ’60s—and if we grant that humans generally see nuclear holocaust as a bad thing, then with each year that passes we learn more about the weapons and their destructive power, become better at negotiating with people who have them, and improve our chances for survival. This is the intelligent course.
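To put this in rough numbers (a toy model of my own, not anything from the article): if the annual chance of catastrophe were a fixed value p, the chance of surviving n years would be

$$P(\text{survive } n \text{ years}) = (1 - p)^n,$$

which dwindles toward zero no matter how small p is. But if learning drives the hazard down year by year, to a sequence p_1, p_2, p_3, … whose sum is finite, the survival probability

$$\prod_{k=1}^{n} (1 - p_k)$$

settles at a positive value instead of collapsing. The article’s framing assumes the first model; the intelligent course described above is the second.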
3. The only good thing to be said for war is its finality. When two societies, or nations, or hegemonies are in irreconcilable conflict, the only resolution may be through total commitment of one or the other’s people in sacrificing their blood and treasure. For years, science fiction writers have imagined polite replacements for brutal, bloody war: chess games, coin tosses, computer simulations, and so on. But the question still remains: when your life, your society, your future, your freedom, your honor, or whatever else you hold dear is at stake in a chess game or a coin toss—and you lose—what then? Do you submit tamely to domination, enslavement, second-class status, tribute paying, or whatever else the winner will impose? Or do you pick up a weapon, join the militia, and fight to the death, or to the point at which your courage or your nerve breaks and you raise your hands in final surrender? War is not the best way to resolve a conflict, but it’s the last way and the only one that really counts.