Sunday, November 30, 2014

My Problems with Interstellar

I finally broke down and saw the movie Interstellar in a theater last week, rather than waiting for it to come out on disk, as I do with most movies these days. There are many fine things in this film. It has good characters, good actors, an interesting picture of love arcing across the years, and a superficially satisfying ending. But I have a few nits to pick with the actual science and, at the end, a plot hole you can drive a starship through. (Spoiler alert at this point! However, I’m assuming most of the people who care have already seen the film.)

First, the science. In a distant galaxy reached through a mysterious wormhole,1 the crew of the starship Endurance is tasked with visiting at least three planets that are candidates for a new Earth. The first planet they go to is orbiting “the rim” of a black hole. As one of the characters aboard the starship mentions in passing, this is some kind of special black hole—tame? tired? incompetent? incomplete?—I didn’t catch exactly how it was special.2 The plot point is that if they land on this planet, their experience of time will be slowed, so that one hour on the surface will equal seven years for anyone back on board the starship or on Earth. So they will have to move quickly and get out of there fast. The way they protect the starship from this time dilation is by not going into orbit around the planet itself but instead staying in an orbit around the black hole just beyond the planet’s orbit, somehow keeping pace with the planet so that travel down to its surface still covers a manageable distance.3

My major science nit is that black holes and their environment are not magical time dilators, as the film appears to suggest. Small black holes can have fierce tidal effects that tear you apart, and large ones can draw you into an orbital acceleration that tends to break up matter into an accretion disk full of plasma and particles. But the only way to have your personal time slowed so significantly, compared with “normal time” for the rest of us, is either to increase your speed dramatically or to visit an area with high gravitational acceleration. This is because general relativity makes no distinction between the effects of acceleration (going fast) and those of gravity (getting heavy). At near lightspeed or deep inside a gravity well, a person’s time slows markedly, as in the film, so that hours spent at speed or being heavy become years for someone not so accelerated. And at the speed of light itself, the traveler’s or mass-gainer’s time stops completely, relative to outside observers.4
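The speed half of this trade is the familiar Lorentz factor, which is easy to tabulate in a few lines of Python. The sample speeds are my own, picked for illustration:

```python
import math

# Special-relativistic time dilation: a moving clock runs slow by the
# Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2).

def gamma(v_over_c):
    """Lorentz factor for a speed given as a fraction of lightspeed."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

for frac in (0.1, 0.9, 0.99, 0.9999):
    print(f"v = {frac:6.4f} c -> 1 traveler hour = {gamma(frac):7.2f} outside hours")

# Speed needed to match the film's one-hour-to-seven-years exchange
# (a dilation factor of 61,362):
k = 61_362
print(f"v/c = {math.sqrt(1 - 1 / k**2):.12f}")
```

As the factor climbs toward the film’s 61,362, the required speed crowds up against lightspeed itself.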

Inside a deep gravitational field, a person’s time also slows dramatically, as in the film’s one-hour-for-seven-years exchange. But how strong does that gravitational acceleration have to be to create a noticeable effect? The Earth’s gravity, one g, slows our clocks by about 0.02 seconds per year compared to an observer hanging around—i.e., not traveling or orbiting—out in interstellar space. A clock on the surface of the Sun, at about 28 g, loses 66 seconds per year compared to clocks on Earth. So the gravitational field of a small stellar mass, such as our Sun’s, has negligible effect on a visitor’s clocks. If I vacationed on the Sun for a year—having found a solid surface on which to stand and managing not to burn up—and then came back, my relative lack of aging would hardly arouse my doctor’s suspicions. But long before that the gravity load would have flattened me, because 28 g is not physically sustainable for humans.
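Those figures are easy to check with a back-of-envelope script; in the weak-field approximation, the clock lag per unit time is roughly GM/(rc²). This is my own rough sketch with rounded constants, not a relativist’s calculation:

```python
# Weak-field gravitational time dilation: a clock at radius r from a mass M
# runs slow by a fraction of about G*M / (r * c^2). Back-of-envelope only.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
YEAR = 3.156e7       # seconds in one year

def seconds_lost_per_year(mass_kg, radius_m):
    """Approximate clock lag per year versus a distant, unaccelerated observer."""
    return (G * mass_kg / (radius_m * c**2)) * YEAR

earth = seconds_lost_per_year(5.972e24, 6.371e6)   # Earth's surface
sun = seconds_lost_per_year(1.989e30, 6.957e8)     # Sun's surface
print(f"Earth: ~{earth:.3f} s/year")   # roughly 0.02 seconds per year
print(f"Sun:   ~{sun:.0f} s/year")     # roughly 66-67 seconds per year
```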

In Interstellar, a visitor to the first planet orbiting the film’s black hole near to, but still somewhere outside of, its event horizon is said to lose 61,362 hours—seven years’ worth of Earthly hours—for each hour spent on the surface. For comparison, you only lose 0.00753 seconds for each hour spent on the surface of our Sun. By my rough calculations—and not trying to figure out radial distances and the black hole’s Schwarzschild radius—to create a time dilation of this order of magnitude, you would need a black hole with a mass 2.93 x 10^10 times the mass of our Sun.5
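Another way to feel the scale of that factor: assume an idealized, nonrotating hole and a clock simply hovering (not orbiting) at radius r, where the dilation factor is 1/√(1 − r_s/r). The film’s black hole is evidently spinning, which changes the arithmetic, so treat this as a sketch only:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

k = 61_362.0         # hours passing outside per hour on the planet

# Hovering clock outside a Schwarzschild hole: k = 1 / sqrt(1 - r_s/r),
# so r = r_s / (1 - 1/k^2) -- a whisker outside the horizon, whatever the mass.
r_over_rs = 1.0 / (1.0 - 1.0 / k**2)
print(f"r / r_s = 1 + {r_over_rs - 1:.2e}")

# Schwarzschild radius for the 2.93e10-solar-mass estimate in the text
M = 2.93e10 * M_SUN
r_s = 2 * G * M / c**2
print(f"r_s = {r_s:.2e} m (~{r_s / 1.496e11:.0f} AU)")
print(f"hover height above horizon = {(r_over_rs - 1) * r_s / 1000:.0f} km")
```

Even for a hole hundreds of AU across, the required station sits only a few dozen kilometers off the horizon.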

To orbit close to the event horizon of this monster, you would be traveling very fast, probably close to the speed of light—assuming you could accelerate fast enough to establish a stable orbit and not just spiral inward toward the event horizon. Any planet that got so close would be torn apart, rather than simply experiencing massive tidal waves. The planet’s crust would bulge monstrously in time to the planet’s rotation, unless its own rotation were first locked to its orbit, as our Moon’s rotation and revolution are locked. The atmosphere and the ocean would be torn away. People landing on this planet could ignore its puny gravity and would live under the gravitation of the black hole itself, which would smear them into a thin paste of plasma and particles.

But there was no hint in the film that the watery planet was orbiting the black hole at anything approaching the speed of light. If it were, the starship would need to match that speed if it were to orbit at a comfortable traveling distance beyond the planet’s own orbit. And whether by inertial or gravitational acceleration, the astronaut left aboard the starship would have experienced roughly the same time dilation as the crew that landed on the planet. In the film, however, this lone astronaut and the people back on Earth experience 23 years of time while the crew that lands experiences only a few hours. In any case, attaining the speed needed to match that planet’s orbit around the black hole, or to break away from that orbit later, would seem to be beyond the starship’s capabilities—or else why did it take the crew two years to travel from Earth to the vicinity of Saturn, where the mysterious wormhole awaited their passage?

Second, the plot hole. Because Cooper (Matthew McConaughey) and Brand (Anne Hathaway) spend three hours on the water planet, the story in the rest of the universe is advanced by 23 years. Cooper’s daughter Murph is now a young woman (Jessica Chastain). Brand’s father (Michael Caine) has become an old man near death. Then Cooper and Brand and the aging scientist who stayed aboard the starship go on to visit a second planet, where they are not apparently affected by any time dilation. On that planet, Dr. Mann (Matt Damon) has been faking his data about the planet’s habitability, turns homicidal when his deception is about to be discovered, tries to steal their starship, and in the process disables it.6 So, to reach the third, most distant, and yet most favorable planet of all, where Brand’s long-lost love is waiting to be rescued, Cooper and the ship’s robot assistant must drive the two remaining landers and use their thrust to slingshot the wrecked starship around the black hole to begin the journey.

Because of what I believe is a misinterpretation of Newton’s Third Law about action and reaction, Cooper and the robot agree that the starship must discard the two landers and their pilots when their fuel runs out. Or maybe they’re just shedding excess weight, because they mention that as a reason, too. The misinterpretation has to do with the difference between simply dropping off the excess material and actually accelerating it away from the starship as reaction mass under Newton’s law. If discarded weight added to your boost—the misinterpretation of Newton—then NASA missions would get a useful kick when they dropped their first and second stages, or when the Shuttle dropped its solid-fuel boosters and main fuel tank—and they don’t. Cooper and the robot then fall into the black hole, while Brand proceeds to the third planet and the discovery of a livable world.
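The momentum bookkeeping behind that argument fits in a few lines. The masses and speeds here are toy numbers of my own, not the film’s:

```python
# Shedding mass versus ejecting it: conservation of momentum says that
# letting go of a lander at zero relative speed leaves the ship's velocity
# unchanged; only mass actively thrown backward buys any boost.

def velocity_after_release(m_ship, m_drop, v, v_eject_rel):
    """Ship velocity after separating m_drop (kg) from a ship of total mass
    m_ship (kg) moving at v (m/s), with the dropped mass pushed backward at
    v_eject_rel (m/s) relative to the ship; 0 means simply letting go."""
    # m_ship*v = (m_ship - m_drop)*v2 + m_drop*(v2 - v_eject_rel)
    # => v2 = v + m_drop * v_eject_rel / m_ship
    return v + m_drop * v_eject_rel / m_ship

print(velocity_after_release(10_000, 2_000, 1000, 0))    # 1000.0 -- no gain
print(velocity_after_release(10_000, 2_000, 1000, 50))   # 1010.0 -- real reaction mass
```

This is exactly why dropped boosters and spent stages never gave NASA vehicles a kick: separation at (near) zero relative speed transfers no momentum.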

Inside the black hole, Cooper and the robot are able to experience multiple instances of time in a single place—his daughter’s bedroom, where some strange things have been happening throughout the film—and the robot is able to make observations about gravity that Cooper then communicates to Murph. The young woman has been trying to solve equations related to gravity in order to move the entire population of Earth off the planet and into space. To do this, she needs to answer some unspecified question about gravity that apparently you can only find if you’ve experienced the inside of a black hole.7

Once Cooper gives Murph the new data, he realizes that he and the robot have somehow created this whole multi-dimensional effect, the wormhole and everything else, as an expression of their own will—and this is another gray area in the story line. With this realization, the tesseract, or multidimensional cube, that they inhabit, along with every second of time passing in that bedroom, automatically begins collapsing. They are somehow ejected from the black hole, returned to their own galaxy through the wormhole, and picked up by off-planet Earthlings about 100 years into Cooper’s future. He and the robot have experienced so much time dilation that Murph is now a frail old woman (Ellen Burstyn) about to die.

But Cooper finds nothing to interest him in this new world of off-planet living inside humankind’s new O’Neill colonies.8 He commandeers a small scout ship to go back through the wormhole to find Brand on that third planet where, inexplicably, the love of her life has now died and been buried. The plot hole I find is this: if Murph has gone from her 30s to her 90s while Cooper was inside the black hole, then why wouldn’t Brand, who never entered the black hole, have similarly aged and now be an old woman? She’s shown—apparently in real time, and not just in Cooper’s imagination—at the same age as when she rode the starship away from the black hole toward the planet of her lover.

As I said, Interstellar as a film offers many fine moments, good acting, and some interesting perspectives on time and human history. But these science issues and this plot hole are serious matters for me. Any working science fiction writer who brought this manuscript to a publisher would feel slightly embarrassed, knowing that corners had been cut. Any conscientious editor would require him or her to address these problems—and fixing them would necessarily change the story in significant ways. Otherwise, the author would be left with vigorous arm-waving, insisting this is a special black hole and the new galaxy is just different. And careful, caring readers would be left sputtering, “But, but, but …” The whole project would diverge into realms of science fantasy and magic. And that’s just not satisfying in a story so strongly dependent on its use of science.

1. Okay, first minor quibble. Wormholes are accepted science fiction motifs for accomplishing faster-than-light, interstellar travel. We blink at them in movies like Stargate and in television shows like Deep Space Nine. But wormholes are mere conjecture, based on the unfounded premise that space is somehow tightly folded through alternate dimensions beyond the three—x, y, and z—that we actually perceive and experience. They are a mathematical game, not an artifact of accepted physical science. Like time travel, wormholes belong more properly to the realm of science fantasy than to serious speculative fiction.

2. From my reading to prepare for writing The Doomsday Effect, black holes are of two types: rotating and nonrotating. Other than that, the only distinguishing feature is their mass, which determines the depth of their gravity well and the size of their event horizon. At heart, the “hole” is simply an infinitesimal point, a singularity, harboring all that mass. And the event horizon is simply the distance at which the escape speed from the gravity well exceeds the speed of light. More than this, science—and all our theories—sayeth not.
       My understanding is that the spin of a rotating black hole is only important because the singularity cannot revolve around itself but instead describes a tiny circle. If you could dive through that circle—good luck with that!—you might travel outside the spreading “light cone” described by the speed of light in normal space. (I planned something of this nature for a fragment of Kornilov’s wrist bone in a possible sequel to The Doomsday Effect.) I've never heard that the spin has any effect on surrounding space, unless it is to create gravitational waves, much as from a rapidly spinning neutron star or pulsar. However, opinions on this differ: see the entry on black holes from The Physics of the Universe. But I would argue with the last paragraph to the extent that it’s not the event horizon you can never quite reach but the singularity itself. And again, all of this is conjecture supported by mathematics, not by our experience or direct observation.

3. Orbital mechanics are difficult and the first thing most theatrical depictions of science get wrong. (Remember that early Star Trek episode, where to stay in orbit a landing craft had to fire its thrusters? Unh-uh!) So here, if you want to orbit a primary like the black hole and still match speeds with another object in orbit, like the planet, you have to enter the same orbit as the object of your desire. To maintain an orbit just beyond it or further out from the primary, you must move at a different speed; you cannot “pace” the planet from a higher orbit as it goes around the black hole.
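The underlying point is the circular-orbit relation v = √(GM/r): orbital speed falls as radius grows, so a ship parked farther out must drift behind. A quick sketch with stand-in values (one solar mass, one AU), since the film gives none:

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30          # primary mass; one solar mass as a stand-in

def circular_orbit_speed(r):
    """Speed (m/s) of a circular orbit at radius r (m) around mass M."""
    return math.sqrt(G * M / r)

r_planet = 1.496e11               # the planet's orbit: 1 AU, for illustration
r_ship = 1.05 * r_planet          # "just beyond" the planet's orbit
v_planet = circular_orbit_speed(r_planet)
v_ship = circular_orbit_speed(r_ship)
print(f"planet: {v_planet / 1000:.1f} km/s, ship: {v_ship / 1000:.1f} km/s")
```

The outer orbit is both slower and longer, so the ship cannot “pace” the planet; it would need continuous thrust to fake it.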

4. Which leads to a conundrum: if you ride a light wave, you experience time normally, while to observers outside your frame of reference you would appear frozen. If time actually “stops” for you, then it would follow that the universe around you experiences an infinite amount of time compared to your experience. When you finally get off the wave and return to a more manageable velocity, the universe will have expanded to a thin gas, the stars will have burned out, and you will be left in a cold, dark, empty place. This is why travel at lightspeed is theoretically impossible. When you get where you’re going, it won’t be there anymore.

5. That’s 29,300,000,000 solar masses—a truly galactic-scale black hole! Even if my math is wrong by a couple of decimal places, we’re not dealing with a black hole formed by the collapse of any star we know about. This one eats out the hearts of entire galaxies.

6. Apparently, the starship Endurance is such a rickety contraption of modules assembled in a rotating ring that the explosive outgassing from overriding the airlock controls can blow it apart. One wonders how the IQs at NASA could have dropped so sharply since the Apollo missions. But I haven’t had so much fun with a scene since Dave Bowman crossed over to Discovery without his space helmet.

7. While it might be great fun to fall inside the event horizon of a black hole, the information you obtained would, in my opinion, be minimal. You would accelerate continuously toward the singularity. At some point you might approach the speed of light, experience time stoppage, and continue to exist in your own time frame, eating, drinking, laughing, and scratching, but not becoming aware of anything happening outside yourself. Before that happened, however, you’d probably fragment and turn into plasma and particles. In any case, you wouldn’t learn much about gravity—no matter how many spatial dimensions you invoked—and if you did learn anything, you wouldn’t be able to communicate it to the world outside the black hole. Regardless of what Stephen Hawking predicts about virtual particles appearing and annihilating each other—or not—black holes are famous for not giving up light rays, information, or their dead.

8. These are huge cylinders spinning in space to create the acceleration of artificial gravity on their inside surfaces. For stability, one usually places them at the Lagrange points in an orbital system, such as around the Moon. Creating them requires no unusual information or interpretation of gravity. You do, however, need to transport a lot of rebar, concrete, glass, hardware, potting soil, and money to some distant point in space.
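For scale, the spin needed for one g on the inner surface follows from a = ω²r; the radius below is my own assumption (O’Neill’s larger designs ran to several kilometers):

```python
import math

# Artificial gravity in a spinning cylinder: centripetal acceleration
# a = omega^2 * r must equal one g at the inner surface.

g = 9.81              # m/s^2
r = 3_000.0           # cylinder radius in meters (an assumed figure)

omega = math.sqrt(g / r)                  # angular speed, rad/s
rpm = omega * 60.0 / (2.0 * math.pi)
print(f"{rpm:.2f} rpm")                   # about 0.55 rpm -- a lazy rotation
```

The slow rotation matters: much above 2 rpm or so, inner-ear effects are commonly said to make residents queasy.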

Sunday, November 23, 2014

Stranger Than Fiction

The old adage holds that “Truth is stranger than fiction,” to which I reply, “Because fiction must be realer than truth.” That is, fiction must be plausible, while the truth only has to be verifiable. … But sometimes you gotta wonder.

As proof of the adage—at least as far as popularly accepted history goes—I offer the events of 51 years and one day ago: the assassination in Dallas of President John F. Kennedy. If I were to submit an idea for a manuscript or screenplay based on that sequence of events, I would be laughed out of both New York and Hollywood.

Consider: a U.S. president was shot in broad daylight during a motorcade along a crowded parade route. The assassin was apprehended almost immediately, held for two days, and then was himself assassinated. An investigative commission headed by the sitting Chief Justice of the Supreme Court was launched to analyze the event but could not say with any certainty how many bullets were actually fired, where they came from, or where they went. One bullet, a full-metal-jacketed round consistent with the rifle found in the book depository and presumably fired by the assassin, passed through the President’s body in the back seat of the limousine, then through the torso and wrist of the Governor of Texas sitting in front of him, and was later recovered from a hospital gurney in suspiciously pristine condition. A second bullet, supposedly of the same type from the same gun, went through the back of the President’s skull and fragmented inside his head. Evidence was produced—the Zapruder film, the puff of smoke, the grassy knoll—suggesting that the actual shooter was not in the book depository at all but instead to one side along the motorcade route, or perhaps firing from the front. And finally, the Warren Commission published its official findings and sealed all of the evidence for 75 years.

So much about the event did not add up, or was left unanswered, that a deluge of conspiracy theories quickly ensued. Key evidence was considered mishandled, misinterpreted, or forged. This is not the way history is supposed to be made.

The current popular theory, presented in Oliver Stone’s movie JFK, suggests that Lee Harvey Oswald was merely a patsy, a fall guy, and the real shooter or shooters were on the knoll. This story puts forth a conspiracy between FBI Director J. Edgar Hoover and the CIA at the behest of the “military-industrial complex” to assassinate the popular president because he was about to pull out of our involvement in Vietnam. The fact that Oswald’s assassin, Jack Ruby, appears to have had CIA connections is offered as proof: Oswald had to be killed before he could talk and show that he was just an innocent loony who could not have pulled off the job alone but was in fact working for government spies.

The Stone version is unbelievable on the face of it. First, although the anti-war Left would love to claim the martyred Kennedy as one of their own, the fact is that he subscribed to the Domino Theory and was pushing this country deeper into the Vietnam conflict because of that conviction, rather than pulling us out of it. I remember this, because I was alive then and conscious of the news. If anything, the “military-industrial complex”1 would have wanted to unseat Kennedy because of recent fiascoes like the Bay of Pigs invasion of Cuba. And if you’re an industry getting rich on government contracts, you wouldn’t relish the prospect of the United States going further into a sideshow conflict like the Vietnam insurgency, which could promise only procurements of M16 rifles, army boots, and light helicopters. Instead, you would want to focus on the Cold War and the big strategic picture, which would continue to net you lots of complex and expensive contracts for aircraft carriers, ballistic missile submarines, and heavy bombers.

Second, Earl Warren was an eminently respected jurist and a man of great intellectual honesty. If he had obtained evidence that J. Edgar Hoover had connived to assassinate a sitting U.S. president, Warren would not have hesitated to make those findings public, whatever the outcome. The idea that he might have helped to cover up an attempted coup is ludicrous.

Third, why seal everything for 75 years? Presumably that’s one whole lifetime—of anyone born on the day of the assassination—in order to protect the “innocent persons” connected with the case? What innocent persons? And how would an innocent person be damaged by the truth?

I have a theory about the assassination. I believe it fits all the facts. And I think I know why the CIA was involved and why the Warren Commission sealed their unpublished findings.

I believe Lee Harvey Oswald did in fact kill Kennedy. He was a trained marksman from the U.S. Marines, and his position in the book depository was the right one for the attempt. With the motorcade moving away from the building in a more or less straight line, he had time to align and take his shots. Of course, the range was increasing all the time, which would make the first shot most important and any that followed increasingly uncertain. It might have been better to take a position on the bridge, in front of the motorcade, so that the range would be decreasing and the chances with second and third shots getting better all the time. But that would have put the shooter in daylight, with every eye in the motorcade looking toward him. No, the shot from above and behind was the right one.

If Oswald were the patsy, the fall guy to hide the actual shooters, then these presumed professionals gave him the best shooting position. And instead they took the least likely: from the side, where a gunman would have to lead a target moving at an unpredictable speed, where a stray bullet would go off into the crowd standing opposite or, worse, end up in the limousine’s side bodywork, proving that there had been more than one shooter and that Oswald, the patsy, was not working alone. The firing position on the knoll was in the open, too exposed to passersby, while Oswald’s position on the sixth floor of the book depository was inside a window, hidden in the shadows.

Oswald was a disaffected young American. Before the assassination, he defected to the Soviet Union, spent time in Minsk at an unsatisfying job in an electrical plant, and married a Russian woman. Then he asked to be repatriated to the United States. To me, these are significant facts that tend to get lost in the high weeds of the conspiracy theories.

So here is what I think happened. Somewhere in the Soviet Union he was contacted by a low-level KGB agent. The agent recruited Oswald in a plot to assassinate Kennedy. Not being senior level, this agent possibly believed he would be making his superiors happy because of the embarrassment Kennedy had caused the Soviets by standing firm during the Cuban missile crisis. Or maybe the agent just thought, “What the hell,” and tossed Oswald over the wall to see what might happen.

Then, wonder of wonders, this unhappy dweeb actually manages to kill the President under the eyes of the Secret Service and a very large crowd. The low-level agent tells his KGB bosses, “Hey, look what I did!” But they know, as he does not, that decapitating a foreign power is the worst possible move to make during tense diplomatic times.2 They quietly dispose of their rogue agent and then call their opposite numbers in the CIA through back channels. “Ah, look, this is a big, big mess,” the Russians say. “We’ve handled it from our end, but you have to take care of your end or we’ve all got terrible problems.” To this the CIA guys say, “Okay, but remember, you owe us one.” The CIA then sends in Ruby to shoot Oswald before he stops denying his guilt and starts proclaiming his status as a new hero of the Soviet Union.

All of this goes into the lap of the Warren Commission. Everyone involved knows that if the assassination is proven to be the work of a Soviet agent then, given the current international tensions, the U.S. Congress will want to declare war. And, given the nuclear capability on both sides, this could be the end of civilization. So the commission is pleased to play up all possible alternate theories: single bullets, alternate shooters, puffs of smoke, and grassy knolls. They spread the conspiracy theories as a smoke screen, then they declare Oswald to be a lone and disaffected shooter. Nothing to see here, folks, just a disturbed young man. And finally they seal all the evidence for 75 years.

Those 75 years are up in 2039, long after the fact. By lucky coincidence, the Cold War and the Soviet Union itself will have been gone for 50 years by then. So will most of the “innocent persons” who might otherwise have been incinerated in World War III. By that time, all the conspiracy theorists will be dead, too. Only the historians will be alive to care.

Gosh, I hope I live long enough to hear the truth. I’ll be 91 by then and hope I have the wits to understand what I hear.

1. I use quotes here because, while the military and industry in this country have strong links and mutual desires, no single group of men and women or any one organization, not even the Pentagon, exists to act in their interest with concerted purpose. The “military-industrial complex” is a large and varied group of enterprises, often in competition with one another and sometimes working at cross purposes, even within the Pentagon. It’s certainly not a hierarchical body with a responsible leadership like the Roman Catholic Church or the Democratic Party.

2. Okay, Kennedy and the CIA didn’t know this either, as their repeated attempts to assassinate Castro proved. But that still doesn’t make decapitation a sensible policy.

Sunday, November 16, 2014

Look Out Among the Stars

Look up at the night sky, look through a telescope, look at the thousands of images brought back by the Hubble Space Telescope and similar huge “light buckets.” What do you see? What do you see in your mind’s eye when you think of space? Vastness, emptiness, vacuum. Coldly shining stars which, up close, are actually maelstroms of searing fire, writhing gases, fractured plasma, and immense gravity. Stellar neighborhoods—our own included—which are fields of deadly radiation at all frequencies and with particles flying through at nearly the speed of light. Interstellar neighborhoods filled with dust, ice chips, and gases that are probably poisonous and definitely unbreathable.

The universe upon which we gaze is a place of chaos, silence, and death. … Or so one would imagine.

Despite this surface appearance, I believe the universe teems with life. Life is out there, waiting to meet us, maybe to greet us, maybe to eat us. But life exists. Life, this wondrous “temporary reversal of entropy”—and ultimately the consciousness that life has led to, at least here on Earth—is the whole point of having a universe.1 Otherwise it is just empty space and explosively fusing hydrogen.

Or consider the contrary proposition: that it only happened once, and it happened here. Of all the hundreds of billions of galaxies, each containing hundreds of billions of stars, only Sol, a minor sun a third of the way out from the center of the Milky Way, and only our own Earth, a medium-sized rocky planet in Sol’s inner orbit, were visited by this phenomenal accident of chemistry. Everywhere else, just fire, gas, and inert dust.

That’s like thinking your family’s house out in the suburbs was the actual birthplace of the arts of cooking, reading and writing, or television simply because you first encountered them under that one roof. A toddler thinks this for about five minutes between the ages of two and four. Then he or she discovers neighboring children and the household next door.

Why do I think life is common? First, because we find its building blocks elsewhere in the solar system. Amino acids, the precursors to proteins, have been found on comets,2 which means that they were scattered—perhaps intentionally seeded—among the dusts and gases out of which our star and its planets formed. And carbon-based, organic chemicals are found in quantity beyond Earth’s sky. For example, the atmosphere of Saturn’s moon Titan is rich in methane. The possibility also exists that at least some of the fossil fuels we drill from the Earth’s crust were not the products of decayed life on earth—old swamp forests becoming seams of coal and methane, and tiny diatoms becoming pools of oil and domes of natural gas—but instead some of these hydrocarbons existed in the coalescing planetary matrix and were squeezed together during the Earth’s formation, rising toward the surface like the veins of gold and uranium.

Second, if you look at the Earth today, you see a planet covered with life, teeming with life, and transformed by life. But it was not always so. The atmosphere contains breathable oxygen and the soil contains nutrients only because the first microbes and then the larger organisms that evolved from them have been softening up the rock and freeing up gases for more than two billion years.3 If you had come to Earth back then, before the first life got to work on the planet, you would have found a stony surface, sterile seas, and an unbreathable atmosphere composed mostly of nitrogen with admixtures of carbon dioxide, methane, ammonia, and water vapor.4 This is a planet shaped by life, made habitable for life by its own processes. Life is the ultimate terraforming service and, through the process of evolution, it always gets the details exactly right for the type of life that will eventually live there.

If humans were to discover this solar system from afar during an interstellar scouting expedition, we would see two obvious places to look for our kind of life. First would be the Earth itself, which we would deem a remarkable and wholly unaccountable paradise. Second would be the watery world under Europa’s icy crust. We can suppose that heat from an active inner core or gravitational kneading from nearby Jupiter keeps the water under Europa’s ice liquid and perhaps warm enough for life’s processes. Europa may even offer volcanic vents rich in minerals, like those under Earth’s oceans, capable of supporting its own active biological ecology.

Mars may once have held life, although it now seems pretty dead. Perhaps some planetary disaster killed Mars. But perhaps any early atmosphere that Mars possessed leaked away because the planet was too small, its gravity too weak, to hold onto gas molecules lighter than carbon dioxide. Perhaps Mars once had an active iron core that was able to generate a magnetosphere to deflect the solar wind, as Earth’s core does. But now Mars has a thin atmosphere and high surface radiation, unsuitable for any kind of advanced life.

Venus may once have held life, because its starting conditions were very much like those on Earth, and its orbit, while nearer to the Sun, is not so near as to account for the planet’s high ambient temperature—about 870 degrees Fahrenheit—all by itself. Venus apparently suffered some kind of runaway greenhouse accident that increased both the atmospheric pressure and the cloud base. Its atmosphere is also not very nice, being mostly carbon dioxide like Mars’s but vastly denser. The sky rains sulfuric acid because of the high concentration of sulfur dioxide. In addition, radar studies of the planet’s surface indicate suspiciously few “astroblemes,” or visible impact structures. This apparent lack of asteroid bombardment, which is found everywhere else in the solar system, suggests that either the weather wears old craters away at a phenomenal rate, or the lithosphere lacks a tectonic plate structure like that of Earth. Without shifting plates and occasional volcanic eruptions along the plate boundaries, the heat from Venus’s core might build up until the entire surface simply melts and subsides, as if it were being groomed by some kind of planetary Zamboni machine.

As for the probability of finding life, the rest of our solar system would appear to that scouting mission as either gas giants with no discernible surface or barren rocks and balls of ice without atmospheres—Titan excepted. And both of our immediate neighbors, Mars and Venus, have interesting and perhaps tragic histories that render them inimical to life. But tragic for whom? If Earth was reshaped by life as it grew here, what influences might a different kind of life have brought to these planets?

We define life fairly closely.5 We recognize our kind of life by carbon-based chemical processes, a fragile cellular structure, and some degree of mobility and interaction with the environment. That may be an overly narrow definition. Who is to say that some kind of piezoelectric circuitry flowing through a crystal, or a chemical reaction among various metals, could not create a fully satisfied form of life, perhaps one with an active and questing intelligence?

In our journeys out among the stars, we will have to step carefully. Otherwise, we might mistake the local inhabitants for paperweights or some aggressive form of corrosion. But my bet is we’ll meet a fair number of carbon-based life forms out there, with some analogue of proteins shaping cellular structures that are active, mobile, and interacting with their environment. My guess is they’ll come right up and tap us on the shoulder—either to greet us or proceed to eat us.

1. “Or what’s a heaven for?”—in the words of Robert Browning.

2. See Found: First Amino Acid on a Comet, from New Scientist, August 17, 2009. Similar articles trace further discoveries. At the time of this posting, I expect the European Space Agency’s Philae probe to find similar chemicals on comet 67P/Churyumov-Gerasimenko.

3. See my blog post DNA is Everywhere from September 5, 2010.

4. But scientific views of the early Earth’s atmosphere differ. See, for example, Earth’s Early Atmosphere in Astrobiology Magazine from December 2, 2011.

5. See Between Life and Not-Life from November 9, 2014.

Sunday, November 9, 2014

Between Life and Not-Life

Recently I was fixing a broken clock,1 and that set me thinking about things that move by themselves but are not actually alive. The biological definition of life is quite exact: what differentiates animals, plants, fungi, and microbes from inorganic materials, even those of complex and exquisite design that happen to move—a leaf blowing on the wind, for example—is that living things can grow, change, sustain themselves, exhibit functional activity, react to their environment, reproduce their own kind, and eventually die. The broken clock, aside from having once exhibited functional activity and now displaying an apparent state of death, does none of these things.

Definitions are tricky things, however. They try to capture in precise words a state that can easily be understood by observation and sensed in the gut, but that may be a slippery thing for the intellect to grasp. For example, many people I know have not reproduced themselves, being childless. Many people in a vegetative state cannot sustain themselves or even react to their environment. Many more people may grow and change in the barest organic sense, but not in any intellectual, emotional, or spiritual capacity. And finally, death is not proven for any of us—especially now, with more new medical techniques being developed every year—until it actually comes. So, are these cases of people who are, according to the definition, not alive?2

Modern advances in robotics and cybernetics are going to test that definition of life even more stringently. A software program can be reproduced quite easily, and it’s no stretch of the imagination to think of a program with the right internal commands—let alone the volition of an artificial intelligence—that can replicate its own code, package the result, and send it down the line into a new computing environment.3 So, would such a program qualify as alive under at least one parameter of the biological definition?

Software fulfills other specifications for life as well. Depending on the type of code, it certainly exhibits functional activity, can grow and change, and can react to its environment in the form of received inputs and commands and internally generated outputs and displays. One might argue that a piece of code cannot sustain itself without a computer’s central processor, memory chips, storage space, and the electricity to run these bits of hardware. But then, a human being cannot sustain him- or herself without the environment of a suitable planet or space station, externally provided pieces of hardware in terms of clothing, furniture, tools, entertainments, and other comforts, and the food—generally grown and processed by others and shipped over long distances—required to sustain the human organism.

The fact that the software has required—at least initially—a human mind and human invention to create both the code and the machine on which it runs has no bearing on the definition of life. After all, humans did not create the planet on which they live, the solar energy that drives its climate and crops, or the air they need to breathe. And humans did not create themselves from first principles, either. Questions about a creator god, or the origins of the organic chemical reactions necessary to promote molecular biology, are outside the biological definition of life.

So life according to the old biological definition may be complete for animals and plants, but insufficient to encompass the new world that technology is making for us. And it may not be sufficient for all the types of beings we might find out among the stars.

If you asked me to create an iron-clad definition of life, suitable for all purposes, I would strip the biological definition down to its barest chassis. Life is an open-ended process, reflecting functional activity that is usually, but not always, carried out by an underlying mechanism employing material substances which interact through energy inputs. And that process must be susceptible to interruption and cessation.4

Under these terms, an artificial intelligence operating on a computer chip or inside a robot could be considered alive. So could an automated factory or a fire engine. Questions about volition, free will, or freedom of action and movement are outside my stripped-down definition of life, as they are for the biological definition. After all, clams, mussels, and corals are all alive and yet have no volition or free will to do other than settle down on a rock or in the sand and perform the filter-feeding for which their bodies were designed. A domesticated horse that lives in a stable, gets hitched to a plow or wagon every day, and eats what, when, and where its human owner directs is hardly able to exercise its innate volition. And humans who are enslaved or under the psychological domination of another person have vastly diminished capacity for free will and freedom of action.

In my definition, the difference between a horse and a fire engine is that the horse can refuse the orders of its master, can fight the bit, shy back from the harness, and balk at the feedbag. The slave can decline into a psychological depression, lie down, and choose to die. But the fire engine goes where its driver directs—even if that means plowing into a brick wall at sixty miles per hour. Yet the exercise of free will is still not part of the definition of life.

When we go out among the stars we are going to find many strange and wonderful things, and not all of them will be in their first flower of growth and development. We will discover decaying worlds full of automated machinery and robots, either waiting dormant or still actively functioning, but left over from the organic civilizations that invented them, perfected them, used them, and then died out either slowly or rapidly, leaving them behind. We will discover slave cultures whose biology, capacities, and expectations were manipulated into a state of perfection—perfect in terms of what the manipulators desired as to nutritional requirements, mental and physical capacities, and personal direction and volition—and then left to collapse or evolve on their own when the master race died out.

And note that in my stripped-down definition I added “but not always” to the part about an underlying substantive mechanism. This allows for the sort of life forms made of pure energy—presumably, once-organic creatures who have surpassed their physical bodies and become pure spirit—that are found in much of science fiction.5 Who is to say that patterns previously established in the electrochemical circuitry of neural nets or the silicon pathways of chip sets might not re-form and propagate outside those physical structures? Certainly, self-sustaining electrical circuits are not known in our definition of physics. But we know that radio waves propagate in complex patterns of photons outside any conducting aether or physical fluid. And who is to say that our definition of physics is so very complete?6

Perhaps one day we will discover that leaves blowing on the wind do have minds of their own.

1. No, not fancy clockwork filled with gears and springs, as in the picture—that’s just to draw your attention. I am not that mechanically gifted. Instead, it was one of those black boxes with a three-layered drive shaft and a place for a double-A battery, which then fits through any fancy clockface as part of an art project. But the principle is the same: as an object with its own functional activity, the little black box had stopped working.

2. I know, I’m treading the distinction between group characteristics of the species Homo sapiens and the individual characteristics of a John or Jane Doe. But the definition of life is also generally stated without making such a distinction.

3. That’s the basis of my early novel ME: A Novel of Self-Discovery. But you don’t have to look to fiction for self-replicating software: tapeworms, viruses, and other malware do it all the time—and with a persistence and tenacity that mimics life itself.

4. I’ve thrown in the notion of death just so we don’t have to consider the roiling plasma inside a star as some kind of life form. And come to think of it, every star grows, changes, is self-sustaining—so long as its gravity and its fusible elements remain in proper proportion—and eventually it dies out with either a bang or a cinder.

5. Not to mention a few gothic horror and ghost stories.

6. See Three Things We Don’t Know About Physics (I) and (II) from December 30, 2012, and January 6, 2013.

Sunday, November 2, 2014

The Saving Grace of Democracy

Although I am conservative by nature, my political leaning is that of a “little-D democrat.” That is, I believe in the innate wisdom of crowds and trust in the consensus of a large group of people.

In John Brunner’s 1975 novel of an internet-connected future, Shockwave Rider, the main character at one point operates a Delphi poll. The Delphi method is a technique for establishing consensus by asking a group of experts the same question and comparing their answers to find some kind of empirical truth. In Brunner’s novel, using the vastly more powerful resources of the internet, the character posts questions online, invites thousands of average people to respond, and tabulates the results. The questions could be specific—such as “How many cars does Ford make in a year?” or “How many hospitals are there in the U.S.?”—or they could be purely speculative—such as “When will humankind have a base on Mars?” Some people will possess expert knowledge of the subject, like Ford production managers, hospital accreditation examiners, and NASA administrators. And some will just take wild guesses. Brunner’s point was that it didn’t matter. If you averaged the results, you would come eerily close to the correct answer or, in the case of speculative questions, a surprisingly reasonable estimate.1

Brunner’s summation on the Delphi poll: “While nobody knows what’s going on around here, everybody knows what’s going on around here.”
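Brunner’s averaging trick is easy to demonstrate with a toy simulation. The sketch below is my own illustration, not anything from the novel: every respondent guesses the true answer with a large individual error, yet the mean of all the guesses lands far closer to the truth than the typical guess does.

```python
import random

def delphi_estimate(true_value, n_respondents, spread=0.5, seed=42):
    """Simulate a Delphi-style poll: each respondent guesses the true
    value with up to +/- spread (here 50%) individual error, and the
    poll reports the mean of all guesses."""
    rng = random.Random(seed)
    guesses = [true_value * rng.uniform(1 - spread, 1 + spread)
               for _ in range(n_respondents)]
    mean_guess = sum(guesses) / len(guesses)
    # Compare the typical individual error with the crowd's error.
    individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)
    crowd_error = abs(mean_guess - true_value)
    return mean_guess, individual_error, crowd_error

# Hypothetical question: how many hospitals are there in the U.S.?
# Assume the true answer is 6,000 and poll 10,000 people.
mean_guess, individual_err, crowd_err = delphi_estimate(6000, 10000)
print(crowd_err < individual_err)  # prints True: the average beats the typical guess
```

With symmetric individual errors, the crowd’s error shrinks roughly as the square root of the number of respondents, which is why “nobody knows, but everybody knows.”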

The Germans call it Zeitgeist, the “spirit of the times,” or more loosely the consensus of a culture and the society it produces. This is the collected knowledge, the folk wisdom, and the expectations and limitations within which any individual operates. It’s the tide that any individual swims with, or against. So the Delphi poll and the consensus it samples work within a broadly defined group. You can ask 21st-century Americans about Ford’s production figures and get a pretty good reading. Try the same question with a group of Somalis or Sudanese, and you will probably be less satisfied.

This kind of collective wisdom harks back to the Lincoln quote: “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time”—where the fooling involves working against what the culture holds as common knowledge or accepted wisdom. So Lincoln was a little-D democrat, too. Any one person can be a damn fool at some time or another, but the mass of men—the common run of the country—is collectively going to be pretty smart.

But as a conservative, I also believe in the wisdom of individuals and trust in the insight that the dedicated expert may amass by training and study. I believe in individual genius, personal responsibility for actions and intentions, and the virtues of contrarian thinking. So I am also a “little-L libertarian.” Furthermore, as a thorny individualist myself, I don’t relish my intentions and actions being judged, let alone constrained, by a crowd of people who operate on guesses and instinct coupled with the fancies and folktales their grandmothers whispered to them in their childhood.

It’s a conundrum—and one that faces all western technological societies most strongly in the 21st century.

The commonest form of government in the developed countries of the West is some kind of representational democracy. Local people elect representatives to go to Washington, London, Berlin, or Tokyo and sit in Congress, Parliament, the Bundestag, or the Diet to decide the issues of the day and make laws for the whole country. Some kind of overall leader may exist, whether a prime minister chosen from among the legislators or a separate executive elected at large, but the power still resides in the collected group of representatives. This form of democracy developed, not from any philosophical principle, but from pure logistics. In the 17th and 18th centuries, when these systems were codified, it was actually impossible to bring all the citizenry of a sprawling, agrarian country together to hear and debate issues. The direct and personal democracy of the village, the tribe, or the Greek city-state did not work among millions of people scattered across a whole country, let alone a continent.

Even then, it might take weeks for an elected representative to travel between his constituency and the national capital, and to communicate with the electorate from day to day in any detail on the current issues—whether by letters, newspaper articles, or published pamphlets—was simply impractical. So each locale voted for its representative, sent him off to the big city, and trusted in his judgment.2 As issues became more complex, as laws became more exacting and tried to encompass more specifics and exceptions, the staffs of the individual legislators had to grow in order to keep up with the tide. And the staffs of the various cabinet posts, government departments, and other agencies and commissions charged with executing and administering the laws also grew. Our laws are now written by staffs recruited from lobbyists, interest groups, and lawyers, and executed by departments full of civil servants and registered contractors.

Today, modern elective governments are pulled in two directions. On one side, the actual power of decision in the execution and enforcement of all these complex laws is in the hands of non-elected experts who were hired into those government departments, agencies, and commissions. Presumably, they were hired because they had the education, skills, and insight to deal with the actual cases that might arise. So one could hope that people working in the Department of Energy know something about electricity, gas, nuclear power, transmission, and science, and that people working in the Department of Education know something about children, teaching, and psychology. If I am right about trusting educated and dedicated experts, then we would seem to be in good hands.

On the other side of the question, today our modern, computerized communication technologies like the internet, social media, polling techniques, and data mining tend to make the representative nature of modern democracy almost charmingly obsolete. Why pick an individual to speak for a locale, pack him on a horse, and send him off to the big city when any citizen with a telephone and a computer can dial into a polling service, post a blog or comment on one, share in a viral meme, and make his or her views instantly known? We are coming back around to the Greek ideal, where every citizen can pick up an electronic potsherd, make his or her mark, and put it in the computerized jar. If I am right about trusting the wisdom of crowds, then we might be in even more capable hands by working through the new digital democracy.

Except … except … we can see that kind of direct democracy at work in California, where statewide propositions are launched by petition to make laws that bypass the state senate and assembly. Sometimes good laws are made and survive judicial review. But sometimes mischief is set afoot that requires eventual backtracking and rebuilding to set things right. And sometimes the result is just a nuisance.3

So a government of experts might be preferable … except that even the smartest, most educated, most distinguished genius can still have his or her moments of foolishness or suffer from personal quirks that render a considered opinion or decision foolish in one dimension or another. And an expert who has been granted authority of position and the power of decision can lose all humility and come to believe that he or she exercises some kind of divine right over the rest of us. Arrogance is a special province of fools.

So the conundrum remains. Should we trust the wisdom of the many? Or of the few?

I don’t have an answer for this. But sometimes the issue makes me want to retreat to a high mountain valley that has a good water source and defendable passes, and then barricade the road behind me. I would take along only my family and a few friends—but then how are we to organize and govern ourselves? Still a conundrum. …

1. This is not unlike the old carnival challenge of guessing the number of jellybeans in a jar. Any one person’s guess might be laughably far off, but the average of all guesses would fall within a few beans of the exact count. And about a decade ago the magazine Popular Science ran a Delphi poll on its back cover, asking readers to answer speculative questions about science and technology. Although I never saw their published results, I’m sure the editors were looking for insight into the future.
       Something similar takes place in any bookmaking operation: by taking bets on a sporting proposition—such as which horse is faster or which fighter hits harder and has more stamina—the bookie establishes the odds or probabilities based on the wisdom of a crowd of gamblers. Some of them will bet with a keen eye for horseflesh or human heft. Some will bet because they like the color of the jockey’s silks or the look in the fighter’s eye. And some will bet knowing that the fix is in. … It all averages out.

2. Here I would use the gender-equivalent “him or her,” except in this case women were not part of the equation until the early 20th century—and our society bears more shame for that.

3. For an example of mischief, consider Proposition 13 from 1978, which effectively freezes property tax rates for people who don’t move around much but settle in one home for years at a time. Intended to control public spending by limiting the power of taxation, that initiative has created an imbalance in who bears the tax burden, which leaves many cities and counties still struggling.
       For a nuisance, consider Proposition 65 from 1986, which requires every product and public building to post a standardized notice about possible toxins found inside. Since we all live in a dangerous world, and modern science can detect chemicals in increasingly minute concentrations, the warnings are posted everywhere. No sane person stops at the door because of a Proposition 65 warning anymore, and so we are desensitized to all possible hazards.