Sunday, September 16, 2018

Situational Ethics

Ancient of Days

William Blake’s Ancient of Days

A young friend of the family recently sat through the first day of a freshman ethics class. The teacher’s first question, put to a show of hands, was how many of the students believed ethics are a social and cultural construct. All but one hand went up. And how many thought ethics are a universal given? Only my young friend’s hand went up. At that point, the teacher told him that he was wrong, and he later dropped the class.

This appears to be a doctrine of our times, at least in the academic world: that everything is a cultural construct, from morality to sexuality to the principles of science itself. Of course, if everything is a construct, then one might ask whether the construct might somehow, somewhere be constructed differently. The old values that you learned “at your mother’s knee,” or in your church or synagogue, or as the bedrock of your native civilization can then be characterized as local, parochial, and false. And new values—values more suited to the questioner’s purpose—might be substituted in their place. But I digress …

My first quarrel with this teacher—whom I never met, except in the abstract of the story—is that this definition of “ethics” is too broad. Yes, some questions of ethics and morality are culturally based, like not pointing the sole of your shoe at a person in some Eastern cultures. Even some prohibitions that we in the West hold to be universal, like the one against intentional killing, can be culturally and situationally suspended. Every war is based on provisionally ignoring that commandment.

Early in my studies about Zen, I learned that the response to certain types of questions should properly be mu, or “no thing.” When a question is too broad, or poses an assumed but unproven dichotomy, or creates a logical fallacy, then the answer cannot be either “yes” or “no.” So the only right answer is “no thing,” meaning “the question does not apply.” And that would be my answer to this ethics teacher’s question.

Yes, certain ethical practices that shade between etiquette and morality—like pointing with your shoe—are purely cultural. But not all of them are minor matters of petty insult. In other Eastern cultures, for example, a father may kill his children if they dishonor the family, and religious persons are called upon to deceive, beset, and even sometimes kill idolaters and nonbelievers who remain steadfast and unrepentant in their error. In other cultures and contexts, however, these practices are simply wrong, wrong, wrong.

But I would argue that there is a universality to certain basic ethical questions. The transmission of the principle may be cultural, as told in religious stories, fables, children’s fairytales—or simply passed on from parent to child—but the principle remains solidly based in the dynamics of human interaction.

For example, I would challenge the ethics teacher to name one society that would condone, approve, or recommend coming up behind a stranger, bashing his head with a rock, and then picking through his pockets for his wallet and other valuables. The victim is not known to be a nonbeliever or idolater or belonging to any other class worthy of killing. The act is not motivated by mercy killing or implemented as part of wartime tactics. It is purely intended for personal gain.

Name a society that condones telling lies to someone who has reason to trust you—friend, family member, or other responsible person in your community—again for the purposes of personal gain. These are not the “white lies” of commission or omission on the order of answering the question “Do I look fat in these jeans?” This is lying in order to swindle someone out of land, money, or some valued possession that the liar wants to obtain for him- or herself.

Name a society that recommends or supports the genocide of a people who have previously been accepted and valued in the community, people who were once friends and neighbors but have suddenly become “the other” and outsiders for the political, economic, or religious purposes of some subset of the community.

The list could go on indefinitely. And it’s not that people don’t do these things, or that they sometimes get away with them during the upheavals of war, economic disintegration, or natural disaster. But find me a society or culture that would point to these ethical challenges and say that this is right and proper behavior.

I am not arguing that these actions are wrong because a god or a religious book somewhere said thou shalt not kill, lie, cheat, steal, or murder your enemies once you get the upper hand. Many religious traditions do transmit these and other cultural values and still prohibit such foul deeds. My argument is that these ethical principles are like the adaptations of biological evolution. They are so, not just because your tribe or culture says so, not because your god or your priest invokes them, but because these are the only ways in which human civilization can reliably function.

If a person cannot walk the streets without fear of becoming the victim of imminent and unrestrained murder for profit, then you don’t have a society but a jungle. If you cannot trust your friends, family, and respected members of your community to have your best interests at heart and seek to protect your life and rights to property and security, then you don’t have a family or a friend—or a community. And if your extension of good will and fair dealing to others in your society can sour to the point of murder over matters of race, religion, politics, or other noncritical and immaterial differences, then again you don’t have a society but a state of undeclared war.

Every species on Earth represents a hard-fought and -won adaptation to a particular environmental niche. The bodily configuration, reactions, capabilities, energy levels, and metabolism of any one species are not designed by an intelligence or selected according to some ideal pattern. Instead, they developed and were perfected over time because these features worked best in that place. And the fact that we see some of these species as precious and beautiful—think of songbirds and butterflies—is a fact of our own evolution, while the fact that we see others as creepy and scary—think spiders and alligators—is also evolutionary. We humans are evolved to find both beauty and terror in this world. We are adapted to this environment. If we had adapted to metabolizing sulfur compounds in the dark and boiling water of an undersea volcanic vent, we would find that kind of life beautiful, too.

In the same way, our nature—human nature—has evolved over time. While some of this evolution is adaptive to the physical environment—such as our peripheral vision, allowing us to perceive subtle movements in the bushes beside us, which might be a leopard waiting to pounce—much of our nature evolved in relation to our mental environment. Like many other mammals and some insects, we are social creatures. Our life exists in both the physical world and in the mental world of dealing with others of our kind, predicting their actions and reactions, and keeping ourselves and our loved ones safe.

In this sense, yes, much of our ethical teaching is a social construct. But it is not cultural in the sense of being limited to one cultural interpretation—say, Western Civilization—and either useless or irrelevant, and perhaps harmful, in terms of other cultures around the world, like being careless about where you point your shoe.

The core issues of ethics and morality are human issues, which means they bridge cultural affectations. They are so universal that they might as well have been pronounced by a god and preserved in a religious book. Because the image of that god is always created from some aspect of human nature and our species’ collective wisdom.

Sunday, August 26, 2018

Tracing Evolution Backwards

Jupiter’s moon Europa

This meditation is an extension of a series of Facebook posts around the question of what conditions are necessary for the development of life, which itself is an extension of the Drake equation for estimating the probability of finding other life and civilizations in the universe. The proposer, William Maness, a Facebook acquaintance of mine, posted: “Let’s go the other way. Let’s say that Earth’s condition is astonishingly rare. How rare does it have to be to be the only one in the galaxy. How rare to be the only one in the universe?”

And then he proposed conditions for life on Earth as we know it: strong magnetic field, stable sun, Goldilocks zone (meaning both the right part of the galaxy, in terms of density of nearby stars and their radiation, as well as the solar system’s “habitable zone,” with planetary temperatures that can support liquid water), a large companion body (to create tides, which set a pattern of inundation and exposure for sea life at the edge of the land, among other things), no gamma emitters nearby, debris-cleared orbit (to minimize life-killing asteroid impacts), abundant liquid water, no conditions that kill carbon life in said ocean, an active lithosphere (with plate tectonics to renew the surface, replenish the atmosphere, and relieve geothermal stresses1), an active water cycle, and a transparent atmosphere. “These are just a few that come to mind,” he wrote.
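To get a feel for the arithmetic of “how rare does it have to be,” here is a minimal sketch in Python. It assumes, and these are my assumptions, not Maness’s, that the conditions are independent and equally probable, and it uses round-number star counts.

```python
# If life requires k independent conditions, each met with probability p,
# then the expected number of qualifying systems among n stars is n * p**k.
# Solving n * p**k = 1 gives the per-condition rarity needed for Earth
# to be unique. Star counts and k are rough assumptions.

N_GALAXY = 4e11      # stars in the Milky Way (rough)
N_UNIVERSE = 2e23    # stars in the observable universe (rough)
K_CONDITIONS = 10    # roughly the number of conditions listed above

def required_rarity(n_stars: float, k: int) -> float:
    """Per-condition probability p such that n_stars * p**k equals 1."""
    return (1.0 / n_stars) ** (1.0 / k)

for label, n in (("galaxy", N_GALAXY), ("universe", N_UNIVERSE)):
    p = required_rarity(n, K_CONDITIONS)
    print(f"only one in the {label}: each condition met by ~{p:.1%} of stars")
```

Run with these numbers, the sketch says each of ten independent conditions would have to hold for only about seven percent of stars for Earth to be unique in the galaxy, and about half a percent for uniqueness in the universe. Stringent, but not obviously absurd, which is why the question is worth asking.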

My first response was to say that some of these conditions overlap and work to the same purpose. For example, the conditions of having a strong magnetic field and a stable sun are related, as their result is to protect developing and existing life from the solar wind and radiation bursts. Having no nearby gamma emitters is part of that requirement, too. But note that if your definition of life includes—or is limited to—cockroaches and tardigrades, which seem not to care much about hard radiation, these several requirements may not be absolute.

Having liquid water and an abundance of carbon are nice. But as I’ve noted elsewhere,2 you could construct a parallel DNA chemistry from silicon and arsenic. The silicon atom has the same chemical-bonding valence as carbon, while arsenic has the same valence as phosphorus. So silicon might replace the carbon atoms in the ribose rings and the purines and pyrimidines that are the main features of DNA and RNA molecules. And arsenic might replace the phosphorus atoms in the bonds that connect those ribose rings into a long-chain polymer. The resulting molecules would be heavier, of course, having a higher aggregate atomic weight. And they would be somewhat more fragile, because their traded electrons would occupy a higher electron orbit. But these replicant molecules would still function like carbon-based DNA.
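As a quick sanity check on that “heavier” claim, here is a back-of-the-envelope mass comparison in Python. The choice of ribose and a hypothetical silicon analog is mine, the atomic weights are standard, and this says nothing about whether such chemistry would actually be stable.

```python
# Rough molecular-weight comparison: ribose (C5H10O5) versus a
# hypothetical silicon analog (Si5H10O5), plus the phosphorus-to-arsenic
# swap in the backbone. Illustration only, not real biochemistry.

ATOMIC_WEIGHT = {"H": 1.008, "C": 12.011, "O": 15.999,
                 "Si": 28.086, "P": 30.974, "As": 74.922}

def mol_weight(formula: dict) -> float:
    """Sum of atomic weights for a formula given as {element: count}."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())

ribose = mol_weight({"C": 5, "H": 10, "O": 5})
si_analog = mol_weight({"Si": 5, "H": 10, "O": 5})

print(f"ribose: {ribose:.1f} Da, silicon analog: {si_analog:.1f} Da "
      f"({si_analog / ribose:.2f}x heavier)")
print(f"P -> As swap: {ATOMIC_WEIGHT['As'] / ATOMIC_WEIGHT['P']:.2f}x "
      f"heavier per backbone linker atom")
```

The silicon sugar ring comes out roughly half again as heavy, and each arsenic linker well over twice the weight of the phosphorus it replaces, consistent with the heavier, more fragile molecule described above.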

And liquid water does have some unique properties. The water molecule can be split into its component oxygen and hydrogen atoms. The molecule has an asymmetrical arrangement, placing the two hydrogen atoms about 104.5 degrees apart on one side of the oxygen atom, creating a positive and a negative side to each molecule. This arrangement allows other molecules to be either “hydrophilic” and attracted to water or “hydrophobic” and repelled by it. Water as a fluid is also relatively incompressible—you can’t squeeze it in its liquid phase—so that the water in a deep lake or ocean doesn’t get thicker and sludgier as you descend, becoming paste-like or semi-solid. Instead, the pressure just increases while the density remains the same. These features create an important condition for life forms like Earth’s sea creatures, who are composed of mostly water themselves, metabolize the oxygen dissolved in water, and range freely from the surface to the deeps.

That angular separation on the water molecule forces it to form a hexagonal crystal when frozen, so that the solid phase is actually less dense than the liquid phase, enabling it to float. If solid water sank to the bottom of a pond or ocean, where temperatures are generally cooler, then a temporary drop in ambient temperature might freeze any body of water solid. And there it would stay frozen for who knows how long—not until next summer but more likely until the next extreme in the climate cycle.

But other liquids with a low chemical reactivity and low compressibility could support life almost as well as water does—although it would be chemically and physically different from ours and might prefer different ambient conditions.

Other planetary features like a large companion (for tides) and active lithosphere (for plate tectonics and volcanoes) are only required for the kind of life we recognize. I’m betting that, when we find life out there among the stars, it will surprise us. But that wasn’t the premise of the question as originally posed, which acknowledged that it was working backward (i.e., “going the other way”): What kind of conditions will produce us, the life that we know and recognize? And that may be too limiting a definition.

We can take it as a given that the same laws of physics and chemistry exist elsewhere throughout the universe. Go to any other star with a planet, and you’ll find the same atoms from our Periodic Table—although not necessarily in the same abundance and distribution. They will tend to form similar molecules—although perhaps with different underlying chemical reactions having different, temperature-dependent endo- and exothermic requirements—and so the abundance and distribution of life-creating or life-destroying substances will depend on local conditions. The gravity curve will follow the equations we use to measure it here on Earth—although the resulting values will necessarily be different, based on solar and planetary density and distance. The physics of electromagnetism and radiation will apply—although the quality of the light and its effects on biochemistry and biodiversity will be different, based on the output of the local star.
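To make the “same equations, different values” point concrete, here is a minimal sketch using Newtonian surface gravity. Earth’s density and radius are real figures; the second planet is an invented example.

```python
import math

# Surface gravity of a uniform sphere: g = G*M/r^2 = (4/3)*pi*G*rho*r.
# The law is universal; only the local values of density and radius change.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(density: float, radius: float) -> float:
    """Surface gravity in m/s^2 for density in kg/m^3 and radius in m."""
    return (4.0 / 3.0) * math.pi * G * density * radius

earth = surface_gravity(5514, 6.371e6)   # Earth's mean density and radius
alien = surface_gravity(4000, 8.0e6)     # hypothetical, less dense but larger

print(f"Earth: {earth:.2f} m/s^2")
print(f"Hypothetical world: {alien:.2f} m/s^2")
```

The same formula yields about 9.8 m/s² for Earth and about 8.9 m/s² for the invented world: identical physics, different local numbers.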

The nature of life, however defined, is that it evolves in and adapts to the environment it finds. Otherwise, whatever you find on a new planet is just an artifact or an exception. This presumes, of course, that evolution is present on the planet and is based on a system of replicating molecules, similar but not necessarily identical to Earth’s DNA-RNA-protein coding system.3 Once the principle of replication-with-modification becomes established and gives rise to “life,” it will already be adapted to the conditions that it finds and then change itself as they change.

This evolution will be able to give rise to organisms that are not like us either physically or chemically. Even on Earth, and working under the DNA-RNA-protein coding system, we can find life that is strange and different. Consider the organisms that our deep-ocean searches have discovered clinging to the sides of undersea volcanic vents: adapted to total darkness and huge pressures, tolerating the extreme temperatures of superheated water, and metabolizing sulfur compounds instead of carbohydrates. The life that we recognize from this planet’s surface was able to descend and adapt to that hell. Or rather, our kind of life didn’t adapt itself: those of its great-great-grandchildren that happened, through compounding genetic mutations, to survive there became able to thrive under those conditions. Remember that the original life on Earth evolved in a carbon dioxide–rich atmosphere. Then plants began metabolizing that carbon in a photosynthetic reaction driven by sunlight and released free oxygen into the atmosphere. Only then did later organisms—“our” kind of life, which moves, wiggles, walks, and talks—adapt to breathe and metabolize that oxygen.

As for what conditions might be required to create life, consider the smallest of the Galilean moons, Europa. Jupiter is not in the Sun’s “habitable zone,” with temperatures that generally keep water a liquid. Still, Europa is suspected of having an ocean under its icy shell that is kept warm by tidal flexing in its orbit around the giant planet. The ocean under the ice might contain life, protected not by a thick atmosphere and planetary magnetic field, as on Earth, but by the layers of ice themselves, because water is a good shield against radiation.4 Whatever life develops in this ocean would be different from ours—not based on or even seeing the Sun’s light, with no possibility of moving out onto land and developing the things we humans cherish, like fire, metals, and radio and television. But it would still be life under conditions that do not entirely match those on Earth.

When we get out among the stars, we’re going to have to expand our definition of life exponentially. I suspect that will quickly turn our teaching of biology—and so much else—on its head.

1. If you think geothermal stress isn’t important, consider Earth’s sister planet, Venus. By studying the uniformly limited number and apparent recent age of the impact craters on the surface, astronomers have determined that Venus must lack a system of plate tectonics, with its corresponding subduction of surface layers and creation of volcanic hot spots that release core heat, as on Earth. Instead, the planet appears to go through periodic renewals, where the entire surface melts from within and then resolidifies. That would be bad for any life trying to gain a foothold on the rocks there.

2. See The God Molecule from May 28, 2017.

3. For example, a machine-based organism that was able to sample its environment and rewrite its underlying operating code to thrive under those conditions would be a similar but different analog of our biological kind of life. For that matter, you might consider our molecular form of life as simply a kind of nanotechnology.

4. When I was in college, I had a roommate who worked as shift operator at the university’s TRIGA reactor. This was one of those “swimming pool” reactors, used for research, training, and experiments with radiation. When he took me on a tour, we stood at the railing and looked directly down at the reactor core, which when operating glowed with the beautiful blue light of Cherenkov radiation. I pointed at the active core and asked my roommate, “Why am I not dead?” He replied that the twenty feet of water between us and the radiation flux with its fast neutrons was better protection than a foot of lead shielding. I then saw bubbles of gas rising from the reactor and bursting on the surface about eight feet away. I asked what it was, and he said it was a radioactive isotope of oxygen. “Why am I not dead?” Because that isotope possesses a half-life of eight seconds and had mostly decayed to regular oxygen by the time it reached the surface, where any residual isotope dissipated into the room’s atmosphere before decaying further.
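For the curious, the arithmetic behind my roommate’s answer is simple exponential decay. Here is a minimal sketch; the eight-second half-life comes from the anecdote, and the sample times are assumptions of mine.

```python
# Radioactive decay: the fraction of a sample remaining after time t,
# given half-life T, is 0.5 ** (t / T). Half-life as quoted in the story.

HALF_LIFE_S = 8.0  # seconds, per the anecdote

def fraction_remaining(t_seconds: float, half_life: float = HALF_LIFE_S) -> float:
    """Fraction of the original isotope left after t_seconds."""
    return 0.5 ** (t_seconds / half_life)

for t in (8, 16, 24, 40):
    print(f"after {t:2d} s: {fraction_remaining(t):5.1%} remains")
```

If the bubbles took even a few half-lives to reach the surface, only a small fraction of the original isotope would arrive there, which is why the answer to “why am I not dead” is reassuring.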

Sunday, August 19, 2018

Thinking With Our Skin

Embryo fold

It’s fascinating to think about how an embryo, beginning as a single cell that combines the genetic material of a mother’s egg and a father’s sperm, develops into a complex, multi-celled organism with each cell having its own place and function.

I’ve written before about the beginning of the development process,1 as described by the late Eric Davidson at Caltech, who was working with sea urchin embryos. After the original zygote created by the union of egg and sperm had divided again and again to form a hollow sphere of cells called a blastula, he and his team sacrificed these embryos at successive fifteen-minute intervals to map out the interaction of genes in the nuclei of these undifferentiated cells. The team discovered that, depending on where in the sphere a cell was situated and on the time elapsed since the blastula formed, one gene would produce a bit of microRNA that moved elsewhere inside the nucleus and promoted another gene to make a different miRNA.

This process of promoting different genes continued—all without coding for any proteins—and directed each particular cell toward becoming a different and unique body part, like a piece of the spine or a section of gut. These various bits of microRNA and their interactions formed an instruction set and timing mechanism for developing the entire animal. Davidson’s team then compared these miRNA patterns with those of other animals that had not shared an ancestor with sea urchins for millions of years and found the mechanism to be “highly conserved”—which means that something similar takes place inside the cells of a human embryo.
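The logic of such a cascade, in which the gene that fires next depends on where the cell sits and how much time has passed, can be caricatured in a few lines of Python. This is a toy model; the gene names, regions, and rules are invented and bear no resemblance to Davidson’s actual network.

```python
# Toy model of a position- and time-dependent regulatory cascade.
# Each (region, step) pair names the gene whose product promotes the
# next step. All names and rules here are invented for illustration.

RULES = {
    ("vegetal", 0): "geneA",
    ("vegetal", 1): "geneB",      # promoted by geneA's microRNA
    ("vegetal", 2): "gut fate",
    ("animal", 0): "geneC",
    ("animal", 1): "geneD",       # promoted by geneC's microRNA
    ("animal", 2): "skin fate",
}

def run_cascade(region: str, steps: int = 3) -> list:
    """Sequence of gene activations for a cell in the given region."""
    return [RULES[(region, t)] for t in range(steps)]

for region in ("vegetal", "animal"):
    print(f"{region} cell: " + " -> ".join(run_cascade(region)))
```

Two cells with identical genomes end up on different paths purely because of where they sit and when each promotion fires, which is the essence of what the fifteen-minute sampling revealed.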

In mammals and the vertebrate animals closely related to them by evolution, the blastula sphere develops into the gastrula, a hollow, cup-shaped structure composed of three layers of cells. The inside layer, the endoderm, eventually becomes the gut with associated structures like the small and large intestines and organs like the stomach, pancreas, liver, and lungs. The middle layer, the mesoderm, gives rise to the connective tissues including muscles, bones, bone marrow, and blood and lymphatic vessels, and to associated organs like the heart, kidneys, adrenal glands, and sex organs. The outside layer, the ectoderm, becomes the skin with associated structures like hair and nails, the nasal cavity, sense organs including the lens of the eye and the tongue, parts of the mouth including the teeth, and the anus. The ectoderm also forms the body’s nervous tissue including the spine and brain.

So how does that outer layer of this cup-shaped gastrula become something so interior to our bodies as the brain inside its bony skull and the spinal cord inside its chain of bony vertebrae? The answer is that during embryonic development of this outer layer, the ectoderm acquires what’s called a neural fold. A groove forms in the layer that soon folds over to become a hollow tube, the neural tube, which eventually becomes the spinal cord inside its sheath of protective bones. The anterior or front end of this tube becomes the brain. In primitive life forms, the brain remains just a cluster of nerve cells, a ganglion. In more developed organisms—from fish on up to humans—this ganglion develops a complex structure with the brain stem, where consciousness originates; the cerebellum and the rest of the hindbrain, governing autonomic functions like breathing and balance; the neocortex, governing thinking, speech, and fine motor control; and the limbic system, which is associated with emotions, instincts, moods, and the creation of memories.

It’s no accident that the nervous system arises from the skin, because sensation through the skin is the brain’s first major contact with the outside world. Other structures derived from the ectoderm include the eyes, ears, taste buds, and sense organs in the nasal cavity. It might at first seem that the brain, where the cell bodies of the neurons reside, should form first and then extend long, thread-like nerve fibers, the axons, down along the spine and out into the skin to get such widespread coverage. But instead these structures all form in place from the same tissue, starting embedded in the skin.

At the same time that the neural fold is forming the spine and brain, the cup-shaped structure of the ectoderm and endoderm curves around and fuses to form the body cavity. That puts the guts on the inside, the skin on the outside, and the skeleton and muscles somewhere in between. All of these tissues are developing together and sometimes—as with the mesoderm muscles of the heart and the endoderm structure of the lungs—merge to form integrated systems.

But how does the developing organism know which end of the spine will become the “anterior” as well as what and where the posterior might be? What directs one end of the hollow tube that represents our bodies to become the mouth and nasal cavity, with their sense clusters, and the other end to become the anus? One further set of genes is needed to manage all this, the homeobox genes, or “hox” for short.

This is another highly conserved gene set. Fish have it, as do frogs, lizards, dinosaurs, birds, and all the mammals. So do the insects and arachnids. The hox genes are only active during embryonic development and determine the major body parts that we share with all these other animals: the head with its brain or nerve cluster, the major sensory organs, and mouth parts; the thorax with the heart, lungs, and nexus of the blood vessels; and the abdomen with its digestive and reproductive systems. In human beings, the thorax is enclosed inside the ribcage and separated from the abdomen by the diaphragm. In insects like the fruit fly and arachnids like spiders, the thorax and abdomen are separate body structures. The hox genes also define the limbs and where they are attached: four limbs connected to the spine in the tetrapods, which developed out of lobe-finned fish and first walked on land—that’s us, along with frogs, lizards, dinosaurs, birds, and all the mammals. Other less closely related animals like insects and arachnids have multiple legs attached to the thorax and sometimes wings, too.

It’s not just a coincidence that we share the same basic body structure with fish and frogs. It’s written into our genes. I always marveled at the movie Avatar, where on the planet Pandora the humanoid natives, the Na’vi, are four-limbed like the Earthly humans, but every other species in close evolutionary proximity to them has six limbs. Given that the hox gene set is relatively stable, creatures so closely related that they can attain near-telepathic communication by mixing the tail ends of their neurons really ought to have a parallel body structure.

The hox gene set is also the reason that we classify mythological creatures like Pegasus, the flying horse; gryphons, which are half lion–half eagle; and dragons, which have four legs and a pair of wings, as “chimeras,” or impossible animals. The hox gene set simply doesn’t allow for mashups of six-limbed creatures that closely parallel the known tetrapods. It also forbids angels with two arms, two legs, and a pair of wings. All of them are violations of basic body structure.

We still have a lot to learn about fetal development. And certainly the hox gene set deserves more study. But I find it fascinating that the process of going from a single cell to a complex organism passes through a multi-layered sphere that then folds inward and outward like a piece of origami. And it’s a bit chilling to understand that we all think and feel with cells that originate in our skin.

1. See Learning as a Form of Evolution from December 10, 2017.

Sunday, August 12, 2018

Keeping Busy

Storyteller

Storyteller in a Turkish coffee house

We human beings are endlessly concerned with finding our “purpose” in life. It’s a question that faces a child from the first time he or she is asked “What do you want to be when you grow up?” Answering “I just want to be” is not considered sufficient, although it’s the answer that every other life form, every bacterium, plant, and animal on this planet has for the question.

Biologists define life with a number of different characteristics. First is cellular organization—any organism, even a one-celled prokaryote, has an arrangement of pieces and parts, systems and subsystems, that enable it to function. Second is reproduction—it survives for a time and then divides into or buds off daughter cells, or joins with a complementary partner to form a new organism sharing the traits of each. Third is metabolism—it ingests nutrients such as proteins and carbohydrates, or in the case of plants, minerals and sunlight, and excretes waste products. Fourth is homeostasis—it tends to maintain a stable internal environment and seeks to maintain a stable external environment. Fifth is heredity—it can trace an ancestry based on changes through mutation from its parent cell or organism. Sixth is response to stimuli—it senses and reacts to its environment, moving toward light or nutrients or prey, avoiding predators or unfavorable conditions. Seventh is growth and development—the result of that heredity and metabolism is successful accumulation of resources and changes in structure. Eighth is adaptation through evolution—while the individual may not always change in response to its environment, the hereditary line changes through natural mutations that enable some future individuals, but not necessarily all of them, to survive.

These characteristics are not immutable like the laws of physics. Bacteria don’t react to their environment as readily as a gazelle being chased by a leopard. And not every individual successfully reproduces. Also, some of the characteristics listed above are combined on other biologists’ lists; heredity, for example, is folded into evolution. But the principle is the same: life reacts to its environment in a way that, say, a stone weathering on a mountainside does not.

For every other species on Earth, this is enough. My dog does not question her life. She does not attempt to be something other than a part of the situation in which she finds herself. This is a shame, really, because in an earlier age of the world she would have been hunting small mammals, finding and mating with a male dog, digging a den and giving birth to litters of puppies, and only occasionally getting to lie in the sun in contentment. It would have been an active life full of interesting activities with occasional moments of terror. As it is, she is an adjunct to my household and has the primary function of nuzzling my hand when she wants something and having her coat stroked and hearing soothing words when I choose to give her attention—or feeling the tugs of the brush and the terror of the toenail cutter when I groom her. She won’t mate or reproduce because that potential was surgically removed at the shelter where I found her. So her life is reduced to eating the food that I put down for her, exercising her excretory functions only when I take her for a walk, and otherwise lying in the sun or on a cushion under my desk, waiting for something to happen. But it’s a life.

Human beings would go mad in this situation. We cannot be kept as pets—or not most of us, and not the best of us. And therein lies one of the basic problems of our modern world.

For a million years or more, our hominid ancestors lived as hunter-gatherers. Life was a struggle. We lived from one animal kill to the next, from one berry bush to the next. And when the seasons changed and the streams dried up, we suffered. We mated according to our hormones and our opportunities. We carried our feeble young along on the trail by instinct alone, not dreaming of a different or better life for them. We had an existence prescribed for us by circumstance, full of interesting if repetitive activities with occasional moments of terror. No one among this primitive species—or almost no one, surely—looked up into the sky at night and wondered about the Moon and the stars and what they might be or mean. Almost no one asked if there might be any other purpose to life. Everyone was just too busy surviving to ask such stupid questions.1

All of that started to change when human beings settled down in the fertile river valleys, planted crops and tended domesticated animals, invented city life with its artificial hierarchies and its wonder at the Moon and stars and what supernatural beings might lurk behind them. We suddenly had more food—most of the time—than one person could hunt or gather and eat by him- or herself. We had an unfamiliar condition called abundance. And we could indulge the pastimes of people who did not directly produce food, shelter, or clothing and yet still wanted to eat, sleep indoors, and cover their own nakedness. We had room for priests, shamans, storytellers, tax collectors, and other government officials. We began having a civilization and all of its questions.

Things have only gotten better—or worse, depending on your point of view—with the advent of science, technology, and modern methods of agriculture, production, and distribution. Where the labor of one person on the soil might once, in that fertile river valley, have supported two or three more people in the nearby town, now the labor of one or two people plus a cohort of robotic machines and systems supports a hundred more. Working to stay alive and wondering where your next meal is coming from are no longer the primary concerns of most people in the Western and developed countries.

Physical needs have been replaced in our modern society by existential needs. A person who eats, lives in, and wears the products of other people’s labor has to question his or her own existence, no matter how he or she acquired the dollars, credits, or other forms of exchange to pay for those goods and services. More importantly, without the requirement of spending every waking moment on fulfilling those physical needs, what is the person going to do just to keep busy? The question “What are you going to be when you grow up?” becomes “What are you doing here in the first place?”

Some people have a specific answer to that question. They are usually the humans lucky enough to be born into a family with a tradition of productivity: the family farm, the family business, or a profession followed by parents and grandparents such as medicine, law, or engineering. These family situations set a child’s mind in a pattern of work, responsibility, and obligation.

Many people transfer the question of personal purpose to a higher authority. They know they are valuable and worth the food they eat, the shelter they inhabit, and the clothes they wear because their deity sets apart all human life as having such value. What they do in their day-to-day occupation or their role as homemaker and caregiver is secondary to this important and holy purpose.2

My own role, which I think came about from my maternal grandfather’s love of books and my own father’s lifelong interest in reading, is that of perpetual student, then interpreter and explainer of life and the world, and finally storyteller. The family thought that, with my facility for languages, I would become a lawyer, like that same grandfather, but I lacked the aggressive instinct for courtroom battle. Instead, I became fascinated with stories themselves, with fictions that make more sense of the world than the daily lives we all encounter, with their power to sum up and explain the human condition. I spent my high school and college years learning the literature of my culture as an English major. This was not just the language but its use in the business of transmitting personal and cultural experience. I worked my entire professional life as a communicator. First, I was a book editor and technical editor, helping authors and engineers tell their stories in a coherent and pleasing manner. Then I was a technical writer, a speechwriter, and an internal communicator, telling about and explaining the business—whatever business I found myself in: engineering company, public utility, pharmaceutical company, or maker of genetic analysis equipment—to its operators and its other employees.

And all the while I knew that I was peripheral to that corporation and to society as a whole. The publishing business, in which I was a direct contributor to the end product, is a nice-to-have in a civilized society but not need-to-have in the way that farmers, carpenters and masons, weavers and tailors, and the truck drivers that move their products to market are necessary to life. As a technical writer and internal communicator, I was not even central to the business function but a convenience to the employees who do the actual work and the managers who want to see it continue. As a novelist, I might directly bring my readers moments of interest and even joy—or at least a release from tedium while waiting for a bus—but I am not central to their lives.

I don’t regret any of this, and performing these peripheral functions has paid me well over the years. I’m one of the living examples that an English major does not necessarily have to teach or ask “Do you want fries with that?” But I also know that my function in society has not been critical to its operation. If I had disappeared years ago, no one would have starved, been made homeless, or gone naked to the elements. And when the end finally does come, I will know that my life has been an elaborate and complicated form of keeping busy.

But that’s more than some people have. And it may be better than chasing rabbits with a sharpened stick or pulling berries off a bush for a living. At least I never had to run from a leopard, either.

1. So you can imagine that the subjunctive mood was not a part of their speech patterns. There’s not a lot of need for expressing potential or counterfactual conditions—shoulds, woulds, coulds, oughts—when you’re chasing a rabbit with a sharpened stick or tasting a new and unfamiliar kind of berry for the first time. You do or you die.

2. Not having a personal god—nor even an abstract idea of any god—I cannot rely on this definition of personal value. Unless the thoughts of my brain are made real by writing them down and preserving them in my function as a student, explainer, interpreter, and ultimately a storyteller, I have no more personal dignity or right to life than a bacterium or a dung beetle.

Sunday, August 5, 2018

New Kids on the Block

Spiral galaxy

The question of finding intelligent life in the universe was most concretely addressed by two scientists years ago. One was Enrico Fermi and his paradox, which to paraphrase asks: If life is so common among the billions of stars in the trillions of galaxies, and if space travel is even relatively achievable, then where is everybody? The other was Frank Drake and his equation that attempts to quantify—or at least set the framework for quantifying—the likelihood of intelligent life appearing elsewhere in the universe.
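Since the Drake equation is just a chain of multiplied factors, it fits in a few lines of Python. The form of the equation is Drake’s; every parameter value below is a placeholder assumption, which is rather the point: the equation frames the guesswork without settling it.

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L.
# The functional form is Drake's; all the input values are assumptions.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of currently detectable civilizations."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.5,  # star formation rate, stars per year (assumed)
    f_p=0.9,     # fraction of stars with planets (assumed)
    n_e=0.3,     # habitable planets per planetary system (assumed)
    f_l=0.1,     # fraction of those on which life appears (assumed)
    f_i=0.01,    # fraction of those that develop intelligence (assumed)
    f_c=0.1,     # fraction of those that become detectable (assumed)
    L=10_000,    # years a civilization remains detectable (assumed)
)
print(f"N ~ {N:.2f} detectable civilizations in the galaxy")
```

With these guesses, N comes out to about 0.4: less than one detectable neighbor in the whole galaxy at any given time. Nudge the more pessimistic factors upward and N runs into the thousands. The equation doesn’t answer Fermi; it only organizes our ignorance.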

Both of them miss a point that is, I think, obvious to anyone who thinks about it deeply enough: the universe is really big and really old.1 We ourselves have only been around for comparatively the last two seconds of all that time. So far, we’ve managed to explore the skin of just one local planet and populate only a fraction of it in any numbers. And we’ve placed a couple of dozen robot probes on the surface of, or in orbit around, the other planets in our own system. We have accomplished the latter in a remarkably short time, considering the age of our own species. In terms of the age of the universe, we’ve gone outside the Earth’s atmosphere in just the last microsecond.

Okay then, time scales. The universe, according to our best guesses and measurements, is about thirteen billion years old. Considering its vast size, that’s a bit younger than it should be if the whole works has been expanding at even roughly the speed of light for all that time since a putative Big Bang. To account for this, astronomers posit an “inflationary” period right after the dense monoparticle containing everything we can see, know about, or infer exploded, so that it all expanded faster than light speed just to catch up with modern observations. But I digress …

Our own solar system is approximately four and a half billion years old. That is, the Sun and the planets started condensing out of a cloud of dust and gas some nine billion years after the whole shebang exploded or started expanding exponentially. And that dust and gas was the residue of earlier stars that had lived and died, vomiting up a rich mixture of hydrogen, helium, and every other type of atom on our periodic table. We live in a second- or third-generation—possibly a fourth-generation—star system and are richer for it. But I digress …

That curious reversal of entropy that we call “life” actually appeared on the surface of this third planet—one of only three in the habitable zone where water can appear as a liquid rather than a vapor or a solid—soon after the bombardment of planetoids and meteorites stopped and the planet’s surface had cooled enough to be solid and entertain pools of liquid. The first life was nothing remarkable and not really visible to the naked eye: single-celled bacteria and blue-green algae that processed the chemicals available in their environment and the sunlight streaming down by using a relatively sophisticated DNA-RNA-protein coding system—or perhaps just RNA-protein coding to begin with, because the DNA form may have developed a bit later. These simple creatures worked to—or I should say “evolved to”—exploit the existing chemistry of the planet and replace its original atmosphere of carbon dioxide, water vapor, ammonia, and methane with one that was mostly nitrogen and oxygen.

These bacteria and blue-green algae, which would only be really visible as mats of colored slime on the edges of the seashore, persisted for two or three billion years. Over that time, they developed environmental niches and separate species, but they still remained simple, mindless cells that processed chemicals and sunlight, grew and divided, and reworked the planet’s surface and atmosphere. It wasn’t until about five hundred million years ago, or four billion years after the solar system formed, that the first multi-celled organisms appeared. This required variable expression of that DNA-RNA-protein coding system and the development of cell types that were different from each other but still originated with a single cell and worked together as a single system. This generative period is called the Cambrian Explosion because suddenly the Earth—or at least its seas—had plants and animals that a visitor from beyond the stars could recognize with the naked eye and avoid stepping on, if he-she-it were wading in a tide pool or shallow lake. The animals and plants didn’t move up onto the land masses until about a hundred and fifty million years later, during the Devonian period.

Here we’re still talking about relatively mindless beasts: fishlike vertebrates and scorpionlike arthropods who spent their entire lives grazing or hunting and reproducing in kind. Even when the land animals developed sturdy legs and grew to the size of houses, such as the dinosaurs, they were still just predators and prey, fighting for survival, reproducing, and not much else. It wasn’t until sixty-five million years ago, when the Chicxulub asteroid wiped the slate of life nearly clean and allowed little mouselike nocturnal creatures, the earliest mammals, to survive and develop, that we got the sort of brains and intelligences that we recognize in primates, whales, dolphins, elephants, and ourselves. And even then, any visitor to this planet would not have found much in the way of interest, nothing to report home.

We humanoids of the genus Homo didn’t come down from the trees and out onto the grasslands until about one or two million years ago. We didn’t develop into the sapiens, or “wise,” species until about two or three hundred thousand years ago. And even then we were in competition with the neanderthalensis species, named for a valley in Germany where their bones were first discovered. These other Homo species may or may not have been as developed intellectually as our direct ancestors. But any visitor from another star system would have found both the neanderthalensis species and our sapiens ancestors picking berries and killing slower-moving animals with sharp sticks and edged rocks, then sucking the marrow from their bones. These collective ancestors probably used a decipherable language, and some of them may have carved a whistle or rudimentary flute from a shinbone in an attempt at making music. But for the most part, you had to look and listen closely to distinguish them from a troop of chimpanzees or baboons. And still, Homo sapiens—the species with such future promise—was pitifully thin on the ground.

It wasn’t until about ten thousand years ago that our ancestors began gathering in groups, usually along fertile river valleys, to practice farming, domesticate animals, and build shelters larger than a tent made of saplings and animal skins. It wasn’t until five thousand years ago that they began to think about baking clay forms into useful pots, smelting metals for better tools and weapons, and writing down their grunts and squeaks as an encoded language that would last longer than this morning’s conversation.

It wasn’t until a hundred and seventeen years ago that Guglielmo Marconi sent the first long-distance radio transmission—across the Atlantic Ocean—and that message was coded in a series of dots and dashes. Even in its heyday, radio was a broadcast system rather than a beamcast, which meant that its signal strength dissipated under the inverse square law. The signal from a 50,000-watt radio station in Kansas, belting out Patsy Cline in the 1950s, would be something less than a whisper by the time the wave front passed Saturn, and hardly louder than a bug fart by the time it got out of the Oort Cloud on its way to the stars. That famous first television broadcast of Adolf Hitler opening the Berlin Olympic Games in 1936 would not have fared any better. And these signals would then have had to compete with the blasts of radio noise coming from our own Sun. And now that so much of our communications is carried by coax and fiber-optic cables, and beamed down from satellites in near-Earth orbit, our planet will have gone virtually dark within our local spiral arm of the Milky Way.
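The whisper claim is easy to check with the inverse-square law. Here is a minimal sketch; the station power is from the text, and the distances are rough figures I’ve supplied.

```python
import math

# An isotropic broadcast spreads over a sphere, so the flux at distance d
# is P / (4 * pi * d**2). Distances below are rough assumptions.

POWER_W = 5.0e4          # the 50,000-watt station from the text
SATURN_M = 1.4e12        # ~9.5 astronomical units, in meters
OORT_M = 7.5e15          # ~0.8 light-year, out toward the Oort Cloud

def flux(power_w: float, distance_m: float) -> float:
    """Signal flux in watts per square meter at the given distance."""
    return power_w / (4.0 * math.pi * distance_m ** 2)

for label, d in (("Saturn", SATURN_M), ("Oort Cloud", OORT_M)):
    print(f"at {label}: {flux(POWER_W, d):.2e} W/m^2")
```

The result at Saturn is on the order of 10⁻²¹ watts per square meter, and out toward the Oort Cloud on the order of 10⁻²⁹. “Whisper” is, if anything, generous.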

So, even if the universe is crawling with life at the stage of bacteria and blue-green algae, or shambling along with creatures that resemble the dinosaurs or our own Homo habilis, it’s not listening for us and not able to visit us. Even if an advanced species has developed radio sets and antennas with which to search the skies—and remember, we didn’t develop radio telescopes until the engineers at Bell Labs tried to establish the source of bothersome static on long-distance radio-telephone calls in the 1930s, about eighty years ago—they wouldn’t be likely to hear anything that sends them cruising toward Earth.

And did I mention that space is really, really big? The nearest star system is four light years away, which means that even if we could travel at light speed—and our mathematics says we can’t—it would take four years to make the one-way voyage, even if we had the proper technologies for propulsion and life support. Forget about science-fiction tropes like warp drives, wormholes, matter-antimatter energy sources, and other forms of magic. Going to the stars will still be a civilizational undertaking—for us and for any other species out there. Stellar empires might grow in the minds of speculative writers and nuclear physicists like Fermi, but establishing one and holding it together as an enterprise of cultural and economic exchange, under the conditions of generational time lag presented by the distances involved, would be a daunting and perhaps fruitless task.

Exploring the cosmos just to see if some other planetary system has developed something more than slime molds and dinosaurs, or even humanoids knocking over pigs and butchering them with sharp rocks, would be a remarkably altruistic or academic pursuit costing a huge percentage of a planet’s and a culture’s resources. Even beaming signals out in all directions in the hope of one day getting a response that made any kind of sense would be a significant undertaking.

Where are all the other intelligent species? I think they’re out there, but they’ve got better things to do than visit us.

1. I know, fellow writers, “really” is an adverb and we should eschew the use of adverbs. When you want to write “really” or “very,” just write “damn,” because it means the same thing. Still, I could use “really” about four times in succession to describe the universe’s magnitude and about twice to describe its age—but that would just sound silly. Hence, the rest of this essay.

Sunday, July 29, 2018

Living Behind the Veil

Veiled face

Surprise fact: We are all, every one of us, going to die someday. You, personally, may harbor the secret belief that you are that rare exception, the fairy changeling replaced in the cradle soon after birth, who will live on and on, never changing, never growing old, and eventually mourning your friends and family long dead a hundred, a thousand, ten thou—or forever from now. But, as Damon Runyon would say, “That’s not the way to bet.”

We all know that we’re going to die someday. That is the curse of being a human with an intellect capable of self-awareness—the perception of oneself as a separate entity in relation to time, space, and the world around us. Dogs and cats don’t have this awareness, and so they live in the moment, never questioning tomorrow. Dolphins, whales, and elephants may have it, and so they might recognize themselves as separate beings with pasts and futures that can be considered, probed, and defined intellectually and emotionally. This awareness seems to be the dividing line between the intelligence we recognize in other human beings and the degrees of relative awareness and responsiveness we see in other animals. It is, in my opinion, the first step in defining the human condition.

With this self-awareness, we can look at a dead bird on the sidewalk, a blue jay that used to fly by our window in the morning, or see a rotted log in the forest, a tree in which we once carved our initials inside a heart, and think: “This is death. This is a sign of time’s passage. This is what comes at the end of life. One day this will be me.” It is the memento mori, “remember that you are mortal.” It is the foundation of both human joy and grief: joy in the moment of living, and grief with the knowledge of life’s passing.

But in everyday life a veil descends on the human mind. We put away these death thoughts. We let our hopes—or that secret belief in our changeling exception—grow to dominate our thinking about life and the future. And we succumb to the persistent distractions of our work and hobbies, our love and other pleasures, our expectations and plans, and the daily round of whatever we have to do next. This is another Jedi Mind Trick.1

Are we fooling ourselves about death? Yes, probably, in a strict-constructionist sense. Death is inevitable: for you and me personally, when we grow old and our brains or our bodies outlive their usefulness; for this planet eventually, in five billion years or so, when the Sun blows up and blasts away the Earth’s surface; and for the universe itself ultimately, when the expansion of dark energy smears space into a tenuous wisp of dissociated molecules, or the process reverts under the influence of some kind of dark gravity that contracts space back into a tiny, dense spot. Change is inevitable. And part of change is the possibility of ending.

But the fooling—the foolish dream of living on, as if death itself doesn’t matter—is necessary to becoming a fully functioning human being. Otherwise, the first time we learned as a child that things die—the passing of that pet goldfish or hamster or, in my case, a parakeet—we would totally absorb the lesson that life is futile. We would collapse into the fetal position, take only shallow and shuddering breaths, and never rise to hope again.

I’ve mentioned elsewhere2 that one definition of an adult is someone who has come to grips with the knowledge that one day he or she is going to die. An adult doesn’t dwell on that fact, like a simpering child, but instead uses it as a measure of self-worth. Knowing that life—at least the life you know on this planet in these circumstances, aside from any hope you might have of an afterlife in a heaven or hell—is finite puts pressure on you to make the best of it. You know that your years, days—perhaps even your minutes—are counting down on an invisible clock somewhere, and this thought gives you a reason to get busy and make the most of them. An adult knows that life’s ultimate meaning is not found in the words of some ancient holy man, or the benevolence of a god up in the sky or in some other dimension, or written in some sacred book. Instead, the meaning that each person finds in life is the subject of reflection and choice, of striving and sometimes sacrifice, different for me than for you, and a source of either personal satisfaction or perpetual desolation.

Animals don’t know any of this. They can’t even think of this. For them, life simply is. The striving is merely glandular, and the sacrifice is entirely circumstantial. But human beings can lift the veil, look at death, and make a sober, thoughtful choice. And that is personal power.

1. See The Original Jedi Mind Trick from May 13, 2018.

2. See In Defense of Denial from March 30, 2014.

Sunday, July 22, 2018

Seeing Ourselves in Alien Eyes

Orange polygon

The following is going to sound awfully obvious, but sometimes I just have to work through an idea to get to a basic understanding. The obvious part is that we humans as a species have never experienced verifiable contact with another equivalently intelligent life form in the universe. We’ve never met any extraterrestrials that we can conclusively say exist.

We have met other species on this planet that may or may not possess intelligence equivalent to that of an average human being. Various species of whale and dolphin, certain great apes, and most elephants have intellectual powers that we can intuitively appreciate. One gorilla, Koko, was taught sign language and was able to use it to converse with her handlers at the level of a human child. For the others—whales, dolphins, elephants—we know that they communicate among themselves, but we cannot interpret or reconstruct their language. We can communicate with them by means of visual signals and spoken commands in human language that they appear to understand. But so far the communication is all one-sided: the human trainer commands, and the animal responds. This exchange is not limited to demonstrably intelligent animals because, after all, my dog is attentive, watches my face and gestures, and responds to certain spoken words.

This tells me that, once we meet any extraterrestrials, the communication problem is going to be larger than any science fiction writer appreciates. If we can’t interpret the language of whales and dolphins, although we study them intently, we’re going to have an even harder time with a species that does not originate on Earth in an environment that we understand and into which we can project their existential issues. But, for the purposes of a good story, we writers will overlook the obvious and allow for workable communication—usually based on the aliens having prepared themselves before coming here by studying our radio and television signals broadcast into space. So the imagined travelers are better at unraveling an unknown language than the human scientists working on dolphin speech with the animals themselves conveniently at hand.

All of this is a long-winded way of saying that any aliens we eventually meet aren’t going to be like us and probably not like anything we can even imagine. If they have advanced in their own evolutionary pattern1 beyond the level of single-celled organisms—which were the definition of life on Earth for about three and a half billion years, before the Cambrian Explosion of multi-celled creatures—then they will take shapes appropriate to their native environment and have brains designed to meet conditions, opportunities, and problems we can only begin to anticipate. Figuring out what a dolphin, who lives in Earth’s own warm and gentle seas, thinks and wants to communicate is a snap in comparison with understanding our first extraterrestrials.

While we accept the nature of the intelligent animals we find on Earth as part of our everyday environment, the question of what intelligent aliens from elsewhere in the solar system or the galaxy will be like stumps us. Science fiction stories—here I’m examining those captured in recent movies, more than in books, but the premise still holds—have long been based on various assumptions, and to me they serve as a kind of Rorschach test of the human psyche and spirit. Even “documented” UFO encounters are puzzling and open to interpretation.

Let’s start with the benign depictions: the bumbling gardener-cum-magician of E.T. the Extra-Terrestrial and the mysterious and apparently powerful but ultimately childlike creatures of Close Encounters of the Third Kind. This is what we might hope visitors from the stars would be. They are scientists and explorers; they might gather our plants—or our long-lost airplanes and missing people—for study, but they basically mean us no harm. These are the attitudes and intentions we think—we hope—human beings will adopt when we have the capability of traveling out among the stars. We would explore under a Prime Directive of non-interference, like the members of Star Fleet in all the Star Trek series.

But we fear that any intelligent aliens from beyond the solar system will have what we would perceive as evil intentions directed against us. The ravaging, life-sized army ants of Independence Day and the vast, cool intellects of a dying Mars, who look across the void to a vibrant green Earth in War of the Worlds, come not to study and to cherish but to colonize and destroy. We fear this because it is the way human beings have actually behaved over the millennia. The Romans did not walk into primitive Gaul, Germania, and Britannia—or the more ancient and advanced civilizations of the Near East—to become teachers or helpers. They came to colonize, plunder, and control. The Spanish and Portuguese, and later the French and the English, all came to the New World with the same intentions. The indigenous peoples these Europeans discovered—Stone Age tribes who lacked the wheel, industrial-grade metals, and even horses—were either a nuisance to be tolerated at a distance, out of mind somewhere off in the forest or on the far plains; slaves to be traded back to the old country, or ground under in building an empire in the new; or enemies to be simply exterminated.

Then there is our innermost hope: that the intelligences we find will be like gods, such as the vanished beings who left behind a marble mask and mausoleum that is also a signpost to a glorious future somewhere else in Mission to Mars; the unseen but all-powerful intellects that actually control events in our part of the Milky Way galaxy in Contact and 2001: A Space Odyssey; and the governing forces that evaluate humanity and find it wanting in the recent remake of The Day the Earth Stood Still. We pray to such an all-powerful, all-knowing, benignly loving—or sternly judging—being in our Earthly churches, mosques, and temples. We secretly hope that He, She, or They really exist and have a dominant hand in ordering the universe we will encounter out there beyond our atmosphere.

And finally, there are the mysterious, unreadable, and detached intellects driving the ships, or light patterns, or holographic images that we humans routinely document as UFO sightings. If these events are not wishful thinking or cases of mistaken identity regarding clouds and atmospheric effects—that is, if they actually exist—they are still open to interpretation. Who- or whatever is piloting those aerial phenomena seems to be uninterested in human beings per se, except when slaughtering our cattle, abducting and probing isolated agricultural workers, and leaving cryptic markings in our wheat fields. Analysis of the reported sightings describes an interesting pattern: UFOs operating in the vicinity of passenger airliners tend to behave rationally, maintain a margin of safety, and simply allow themselves to be observed; while UFOs that encounter military jets act more aggressively, play with them as if testing their aeronautical capabilities, and sometimes participate in mock dogfights that never seem to harm the aliens, even when the human pilots are firing live ammunition at them. So the UFOs, which seem to be uninvolved with us, actually have some internal interpretation of and intention regarding the encounters, even if we cannot understand them.

Whether or not the universe holds other intellects, other people for us to meet—and I certainly hope it does—the expectations built up over the last hundred years of imagination and speculative fiction have been a test of our own reaction to the nature of intellect itself: scholar, marauder, god, or mindless phenomenon. Take your pick. One day we are sure to find out.

1. And yes, I believe evolution exists out among the stars. It is the most obvious and elegant way for matter and energy to achieve that curious reversal of entropy we call life, applicable to any environment containing a liquid medium, without the intervention of a Supreme Designer. Alien evolution will probably be based on chemistry, as is ours, but it probably won’t be based on the DNA-RNA-protein coding system we use—not unless life on Earth was seeded here from another solar system, or group of systems, four billion years ago. And that’s a possibility I’m still pondering.

Sunday, July 15, 2018

Shortcuts to Reality

Robot juggling

Sometimes we don’t see life when it’s right in front of our noses. That’s part of the way our minds work. And combating this loss of perception is one of the goals of Zen mindfulness: to enable us to confront reality as we experience it, not brush past it with mind tricks and traps.

One of the mind traps is the human tendency to develop daily routines. Routines like shaving, brushing teeth, washing dishes, and so on—necessary business that we all just have to get done—help us streamline our lives. The eyes move, the hands move, and the work proceeds without our having to think about and plan for each separate action. It’s an efficient way to move through the day, but in the exercise of these routines, we become more like “meat robots” than perceiving human beings.

Sometimes, when I’m brushing my teeth or doing another daily routine, I actually lose track of time. I use an electric toothbrush, which fortunately has a thirty-second cycle and beeps at me. This reminds me to move from one side of my mouth to the other, then from the lower jaw to the upper: the same pattern, timed to the beep, morning, noon, and night. If the thing didn’t make that noise, then I wouldn’t know how long I might brush the same set of teeth, mechanically, blindly, without thinking about it, or perhaps thinking about something else entirely. I might also forget and leave one part of my mouth not brushed at all.

I can lose track of time while driving, too. The motions are automatic: watch the road and center the car in the lane; locate other cars in the pattern all around me; scan the mirrors left, right, and center; watch the road; locate cars, for mile after mile. The routine of driving on the highway, without the distractions of having to look for a street sign or watch for an upcoming exit, can bring on “highway hypnosis,” where the mind is lost to reality. Sometimes I can become so fixed in the mechanics that I become separated from the very things I’m supposed to be watching for: the car next to me that is actually moving into my lane; the light up ahead that hasn’t seemed to change for a few moments, was green the last time I looked, and—holy cow!—it now is red, not green!

Even routines that are supposed to be Zen-like and to free the mind, like doing karate exercises, can become perceptual traps. I’ve been doing the same Isshinryu katas for almost fifty years now. What I’m doing at this late age is not so much learning the moves and committing them to somatic memory as keeping my joints limber, my balance stable, and my muscles supple and strong. If I ever need to actually fight someone, I’m pretty sure I will execute the punch or kick correctly per the forms. In the meantime, I proceed through the motions, the same motions, the patterns I learned back in college, whole regimented sets of them in the same order, during workout sessions two or three times a week.

Lately, I have noticed that I will start a kata and then begin thinking about something else: a plot point in the book I’m working on, how I’m going to react in an interpersonal situation, or some decision I have to make. My body will still be moving, but I won’t be aware of it. And then ten or twenty seconds later I will “wake up,” having mentally come to a decision on the issue occupying my mind, and realize that I’m ten or fifteen moves further into the exercise—or approaching the end—with no awareness of whether I have performed the intervening moves correctly, made the right number of repeats and variations, or anything that’s been going on in the room for those passing seconds. The routine that is supposed to heighten awareness of reality has actually dulled it through repetition.

Another mind trap is the labels we use in our daily lives in place of active and mindful attention to what we see, hear, think, and feel. The human mind cannot actually survive without using labels in place of their more complicated referents, at least in some cases. But depending on them too heavily can insulate us within our own minds and separate us from life.

The sciences have a rich history of assigning labels to new phenomena and processes—so much so that some people think the study of biology, chemistry, and physics is nothing more than an exercise in label manipulation. Because I try to keep up with the fields that interest me, I subscribe to Science and Nature. But I freely admit that some of the article titles—and even the abstracts, which are supposed to offer a higher-level view and be more reader-friendly—baffle me. “Multivalent counterions diminish the lubricity of polyelectrolyte brushes.” “Second Chern number of quantum-simulated non-Abelian Yang monopole.” “Enantioselective remote meta-C-H arylation and alkylation via a chiral transient mediator.” I am not making these up: they are three article titles from recent issues. Even if I recognize some of the words, I can guess that they are not being used in the way that, say, an English major would understand them. Sometimes I can only guess the field of science they are discussing. But what is life without mysteries?

Actually, the process of learning anything is a matter of, first, understanding the underlying nature of a principle, object, event, or process—the referent—and second, assigning proper terms and labels to those concrete understandings so that we can communicate about them. Otherwise, we end up talking about “the thing that does the thing to the thing”—or words to that effect. First you understand the ideas of dichotomy and duality, and then you assign the labels “two” and “twain” to the things they represent.

But the more you bandy these labels about, the more risk you run of losing sight of the wonder you felt when you first understood the thing itself. The shortcut does not lead you toward reality but away from it.1 Sometimes you think you know the thing when you only know the label. The name is not the reality, in the same way that following a daily routine is not really living.

One of the differences between human beings and the artificial intelligences, robots, and automated systems that we are starting to build today—and which will become ever more important in years to come—is this access to reality. Humans take in a wide range of sense impressions and put them together in novel ways. Having that “Aha!” moment of clarity, the epiphany, the sudden understanding, is a uniquely human thing. Robots and software systems don’t perceive reality except as it affects or interferes with their programming. They are focused on the parameters and processes for which they were designed. That design may encompass a wide field of view and a breathtaking array of sensory inputs and programmed contingencies. But it is still a focus, a built-in routine, and a label for which there may not be an understood referent. The robot does what it was designed to do. The automated system processes the parameters that are given to it, or for which it has cameras, microphones, haptics, and strain gauges designed to receive certain signals.
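To caricature the difference in code: a control program responds only to the signal types its designers anticipated and silently discards everything else. Here is a toy sketch (the sensor names and the threshold are invented for illustration, not drawn from any real system):

```python
# A toy control loop: the system reacts only to signals it was
# designed for and ignores everything else. Sensor names and the
# threshold are invented for illustration.
HANDLED_SIGNALS = {"camera", "microphone", "strain_gauge"}

def respond(signal_type: str, value: float) -> str:
    if signal_type not in HANDLED_SIGNALS:
        return "ignored"           # no label, no referent, no response
    if signal_type == "strain_gauge" and value > 0.8:
        return "halt motors"       # the designed-in contingency
    return "continue routine"

print(respond("strain_gauge", 0.9))     # -> halt motors
print(respond("rustle_in_grass", 1.0))  # -> ignored: outside the design
```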

A robot brain is not designed to hear a rustle in the grass and suspect it may be a tiger about to pounce. A mechanical brain is not designed to read meaning into patterns, like the sodden tea leaves in a cup or the glints of candlelight in a crystal. A robot is not susceptible to the wonder and mystery of the life around it. But we are.

1. And sometimes that is intentional. There are scientists in any field who speak in code words simply for the delight of sounding more sophisticated and knowledgeable about the subject than those who speak clearly. Although, on the other hand, there are subjects that can’t be approached without a knowledge of the nomenclature. You can imagine trying to discuss quantum mechanics and the discovery of the Higgs boson if you don’t have a reference for the nature of subatomic particles, concepts about mass, and the theories of this Higgs fellow.

Sunday, July 8, 2018

Still Questioning Gravity

General relativity

With the usual caveats,1 and at risk of showing my great ignorance, I still don’t understand how gravity works. I’ve written about this before,2 and I read every popular explanation I can find, because the math-dense version is generally beyond me in all sorts of dimensions. And yet … some things about general relativity and gravity just don’t make sense to me.

Aristotle, the polymath and chief explainer of things scientific in the Greek golden age, thought gravity was simply the way that things find their own natural place. It’s linked to the concept expressed when we say that water seeks its lowest level. That is, gravity and its action on objects like draining pipes, falling stones, and stumbling people is merely a characteristic of the object itself. Water flows downhill, rocks tumble off cliffs, and you fall on your face if you trip, because that’s where the water, the rock, and you actually want to belong. That is, it’s the water’s and the rock’s fault—and yours.

Isaac Newton, who was the premier polymath of the 17th century, thought gravity was a force. Something about massive objects like the Sun and the Earth exerts a force to attract all lesser bodies, such as falling apples and those same stumbling humans. That is, gravity is a characteristic of the ground, not of the falling object itself. That is, it’s the Earth’s fault.

Newton’s concept of gravity worked well for a couple of hundred years and satisfied most of the observations of astronomy, governing the motions of stars and planets. The one problem was that Newton’s force of gravity was thought to be instantaneous: it was action at a distance not governed by time. So, if the Sun were to explode—no, that would still leave the equivalent mass in rapidly dispersing hydrogen, helium, and fusion products at the center of the solar system—or rather, the Sun could be magically “disappeared” from its central position, the Earth and the other planets would immediately head off in a straight line tangent to their normal orbits that had previously been shaped by gravity. In reality, at the speed of light—the limit governing all actions in the universe—the effects of any such instantaneous removal would take the same eight minutes to be felt at Earth’s orbit that it normally takes light from the Sun to reach us.
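For the curious, that eight-minute figure is easy to check. A quick back-of-the-envelope sketch in Python, using standard rounded values for the Earth-Sun distance and the speed of light (my assumed constants, not anything from the text):

```python
# Light-travel time from the Sun to the Earth.
# Standard rounded constants (assumed reference values).
AU_METERS = 1.496e11        # mean Earth-Sun distance, in meters
C_METERS_PER_SEC = 2.998e8  # speed of light, in meters per second

seconds = AU_METERS / C_METERS_PER_SEC
print(f"{seconds:.0f} seconds, or {seconds / 60:.1f} minutes")
# prints: 499 seconds, or 8.3 minutes
```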

Albert Einstein, the polymath of the 20th century, rejected the idea of a “force” and, through his theory of general relativity, defined the effects of gravity as being a curvature in space and time. That is, massive objects bend space and slow down time. And the more massive the object, the more the surrounding space and time—which Einstein conceived as simply different dimensions of the same reality and called altogether “spacetime”—are curved. That is, it’s the fault of the geometry of space and time themselves.

In this conception, the idea of force and how quickly it might act or react is irrelevant: the curvature exists so long as the mass is present. And, of course, while the star might explode and scatter its mass, nothing known to physics is going to remove that mass, magically or otherwise, at any speed greater than, or in any timeframe shorter than, the speed of light, c.

As every science popularizer is quick to point out, Einstein’s concept of general relativity didn’t prove Newton “wrong.” Einstein’s concept of spacetime curvature and the mathematics to support it were just a more refined approach to the problem than a generalized force representing gravity. While Newton’s math worked for most problems in planetary astronomy—being useful, for instance, in calculating a near-Earth orbit or plotting a trip to the Moon—Einstein’s equations gave better answers to more decimal places. Einstein’s math, for example, accounted for the roughly 43 seconds of arc per century in the precession of Mercury’s orbit around the Sun that Newton’s equations could not explain.

Still, and mathematics aside, Einstein and Newton offer very different and irreconcilable conceptions: Newton postulates a force whereby one body acts upon another, like a pitcher hurling a baseball;3 while Einstein postulates the effect that a massive body has on its surroundings, and that effect is present regardless of whether any second body is around to experience it.

For ease of visualization by the layperson, illustrators show the curvature effects of gravity under general relativity with something like a bowling ball sitting on a trampoline and creating a curved depression in the surface—like the illustration here. The trampoline is supposed to represent the “fabric of space.” Of course, the curve is not in just the two dimensions shown for this flat surface but in all three dimensions of space plus a commensurate slowing of time.

I have always had a problem with this usage, even as an analogy, of the word “fabric” to refer to space and time. Space, in all other contexts, is generally accepted as simply being empty. If it has a structure, an internal component that can be bent or warped, then space is not just a form of emptiness but instead is something all its own and separate from the protons, neutrons, electrons, photons, and other particles that exist within it and pass through it. Similarly, if time can be made to slow down, that implies some structure or medium that a nearby mass somehow manipulates. Time is not just the measured passage of events but a thing all its own, separate from those events.

In quantum mechanics, particles have associated fields, and these fields guide the motions of the particles within them. The photon, for example, is not just a tiny, massless “thing”; it is the quantum of the electromagnetic field, which guides the movement of light and the properties of magnetism. Supposedly, in quantum mechanics, there must exist a particle called a “graviton” that has an associated field governing what we experience as gravity. But such a particle has never been discovered.4 If it ever is, we might hope to make our cars fly by blocking the exchange of gravitons with the Earth beneath them. But no one has yet been able to reconcile the concepts and mathematics of quantum mechanics with general relativity. Big is big, small is small, and they don’t seem to play by the same rule sets as currently conceived by the best human minds.

The confusion I have with general relativity and the curvature of spacetime is this: I can understand how curved space and time might alter the direction of a body that is already in motion, such as planet Earth wanting to move in a straight line (like all good inertial objects) but being forced into an elliptical orbit by the mass of the Sun. But what about a body that is not moving with respect to that center of mass? Just sitting or standing on the surface of the Earth, I am—according to general relativity—accelerating toward the center of the planet. But I am not moving with respect to that center. I never get any closer to the center, although I am accelerating toward it at a rate of 9.8 meters—more than 32 feet—per second per second. That would be a pretty good clip if I were moving across the surface of the planet and going faster and faster with every second.
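As an aside, that 9.8-meter figure is just what Newton’s force law predicts for a mass at the Earth’s surface. A minimal sketch, using standard textbook values for the constants (again, my assumed numbers, not anything derived in this post):

```python
# Newton's surface gravity: g = G * M / R^2
# Standard textbook constants (assumed values).
G = 6.674e-11        # gravitational constant, m^3 / (kg * s^2)
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

g = G * M_EARTH / R_EARTH**2
print(f"g = {g:.2f} m/s^2")  # prints: g = 9.82 m/s^2
```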

Sure, the analogy with a trampoline shows a depression that I might be sliding into, like a kid on a sled sliding “down” a hill. But if I am at rest with respect to the center of the planet or another nearby mass, why would I be moving toward it at all? Even if that surrounding space is curved, what … forces me to move down the curve?5

I’ve read explanations that all of this has to do with different and higher orders of geometry. Also, that objects existing in a faster timeframe, such as in the less-curved spacetime further away from a planet or star, will seek to move toward the slower timeframe created by the mass of a large nearby body. Perhaps it all works out with elegant mathematics. But that still leaves the common-sensical question of why an object would prefer, and naturally move toward, a slower timeframe. Isn’t that just a version of Aristotle’s definition of gravity: that things just try to find their natural place?

I don’t mind if there’s math involved. That doesn’t insult or frighten me. But I do mind if the concept is solely based on mathematical equations. If the underpinnings of the universe cannot be explained except through a set of equations, then we run the risk of the ever-inventive and fertile human mind creating an equation that describes a situation without actually explaining it.6

You can write any number of equations, and they may make mathematically perfect sense. I can measure the distance across the continental United States in terms of gallons of milkshake consumed at Dairy Queens along the way. I can relate this function to a traveling body’s metabolism and the ambient temperature, and then link that intake to toilet flushes in restrooms further down the road. I can create elaborate mathematical structures related to distance and dairy products. But they won’t explain anything.

I still don’t understand gravity. And given that we have to fudge around with concepts like “dark matter” to reconcile current conceptions of gravity with the observed motion of stars in galaxies, and with “dark energy” to relate the motions of those galaxies with the size and scale of the universe itself … I don’t think anyone else does, either.

1. I was an English major in college with a minor in karate. The highest level of math I took in high school was Algebra II and Geometry, and I satisfied my college math requirement, as did so many other liberal arts students, with Philosophy I (aka Logic). But since then I’ve been reading continuously in the sciences, particularly physics and astronomy, to support my science fiction writing. My professional life over the years has been to explain the work of engineers and scientists for the lay reader. So, while I am math-challenged, I am neither ignorant of nor uninterested in the subject.

2. See Three Things We Don’t Know about Physics (I) from December 30, 2012, and (II) from January 6, 2013.

3. A force is represented by the most basic equation in physics, f=ma, or “force equals mass times acceleration.” The pitcher’s arm muscles accelerate the 142-gram mass of a baseball from, say, zero miles per hour in his set position to, say, 90 miles per hour—or 132 feet per second—for a fastball at the full extension of his arm at release, which occurs about half a second later. That’s an acceleration—not an exit velocity, but the acceleration needed to achieve it—of 264 feet per second per second. Mass times acceleration. Einstein used a variation of this physics equation to come up with his famous statement about the energy content of matter itself, e=mc².
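For anyone who wants to check the footnote’s arithmetic, here is the same calculation sketched in Python, with a metric conversion tacked on to show the force itself (the half-second release is the footnote’s round figure, not a measurement):

```python
# The fastball example: 90 mph reached over a half-second release.
MPH_TO_FTPS = 5280 / 3600          # miles per hour to feet per second

speed_ftps = 90 * MPH_TO_FTPS      # 132 ft/s, as in the footnote
accel_ftps2 = speed_ftps / 0.5     # 264 ft/s^2, as in the footnote

# f = m * a, worked in metric units for the force.
mass_kg = 0.142                    # the 142-gram baseball
accel_ms2 = accel_ftps2 * 0.3048   # about 80.5 m/s^2
force_newtons = mass_kg * accel_ms2
print(f"{accel_ftps2:.0f} ft/s^2, about {force_newtons:.1f} newtons of force")
# prints: 264 ft/s^2, about 11.4 newtons of force
```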

4. Recently, there was much to-do when the Large Hadron Collider at CERN identified the previously theoretical Higgs boson. This heavy particle, which is not normally found in nature, is the signature of the Higgs field, which is supposed to give matter its mass. This is a different particle from, but might be a kind of precursor to, a graviton. We still have much to learn.

5. I used this question to create a fantasy mechanism for time travel in The Children of Possibility.

6. I can define gravity as the hand of an ever-watchful and invisible little god, call him “Mr. G.” He watches me and every other thing in the universe. If I am sitting, he presses gently on my lap so that I don’t float away from the seat of the chair. If I am walking and careful about my steps, he has a hand on my shoulder to keep me in contact with the ground. But if I stumble, he flicks the back of my head with his finger, pushes me over, and presses me down. If I jump, he lets me rise only so far, consistent with my muscle tone, and then pushes me back down to the floor. And if I take a capsule into orbit, he watches my direction and speed, and at the appropriate time he stops pushing down on me so that I can float freely around the cabin. There! I have a working concept of gravity that fits all observations. I could even write out Mr. G’s influence in the form of a set of equations. But is this what’s actually going on in the universe?

Sunday, July 1, 2018

Distrust of Government

Minute Man

I’ve written before1 about how for the past four centuries America, and the New World in general, simply by existing became an escape valve for Europe’s population of disenchanted individualists. And now, by extension, we have become the magnet for people from all over the world who want more freedom, greater opportunity, and a better life. This drive for freedom and what my mother used to call “inde-goddamn-pendence” is not just a casual or passing attitude; it’s written into our genes from ancestors who voted with their feet long ago—or maybe in just the previous generation.

Our founding fathers, the authors of the American Revolution, also known as the American War for Independence, had a profound distrust of government. It wasn’t just distrust of a distant and unresponsive king and parliament, “taxation without representation,” and the economic strictures and political disadvantages imposed on the thirteen colonies because they were, after all, possessions and not the same as English counties and boroughs with direct representation in Parliament and ancient rights under English law. It wasn’t a bad experience or two with the occupying force of British redcoats, having to quarter them in civilian homes, or enduring the Boston Massacre, and later having to fight a war in which the might of the British nation—or as much as it could spare at the time—came down on ragtag bands of freedom fighters and a woefully underfunded and ill-equipped Continental Army.

The distrust was in large part the heritage of dissenters, deportees, transportees, indentured servants—and later freed slaves—who had seen the iron rule of law at work in the hands of men grown too well-stuffed and powerful to care about their neighbor’s plight. Of people who wanted a place less crowded, less restricted, less governed, in which they could live where and how they wanted. A large measure of this dissatisfaction was also religious—carried by people with different ideas who were escaping an established Church of England that poorly tolerated unconventional practices and viewpoints—and gave rise to local congregations and enclaves of Puritans, Calvinists, Quakers, the Anabaptist Amish, and later the Mormons or Latter Day Saints. But the distrust went beyond religion to any established institution that would impose that iron rule with no easy or direct line of escape for the freethinker.

Distrust of government as an institution is written into the U.S. Constitution. The basic structure is arranged to provide those famous “checks and balances.” The Congress, however structured and elected, can only write the laws. The President, however supported by cabinet and other administrative positions, can only enforce the laws as written. And the Supreme Court, whose members are nominated by the President but must be confirmed by the Senate, can only rule on the soundness of the law in practice, once someone has brought a case contesting its actual application. No one branch of government is meant to be all-powerful or able to take action except in the context of the other two.

Today, as in the past, various Presidents have sought to bypass Congress through “executive orders.” While the Constitution makes no specific reference to executive orders, they are usually justified as part of the broad powers that the Constitution gives the President as chief executive and Commander in Chief. Still, they are not meant to supersede the power of Congress to make law.

Similarly, the Constitution has no provision for the vast federal bureaucracy that has grown up around the President’s cabinet posts and its various departments like Agriculture, Commerce, Education, Energy, Health and Human Services, Housing and Urban Development, Interior, Labor, Transportation, and so on. Defense and Homeland Security would appear to be the only posts necessary to the President’s role as head of the armed forces. State and the Treasury would also appear necessary to the chief executive’s function as representative to other nations of the world. But the rest of the cabinet has grown up over the years—mostly during the 20th century—to become interpreters and implementers of the laws passed by Congress.

These days we have the spectacle of laws passed with ever more pages of detail, requiring ever more interpretation by the executive branch. Simple laws that can fit on a page or two and be easily read and understood by the average citizen are a thing of the past. Our country’s administrative law, codified in the Code of Federal Regulations and published first in the Federal Register, now adds about 80,000 pages a year. It’s a commonplace thought that everyone, without doing anything out of the ordinary or intentionally criminal, is guilty of something under current federal law. All the more to put the average citizen in his or her place.

I believe the founding fathers would regret this state of affairs.

In part their distrust of government was based on the founders’ own experience with what they called “factions,” which today we would call “parties” or “partisanship.” Each branch of government is set as a check and balance on the other two not only as a matter of design but also as a prevention against one group gaining control of the levers of power and using them without fear of obstruction, impediment, or retaliation. The members of Congress are—or were—supposed to be impermanent, serving for terms of two or six years, and capable of being voted out if they failed to do the job the public wanted. The President nominates members of the cabinet, but they must be confirmed by the Senate, as are the heads of major bureaucracies like the Central Intelligence Agency. The U.S. Civil Service, representing non-appointed, non-military civilian government employees, was only established by law in 1871. But these positions have traditionally been and are supposed to be filled by competitive hiring based on personal merit—and not, as in the conspicuous case of corruption in New York’s Tammany Hall, as a reward for partisan support.

The founders respected majority opinion, but they also looked out for the rights of the minority. People and political positions that lose a legislative battle by a vote of 49% to 51% are not to be automatically ground under, hunted down, or led to the guillotine.2 And important votes, such as overriding a President’s veto, have to be settled by more than a simple majority. The Constitution also allows each body to set its own rules for operation, and the Senate early on—that is, from about the 1850s—allowed minority objectors to a piece of legislation to filibuster it, or hold the floor and delay the vote for as long as their legs and their breath held out.

And finally, the Constitution’s own Article VII allows for its ratification by the states. That is, the new government under the Constitution could not simply impose itself on states that did not want to be ruled by this document. They had to choose to abide by its conditions.

Distrust of government is thickly strewn through the Bill of Rights, too. These first ten amendments to the Constitution were proposed after the battles for ratification in the late 1780s and specified federal guarantees to individual citizens. The people could speak their minds and worship how they pleased; defend themselves against tyranny; refuse to house soldiers except as prescribed by law; be secure in their persons, houses, papers, and effects from unreasonable searches and seizures; be free from double jeopardy and self-incrimination; enjoy the right to a speedy public trial before an impartial jury and to confront their accuser; be free of excessive bails and fines, and from cruel and unusual punishments; and enjoy all the rights and powers not enumerated in the Constitution.

The Bill of Rights staked out the ground where the new government could operate—quite narrowly, in fact, when compared with the old laws of Europe. These rights were designed to say that people, on their own as individuals and without the consent of a king or parliament, or even of their own elected government, had worth and stature. Ours was really meant to be a government of, by, and for the people, and not government for its own sake or as a convenience to those who held temporary power.

In short, the founders considered a national government, state government, or any formal control over the freedom of the individual as a necessary evil—not as a good thing in and of itself.

There are people and parties in this country today who would like to bring back the old European ideals: that the government grants rights and sets limits for the individual; that the products of an orderly society should be uniformly shared, even if that means giving up individual freedoms; that the average person is too willful, reckless, or stupid to make reasonable, intelligent decisions for him- or herself; and that to protect the rest of society, the “best and brightest” must step forward to direct the common citizen.3 These people want a more orderly, controlled—and controlling—state to define the limits of human existence.

And there are people and parties in this country today who say to that: “Been there. Done that. No thanks.”

1. See We Get the Smart Ones from November 28, 2010.

2. Thomas Jefferson, in his 1801 inaugural address, interpreted the Constitution thus: “All … will bear in mind this sacred principle, that though the will of the majority is in all cases to prevail, that will to be rightful must be reasonable; that the minority possess their equal rights, which equal law must protect and to violate would be oppression.”

3. This was the essence of Plato’s “philosopher kings” in The Republic. But remember that Plato and his crowd were admirers of the rigid Spartan regime, which was a closely held oligarchy and not an open society of equal individuals. His ideas were notable in Athens not because they were revered but because they were antithetical to Athenian democracy. Or else why was Plato’s annoying mentor and protagonist Socrates forced to drink poison?