Sunday, March 18, 2018

Death Is All Around

Dead bird

The other day I was pruning back the two Ficus trees in the apartment. They tend to reach for the sun from nearby windows, which means that the branches further away lose their leaves and become dry and spindly. I had to cut back the dead branches and turn the pot a bit to even out the growth.

As I was doing this, I knew the tree’s response would be to send out new shoots from the clipped branches, fork and spread them, and continue growing. But as I made each cut, I also remembered a science fiction story from my childhood, about a man who invented an extremely sensitive microphone that could record the cries of a tree as it was being cut down or grass as it was mowed. Now, I know that in order to feel pain and cry out, an organism needs a central nervous system and a locus in the brain where pleasure and pain can be registered in consciousness. The best a tree or any other plant can do is drip some sap—a result of capillary action beginning in the roots—and let that stump of a branch wither for lack of leaves and photosynthetic nourishment. In this case, the branches were mostly dead already, although some had a small and inconveniently placed twig still sprouting foliage that brushed against the wall or the curtains, and I was simply performing some elementary bonsai for aesthetic purposes.

That old science fiction story, however, got me thinking. Even a confirmed vegetarian—who laments the slaughter of cows and pigs for food, because of their death agonies—is comfortable with raising crops for their food value. They don’t rely on acorns that have already fallen from the tree, as Native American tribes in California did for their grain supply. And our farmers don’t pluck the ears of corn and then leave the plant standing to reproduce and sprout new ears next spring. No, we kill plants by the thousands of millions and never stop to think about their piteous cries too subtle for our human ears to hear or any of our instruments to register. The American farmer and all agriculturalists throughout history are wholesale merchants of death.

On this same morning, I was in the living room doing my exercise regimen when I noticed a mourning dove on the window ledge. We have doves all over the woods in back of the apartment complex, but this was the first time I had seen one so close. It was looking around, walking back and forth, and otherwise appeared nervous. Occasionally it would glance straight up into the sky. It occurred to me that we also have three red-tailed hawks that live on our hill, and occasionally I find loose feathers, sometimes clumps of them, when walking in the woods. The dove might have been using the ledge to protect itself from attack from above, because the angle between the sill and the window glass was too narrow for a clean kill. Only when another dove flew down from the ledge of the apartment above mine, and the first dove took off to follow, did I realize that all of its walking back and forth was waiting for some sign from its friend or mate.

While I was looking at the dove, I could see that it really was a work of natural art. Whether you believe in intelligent design by a living god or blind creation through evolution adapting random mutations into systems that function perfectly in the real world, you have to marvel at that small package of life. It struts, it flies, it scans the skies, and it has relations with other doves. Those tiny, beadlike eyes can measure angles, calculate distances, recognize shapes, and perhaps even register some of the beauty of the world. This wondrous being is initiated by the fusion of sperm and egg, gestated in a shell, and raised in a nest by parent doves; it learns to fly, finds seeds and grows to maturity, finds a mate—if it’s lucky—and eventually gets bounced from on high by a hawk and dies. All that articulation, all that recognition, that beating heart and bright eye—snuffed out in a minute by a chance encounter. Maybe some doves live to old age, get arthritis, and break their necks falling off a branch. Maybe some get cancer or another illness and die suffering in the long grass. But for most, it’s pounce and gone!

Some thirty years ago my wife brought home a box of books and papers from the library where she worked. As a result, we have developed a lingering infestation of silverfish. About every six months I will see one boldly scuttling across the hardwood floor, appearing like a moving scrap of dusty tinsel. I always smack them, and then I have a smear of paste and a scrap of damp paper. But even as I am killing it, I marvel at the articulation of this thing that is barely alive. It moves, it seeks light and shade, it knows that it’s being attacked when I miss, and sometimes it scrambles and flees successfully. It’s not the jeweled mechanism of an ant or a beetle, but it works on the same principles. I almost feel ashamed to kill them.

This planet is covered with life. I’ve said this before: everywhere you look that isn’t bare rock, dry sand, or blue sky is teeming with signs of life and the DNA that propels it. Some of this is easy to see, like forests filled with birds and deer or fields full of grasses, wildflowers, and burrowing rodents. Some you have to search out or imagine, like microbes living in the soil or plankton in the ocean. And some you can infer from its handiworks, like anthills, coral reefs, and skyscrapers. All of it is bursting with natural energy, all of it growing and reproducing. And all of it will die. Even the supposedly immortal jellyfish that are going the rounds on Facebook now will one day find a predator or a boat propeller and turn back into their component molecules.

Death is not the enemy. If none of this life ever died, by now the planet would be a hundred feet deep—or more—in struggling animals, the seas would be solid with fishes and the sky black with birds. Since all that is impossible, some natural mechanism would have intervened to end reproduction on this planet. Immortal creatures would live out their lives and never change. For more than three billion years that was indeed the state of things: bacteria growing and dividing, growing and dividing, never dying except by happenstance, and never much advancing. Then about five hundred million years ago, something happened and life exploded in thousands of multi-celled forms—the Cambrian Explosion, which laid the groundwork for what we have today. Since then, we have had periods of intense growth and diversification, followed by periodic extinction events that wipe the board clean and clear the way for life to go in a different direction.1

But all of it dies, every time. If it’s lucky, an organism gets to breed before it dies, and new life follows after it. And if it’s very lucky, that new generation carries mutations that might, just might, allow its progeny to survive when the climate or the food supply or the predator-prey balance changes and the organism’s progeny need to adapt. But that’s still a matter of chance. And for the organism that is alive right now, death is certain.

This is not a tragedy, not a thing to dread. Because death is all around, we know that the time right now is precious. Because life adapts and changes, we know we will never again see the exact mix of animals, plants, and even microbes that we can observe and catalog right now. This planet is alive because the things that make it interesting can die. And this is a blessing.2

1. Think of the extinction of the dinosaurs 65 million years ago, which cleared the path for the rise of mammals and, eventually, humans.

2. I didn’t intend this to be morbid, but it has been six months since my wife of forty wonderful years died, and I am still reverberating with the loss.

Sunday, March 11, 2018

The Meaning of Life, Again

I’ve written about this before,1 mostly from a scientific and technical perspective. Now, I’m thinking more along spiritual and/or philosophical lines.

Any question about the meaning of life is a product of our own brains, which appear to be alone among the animals—and that would suggest among all the other life forms on Earth—in having the capacity to think both abstractly and self-referentially. We can think about things that are not immediately in front of us, not cued by any sensory input or by our immediate life situation, and sometimes not even related to any of our experiences or memories, perhaps not even related to any other thing in the universe. And we can think about ourselves, examine our own motives, call up our memories at will, and even place ourselves and our personal reactions in imagined, hypothetical, and future situations.

We can live, second by second, in all three tenses—past, present, and future—and modify each of them with linguistic moods such as the subjunctive. So, instead of having to say, “I will do that,” our pan-temporal perspective allows us to say, “I would do that, if this other thing were to happen.” We can hold a thought that is concerned simultaneously with something that may occur in the future and with something hypothetical and contingent upon other factors that may or may not occur in the future. That’s pretty complex thinking. No other animals do this—not even our closest mammalian relatives—which means the plants and protozoans almost certainly don’t, either. Our thinking processes and our perspective are unique.

We humans seek a meaning to life because we are capable of thinking about and examining hypothetical alternatives. What was I like before I was born? What will I be and where will I go after I die? Why am I here? Am I living up to my personal potential? Will I ever achieve the dreams I had when I was young? Is what I’m doing now with my life important enough to satisfy the expectations of my family and friends? Will it satisfy the expectations of people I don’t know personally, the general public beyond my intimate circle, and future generations? Will I be remembered after a death that, although I don’t like to think about it, seems to be coming for everyone and may one day come for me?

Animals do not have these thoughts. All of these questions are based on hypothetical alternatives to what we can immediately sense and know. They are even outside the realm of what we can remember from past experience. My dog does not question her life. She can be disappointed if I must cut short her midday walk because I have to leave for an appointment, but after a few anxious tugs at the end of her leash and a reluctant turn toward the house—because she knows how far she wants to go right now, and that she’s being shortchanged—she finds new smells to investigate on our way to the door. By the time we’re in the hallway, her tail is up and wagging again.

Even a dog that is suffering base cruelty—whipped by an angry master, left out in the hard sun or the cold rain, shut in a small space without the society of its pack for hours or days at a time, or even starved—does not begin to question its existence. It may be depressed, with head drooping and tail down. It may feel that it has lost the love of its pack and its alpha—that formerly loving and now cruel master. The dog may assume that, as caresses and treats once came when it acted to please the alpha, it has now somehow done something displeasing to deserve such hard treatment. A formerly loved dog who is maltreated or abandoned can recognize the change in its situation and react with confusion and despair. But even then, the dog will not ask why it was born into this life. And it will not commit suicide because life has become something different from what the dog once experienced.

Animals do not question their lives or their meaning. They do not feel they were born for a purpose; they simply live. If life has a meaning for animals—and plants and protozoans—it is written into their genes, which means it is part of the physical structure that organizes their brains—if they have any—and responds with innate drives keyed to their hormonal secretions. They eat because their stomachs are empty and chemical cues tell them they are hungry. They seek out sex—without thinking about its reproductive effects or future generations of posterity—because certain smells and pheromones stimulate their glands. They try to get out of a cage because they are used to open and familiar spaces, and the bars keep them from their known space. They resist a steel trap because the bite of the jaws is painful. And when death inevitably comes, they go quietly because they don’t think about alternatives.2

Socrates is supposed to have said at his trial, “The unexamined life is not worth living.” But Socrates was a philosopher and a human being. He lived in the dreamtime that all of us humans—and perhaps our closest primate relatives—inhabit. It is a realm of expectations and possible alternatives. It is a place that demands meaning. But life itself—as lived by every other animal, plant, and protozoan—is a chemical mystery without inherent meaning. It has its imperatives, of course: eat, move, reproduce, seek prey, evade predators, survive. But even these are unexamined premises for most of this world’s living things. They don’t have words for their drives, let alone think about them in the abstract.

We humans are the apex animal in terms of sensing, perceiving, appreciating, and examining the realms of both the abstract and our own existence. We are the first living thing in a heritage of almost four billion years—years occupied mostly by bacteria and other one-celled chemical machines—to ask that life have a meaning. We ask both from the broader perspective of the human species and from the narrow view of our own personal lives. And in both cases the answer seems to be, in the words of Colour Sergeant Bourne in the movie Zulu, “Because we’re here, lad. Nobody else. Just us.”

If life has no apparent meaning for any other species—it just is—that suggests we humans will have to make up a meaning for ourselves. If life has a purpose, other than the chemical imperatives, then we must create it.

Perhaps there really is an omniscient, omnipresent, and omnipotent God up in the sky, or somewhere beyond normal existence, who created the Heavens and the Earth and who invented life as a good idea among all that otherwise inert matter bound up in star stuff. This notion supplies a ready-made meaning. Or perhaps the simple belief in such a god—or in the nature of goodness and purpose themselves, as represented by such a belief—is enough to supply that meaning. Certainly, many people hold such beliefs and find meaning in them.

Those of us who don’t quite believe, yet wonder what comes after the death that surely awaits us all, are left having to create our own meaning, both for the species and for ourselves. I tend to believe that the purpose of our big brains and their ability to sense, perceive, and wonder is to seek out and create that meaning. We are the next stage of evolution, and as the apparent inheritors of existence from all the inert matter and the non-thinking life forms in this star system, we have a duty to think up a good one.3

1. See The Meaning of Life from October 9, 2011.

2. When you take a terminally ailing dog to the vet to be put down—as we have had to do a couple of times now—it will shiver and shake. But that is not because it fears death. The animal reacts that way because the veterinary office is generally a place of painful pokes and pinches, and it smells of other fearful animals. Also, the dog senses the sorrow of its master and knows that this trip is somehow different from all others. Different is hormonally dangerous for an animal.

3. There’s a story I’ve read that says Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy, had early experience as a computer programmer. In the novel, when the supercomputer Deep Thought responds to the question about “life, the universe, and everything” with the answer “42,” this is not just random nonsense. In ASCII (American Standard Code for Information Interchange) encoding, the number 42 stands for the asterisk (*), and that symbol is used as a wildcard in queries and sorts. So the Deep Thought answer was computer shorthand for “Whatever you want.”
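For anyone who wants to check that bit of trivia, Python’s built-in character functions make it a two-line exercise:

```python
# ASCII code 42 is the asterisk, the usual wildcard in searches.
print(chr(42))   # prints: *
print(ord("*"))  # prints: 42
```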

Sunday, March 4, 2018

The Structure of Music

Piano keyboard

For many years, the big hole in my education has been the structure and theory of music. I love music and have listened to many types of music—classical, rock, country, Celtic, and new age or electronic—all my life. Certain passages and even some basic chord progressions can bring tears to my eyes. But I never knew why, and that bothered me.

Like most children in suburban public schools in the 1950s and ’60s, I was invited to play an instrument in the fourth grade. This was partly for music appreciation and partly a recruitment process for future junior-high and high-school bands in the district. My instrument was the trombone, and I played it badly. Although I took regular lessons, was given the étude books to work on, could read music—more or less—in the bass clef, and was theoretically committed to practicing on the instrument for an hour a day, I was still bad at it. Partly this was because I was bored with the exercises and cheated on the practicing. And partly it was because, with all that training, I still didn’t know what I was doing.

For one thing, I was hazy on those sharps and flats shown on the staff at the beginning of each composition. I tended to forget that a sharp or flat way up there governed every note on that line or space throughout the piece, unless a sharp, flat, or natural sign attached to a single note countered it. I knew those leading sharps and flats had something to do with the “key signature,” but since I had no working knowledge of what a “key” actually was—other than that plonky, push-down thing on the keyboard—it didn’t mean much to me and so I tended to ignore it.

For another thing, I had no concept of chords and harmony. The trombone, like all brass instruments, is a one-note instrument: toot, toot, and toot. What you blow is what you get. If the composer wants a chord from the brasses or woodwinds, he or she assigns different parts to the different “chairs” in the section, one to play the root note and the others to chime in with the harmony notes. But as a single player sitting there, all you know is that you’re playing a different note from the fellow beside you.

At the same time, at home my dad was learning to play the organ—his school instrument had been the violin—and so we had a Hammond B-3 in the living room. I would dust it as part of my weekly chores, and that got me fooling around with the drawbars to make different sounds. I even learned to play one simple two-handed song; I had no idea what the left hand was actually doing when I held down those three keys at once, but it sounded nice. So three years ago, when I decided to correct that hole in my education, I bought myself an organ—a Hammond XK-3c, because I already had a feel for how the drawbars put together each note from the different pipe-organ lengths, and also for old time’s sake—and began taking keyboard lessons. I also bought a set of lessons on music fundamentals from The Great Courses and worked my way through them.

What I’ve learned since then is a revelation as to the structure of music. Start with the keyboard, which is standard for all pianos, organs, synthesizers, harpsichords, accordions, and anything else that plays polyphonic—that is, “many voiced”—music by pressing down keys with your fingertips. Those keys represent a repeating pattern of twelve notes, seven in the white keys and five in the black. Even my childhood music training had taught me that the white keys were whole tones and the black keys half-tones: sharps a half step up from the previous key, and flats a half step down from the next key. But why they were arranged in that pattern of two blacks together followed by three blacks was a mystery. If these were the whole and half steps, then shouldn’t there be a black key between each white key, evenly spaced out, like inch and half-inch marks on a ruler? And why were two of the white keys stuck together in that series?

No one had ever told me before about the musical modes. What we hear when we sing do, re, mi, fa, sol, la, ti, do in school is not a simple whole-step progression. It sounds like that to our ears, because we’re familiar with it. But actually, what we’re singing and hearing is a varied pattern: whole step, whole step, half step, whole, whole, whole, half step.1 If you play that out on just the white keys of the piano starting at C (the white key immediately to the left of the first group of two black keys) and ending at C an octave higher, you get the C-major scale.2 This pattern of wholes and halves is called the “Ionian mode.”

The modes go back to the ancient Greeks and their music, and so the names have Greek references. They were actually refined and organized in the Europe of the Middle Ages. There are six other modes: Dorian, Phrygian, Lydian, Mixolydian, Aeolian, and Locrian. You can hear each of these modes by playing the scale on only the white keys starting with the next higher note above C (that is, D, E, F, G, A, and B).3 The patterns represented by the other six modes just sound wrong compared to the C-major scale we’re all familiar with.
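To make that concrete: starting the white-key scale one note higher each time is the same as rotating the step pattern by one position. Here is a minimal illustrative sketch in Python, assuming only the usual convention of counting a whole step as two semitones and a half step as one:

```python
# The major (Ionian) step pattern: 2 = whole step, 1 = half step.
IONIAN = [2, 2, 1, 2, 2, 2, 1]  # T-T-s-T-T-T-s

MODES = ["Ionian", "Dorian", "Phrygian", "Lydian",
         "Mixolydian", "Aeolian", "Locrian"]

# Each mode is the same pattern rotated by one more position.
for i, name in enumerate(MODES):
    pattern = IONIAN[i:] + IONIAN[:i]
    print(f"{name:<10} {pattern}")
```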

Circle of Fifths

So then, what are the “key signatures”? Simply put, they are ways to move that eight-note Ionian pattern up and down the keyboard—a process called transposing—to accommodate the limited range of most people’s singing voices. You can recreate that T-T-s-T-T-T-s by starting with D instead of C, but only if you sharp the F and C (enharmonically, flatting the G and D), and that’s the key of D major.4 Or you can start in E and sharp the F, C, G, and D. You just work your way up the keyboard, sharping or flatting notes to recreate that whole-and-half-step pattern of the Ionian mode. So, to correct my earliest misunderstanding, the sharps and flats shown at the start of a composition, the key signature, are not just random, arbitrary adjustments to the notes. Instead, the key signature transposes the familiar scale up or down the keyboard to make it easier for people to play and sing.5
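Transposition is mechanical enough to automate. A small illustrative sketch in Python (the major_scale helper is mine, and the black keys are spelled as sharps purely for convenience) walks the T-T-s-T-T-T-s pattern from any root:

```python
# Twelve-note chromatic scale; black keys spelled as sharps here.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # T-T-s-T-T-T-s in semitones

def major_scale(root):
    i = CHROMATIC.index(root)
    notes = [root]
    for step in MAJOR_STEPS:
        i = (i + step) % 12        # wrap around the octave
        notes.append(CHROMATIC[i])
    return notes

print(major_scale("C"))  # C D E F G A B C -- no sharps or flats
print(major_scale("D"))  # D E F# G A B C# D -- the two sharps of D major
```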

Finally, I’ve learned the other mystery of my early education, chords and harmony. The basic pattern is the triad, starting with the root note and adding the third and the fifth. So, with the root in C, the third is E, and the fifth is G. That makes a pleasing sound and one richer than just the root note by itself. With the root in D, the third is F sharp and the fifth is A. And so on. But as with everything, there are complications. If you flat the third (Eb in C), you get a minor chord. If you add the note a whole step below the root, played an octave up (Bb in C), you get the dominant seventh chord. Add the whole note above the root but an octave higher, and you get a ninth. And so on and on. All of this makes the set of sometimes pleasing, sometimes jarring sounds that so move us in music.6
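Counted in semitones up from the root, those chords reduce to simple offset lists. Another illustrative sketch, with the same sharps-only spelling (so Eb appears as D#):

```python
CHROMATIC = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]

# Semitone offsets from the root note.
CHORDS = {
    "major":   [0, 4, 7],          # root, major third, perfect fifth
    "minor":   [0, 3, 7],          # flat the third
    "seventh": [0, 4, 7, 10],      # add the note a whole step below the octave
    "ninth":   [0, 4, 7, 10, 14],  # add the whole step above the octave
}

def chord(root, kind):
    i = CHROMATIC.index(root)
    return [CHROMATIC[(i + off) % 12] for off in CHORDS[kind]]

print(chord("C", "major"))  # ['C', 'E', 'G']
print(chord("C", "minor"))  # ['C', 'D#', 'G']  (D# = Eb)
print(chord("D", "major"))  # ['D', 'F#', 'A']
```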

The organization of the key signatures is best described by the “Circle of Fifths,” as shown nearby. Each key signature derives from the fifth note of the key that came before. So the key of C major, with no sharps or flats, yields the scale based on its fifth, G, with one sharp (F#, which can also be spelled enharmonically as Gb), which yields the scale based on its fifth, D, with two sharps (F# and C#, enharmonically Gb and Db), which yields its fifth, A, with three sharps, and so on around the circle until you come back to C major.
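The arithmetic behind the circle: a perfect fifth spans seven semitones, and since seven shares no factor with twelve, stepping by fifths visits all twelve keys before landing back home. A quick sketch:

```python
CHROMATIC = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]

i = 0  # start at C
for _ in range(13):
    print(CHROMATIC[i], end=" ")
    i = (i + 7) % 12  # a perfect fifth is seven semitones up
print()
# Output: C G D A E B F# C# G# D# A# F C -- twelve keys, then back to C
```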

The whole thing works because these various vibrations sound in the human ear—at least, one that has been culturally attuned to hear them—as pleasing patterns. It all makes sense in a mathematical and physical way. And that’s what I came to learn.

Now I just have to put in the practice to embed this knowledge in my nervous system and have it come out my fingertips. But luckily, my teacher has me working from actual songs and progressions of chords built into them, rather than the dry and mechanical études I studied with the trombone and found so boring. So teaching has come a long way since my childhood, too.

1. For simplicity in notation, music theorists use tone (T) to represent a whole tone or step and semitone (s) for a half tone. So the pattern is conventionally written as T-T-s-T-T-T-s.

2. Actually, each key—black or white—on a piano or organ keyboard is just a half tone above the previous key. The spacing within and around the two groups of black keys clearly shows this, with the white keys playing the whole tones and the intervening black keys playing the half tones. However, the two white keys set side by side between these two groups of black keys—at E and F above the two black keys, and again at B and C above the three black keys—yield the half tones required by the Ionian mode.

3. Why our most familiar mode, the Ionian, starts with C and not with A is a matter of both history and physics. The letter names for the notes came from the work of a late Roman philosopher, Boethius, and the A corresponded with what he considered the lowest note on his scale, which then went up through the seven letters to G. He was not, however, working with the various modes as we practice them today, and his lowest note is not today’s A above middle C, also called “A440,” because it sounds a vibration of 440 Hertz.
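That A440 reference pitch also anchors modern equal temperament, in which each semitone multiplies the frequency by the twelfth root of two. A short illustrative calculation (the freq helper is mine):

```python
# Frequency of a note n semitones away from A440, in equal temperament.
def freq(semitones_from_a440):
    return 440.0 * 2 ** (semitones_from_a440 / 12)

print(round(freq(0), 2))    # 440.0  -- the A above middle C
print(round(freq(-9), 2))   # 261.63 -- middle C, nine semitones down
print(round(freq(12), 2))   # 880.0  -- an octave up doubles the frequency
```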

4. Minor scales are built using the same sharps and flats as a major scale but start on the sixth note of the major. Thus, the minor scale that shares the notes of the C-major scale starts on its sixth note, A, and so is known as the A-minor scale. That makes an entirely different pattern of whole and half steps. No wonder I was confused as a child in the fourth grade!

5. As part of my keyboard instruction, I’ve also had to learn oddities like the “blues scale,” which represents a seven-note pitch sequence—not eight—that is quite different from the various classical modes. The blues involves minor thirds, and develops a strange pattern: minor third, whole, half, half, minor third, whole. So with the blues in F, the scale is F, Ab, Bb, B, C, Eb, F. It’s a strange but beautiful sound.
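A quick way to verify that pattern is to measure the semitone gap between successive notes, again spelling the flats as sharps (Ab = G#, Bb = A#, Eb = D#) for convenience:

```python
CHROMATIC = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]
F_BLUES = ["F", "G#", "A#", "B", "C", "D#", "F"]

prev = CHROMATIC.index(F_BLUES[0])
for note in F_BLUES[1:]:
    cur = CHROMATIC.index(note)
    print((cur - prev) % 12, end=" ")  # semitones between notes
    prev = cur
print()
# Output: 3 2 1 1 3 2 -- minor third, whole, half, half, minor third, whole
```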

6. Generally, to my ear and to most people’s, the major chords sound bright and happy, while the minors sound darker and a little melancholy. And some chords and their progressions play very well as accompaniment to horror movies.

Sunday, February 25, 2018

Don’t Know Much About Cosmology

Black hole

All right, I’m an English major and became a writer before I even got out of high school. Language and presentation are my thing—along with a penchant for clear reasoning. Being able to think logically and reason clearly is essential to any writer. Otherwise you get all balled up in your plot points, can’t make a convincing argument, and tend to wander off in the middle of an article or story. But I digress …

On the subject of cosmology, I’ve seen several recent articles, including one in Scientific American and an episode of NOVA—fairly popular press—suggesting that the galactic-core black holes, those with masses in multiples of a million of our suns, might have formed early on the universe’s time scale. The accepted origin of stellar-mass black holes is that they form from the collapse of large stars, ones so big that their cores, after they burn out and explode, don’t stop at the white dwarf or neutron star stage but are so crushed by their inherent gravity that they disappear into a point and generate a spacetime so curved that even light does not move fast enough to escape it.
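The articles don’t give figures, but the standard Schwarzschild formula, r = 2GM/c², sets the size of that point of no return for a given mass. A back-of-envelope sketch with textbook SI constants (the helper function is mine):

```python
# Schwarzschild radius: the event-horizon radius for a mass M.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # one solar mass, kg

def schwarzschild_radius_km(mass_kg):
    return 2 * G * mass_kg / C**2 / 1000

print(round(schwarzschild_radius_km(M_SUN), 1))     # ~3.0 km for one sun
print(round(schwarzschild_radius_km(1e6 * M_SUN)))  # ~3,000,000 km for a galactic core
```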

In this theory, the galactic-core black holes must have started as regular collapsed stars, but they lived in a dense region of the galaxy with rich feeding opportunities. They tore apart all the stars around them, adding to their mass, until they became giants. And they went on absorbing any wandering nearby stars until they acquired the mass of a million suns. Although light cannot escape a black hole’s event horizon, the inrush of matter forms an accretion disk of colliding particles swirling faster and faster toward the center. And some of this jumble of moving matter escapes sideways as huge jets of energy at the black hole’s poles. These jets are some of the most energetic objects in the universe. Radio astronomers first identified emanations from these jets in the 1950s and called them “quasi-stellar objects,” or quasars.

The trouble is, by peering toward the edge of the known universe—which because of the time it takes light to travel over these huge distances is the same thing as looking backwards in time—we have discovered quasars in the earliest universe. Some of them at the center of the most distant galaxies are associated with black holes that must be less than a billion years old. In a universe that is supposed to have originated from a single point of matter that started expanding outward less than fourteen billion years ago, these earliest quasars—if they are indeed linked to galactic black holes—are an anomaly. The universe in its first few hundred million years, the age of these objects, could not have created the sort of massive stars that collapse into black holes in the first place. And it would not have created enough of these stars, in enough generations, for a stellar-sized black hole to have gobbled down thousands or millions more of its massive companions to become the objects we think we see today.

The conclusion of the articles is that some other accretion mechanism must account for these earliest quasars. The favorite theory seems to be that the central portion of these newly forming galaxies was so dense with dust and gas that black holes could form directly by gravitational attraction. Before the mass of condensing material could kick off its fusion reaction and start burning as a star, the huge mass simply collapsed into a point of extremely dense matter concealed within an event horizon of closed spacetime.

Stars form—or at least that’s our current theory and observation—by condensing gas and dust into dense, whirling pools. When enough matter accumulates in one place, the inward pressure of gravity exceeds the outward pressure of the atoms beating against each other in a condensed gas. The compressed gas heats up, and the lightest of these gases, the universally abundant hydrogen, starts fusing into helium atoms. After enough time in the larger stars, the helium goes on to fuse as well, forming carbon, and later fusion reactions form oxygen, neon, and silicon. The process in even the largest stars stops at the formation of iron, which shuts down the fusion reaction and causes the star to collapse—in the most massive cases, into a black hole.1

The notion that early galaxies had such dense star-forming regions that they would bypass the normal evolution we see and go straight to core-dominating black holes is a novel idea. I don’t necessarily find it rationally impossible. The idea of any material object collapsing to a point in a space too small to be measured, yet still containing tens, hundreds, or millions of solar masses, is bizarre enough. But consider: these collapsed objects create enclosed and unknowable systems, while the universe itself was supposedly created by the outward explosion of a singularity that contained the mass of a hundred billion galaxies, each containing upwards of a hundred billion stars. That defies reason and logic. Infinite density in one case creates an undefinable and self-contained point, but an even greater density explodes with a force that causes the mass of the universe to spread outward, soon condense its raw energy into protons and other atomic particles, later form stars and galaxies, and then go on to expand forever? Um, no …

And there’s a quirk to all this, too. The universe we can see with our instruments has a radius much larger than thirteen billion light-years. So while we can compute the age of the universe backwards to that point in time, it has actually been expanding away from us faster than the speed of light. Some astronomers account for this with inflation theory—the proposal that in the first micro-instant after the presumed Big Bang, space itself expanded faster than light, taking the universe from an object the size of a proton or atom to a globe about the size of our solar system practically in no time at all. After that, the expansion slowed and the universe took on the shape we see today.

Of course, it’s no good asking where in the universe the Big Bang actually started. Everyone knows it started three inches behind my bellybutton. Or maybe behind yours. The point is, when you have an entire universe created from an origin with no dimensions, the starting point is actually everywhere. In an analogy I’ve used before, imagine four people sitting at a card table at the center of the fifty-yard line on a football field when a bomb goes off under the table. The blast throws one person to each sideline, one back to the thirty-yard line at one end of the field, and the last person to the opposing thirty-yard line. If you ask each one where the bomb went off, they would point back to the fifty-yard line and say, “Over there.” But suppose they were sitting in a darkened room, and the force of the explosion threw them back an unknown distance in an unknown direction. Ask the same question, and their replies would all be, “Right here.” When there is no established field and no reference points, every point of origin looks the same.

We live in a universe that is supposedly thirteen-point-something billion years old, but which may be more than twenty-six billion light-years in diameter. Its earliest observable components have characteristics that we don’t really believe could yet have formed. The galaxies in this universe spin their stars with a velocity that does not match what we can understand about the gravity interactions of normal matter—causing our physicists to theorize about and search for evidence of “dark matter.” And those galaxies themselves are flying apart at an ever-increasing speed, far exceeding the impetus of the Big Bang itself—causing our physicists to theorize about and search for clues to “dark energy.” And these dark components are not just an add-on to the story of the universe we can see but, according to all the calculations we can make, they are the central facts of this universe. Compared to dark energy and dark matter, the stuff that makes up the world we know about is an irrelevance, a bit of foam on an ocean wave we cannot even detect.

But then, of course, we live in a universe bounded by c, the measured speed of light. And while that speed is constant and not to be exceeded in the vacuum of space, it is still a measure dependent upon both time and distance. However, Einstein’s own theory of general relativity says that time slows down and space curves inward under conditions of extreme gravity. This would seem to contradict his theory of special relativity, which states that the laws of physics are the same in all inertial frames of reference and that the speed of light is the same for all observers. Am I the only one who sees a conundrum here?

As I have written before,2 I believe there are three things we do not fundamentally understand in physics: gravity, space, and time. We can measure them. We can write equations about them. We can fit them into our conceptual mathematics. But we don’t—yet—have a working understanding of what they are. Time is not just the passage of seconds or the sequence of events. Space is not just an emptiness that does not happen to be occupied by bits of matter, which can then be resolved into tiny knots of energy. And gravity is not a force or a field, and it may not even be the resolute manipulation of the passage of seconds and the nothingness of space.

But what do I know? I’m just an English major. But I’m also an agnostic and a contrarian, which means I am suspicious of organized religions and schools of thought, whether they deal with the orthodoxy of an origin story based on seven days of willful creation or the explosion of an infinitely dense piece of matter. For me, neither story adds up or makes complete sense.

1. The heavier elements, including metals like nickel, copper, silver, gold, and uranium, form in the brief bursts of fusion during the catastrophic explosion and collapse of massive stars—or that’s the theory. This would account for their rarity both on Earth and elsewhere in the universe.

2. See Fun with Numbers (I) and (II) from September 19 and 26, 2010.

Sunday, February 18, 2018

Sources of Personal Power

Bronze angel

As a novelist who writes stories about both interpersonal and political relationships, and a professional communicator who has specialized in issues of leadership, I am interested in the sources of personal power. How does a person gain a position of advantage, leverage, influence, or persuasion over other people? What gives one person power over him- or herself and others?

First of all, power comes from strength, both the kind of strength that performs and the kind that endures. This strength may include the physical strength of muscles and nerves, working in coordination with precise balance, sharpened senses, a high-functioning immune system, and generally good habits of nutrition, hygiene, and health. But physical strength is not the most important aspect of a person. Mental and emotional or spiritual strength, exercising the gifts of the mind, can make up for deficits of the body. These are the components of the will, without which a person can attain little mastery and almost no endurance.

Strength of mind and will comes from developed skills, acquired knowledge, and the mastery of some domain of social interchange among people or of the arts, which may be viewed as an interchange with ideals of thinking and perception. Attaining skills, knowledge, and mastery builds a person’s confidence. The person becomes strong by being sure of the things he or she knows to be true and, equally, those things known and tested to be false. Knowledge and mastery place the person within the world and set him or her in a secure space.

Strength also comes from experiences of hardship and loss, of expectations denied and delayed, and other shapers of a person’s relationship with reality. A human being cannot know what he or she really wants in the world until the soft edges of intermittent desire are worn away, and whims are shown by their denial to be transient and impermanent. The infant is a vortex of desires, most of them crucial to life: food, warmth, personal comfort. The beloved child is given these things readily whenever he or she cries out. But the child too quickly learns to cry in order to get the things it wants, and then the parent must withhold—not love, never love—but the immediate satisfaction of passing wants.

In the same way, the student starting a new course of study needs encouragement and the attainment of small successes. These fix the mind on the possibilities of what is to be learned. But soon enough the student must meet a real challenge: an honest criticism, a failed test, a publicly felt rejection. These denials force the person to examine goals and test assumptions: Is mastery in this area really worth the extra study and effort that this failure suggests may now be needed? Is this a goal I really want to pursue? Is this domain truly part of my future? Without moments of failure and doubt, the person does not develop determination and perseverance. And then, ultimately, the person does not find that place in the world which confers personal meaning and confidence.

Other people can show confidence in a person and express faith in his or her accomplishments, but they cannot give that person confidence. True mastery and its acceptance come from experience and self-evaluation. Other people can erode a person’s confidence with expressions of doubt and criticism, but as they cannot bestow confidence where it does not already exist in some form, so they cannot destroy confidence once a person has felt the mastery of a body of knowledge and practice through his or her daily exercise and the strength of position.

With skills, knowledge, and mastery of some domain, a person then has something to offer others in the world. The artist and the architect alike can create and build something that will touch other people and change their lives. The doctor can heal the sick and injured. The lawyer can untangle their legal troubles. The psychiatrist can address their insecurities and confusions. The novelist or the poet can create insights that help explain the world for others. The leader can use insights and experience to create a vision and solutions for the organization or for society.

The essence of personal power comes from being able to offer other people something that they need, whether it’s a medical prescription, a violin concerto, a novel, or a practical solution to a current problem. The person then has a negotiating position, a point of leverage and advantage. This is the power to shape the world around oneself, to exchange with others on the basis of equality, to confirm one’s place in society, and to confer a meaning in life.

Personal power can also come through sheer conquest: taking and controlling the life necessities that other people value, or dominating a weaker soul physically or psychologically. But that form of power is always unstable. The weaker soul either succumbs and dies out—or finds the strength to resist and rebel. And success at domination eventually invites the attention of larger and more ruthless predators, who will retaliate and exercise their own skills of conquest. The person who makes his or her personal meaning through domination must spend too much time planning attacks and defending against retaliations to make this a stable course. There is no balance here.

The one place personal power cannot originate is in weakness. A person cannot gain satisfaction in life or advantage in a situation by claiming victimhood and making petitions for consideration based on fairness or appeals for equal treatment. This may look like a shortcut to power, with the expenditure of little skill or effort, but that sort of leverage depends entirely on the conscience and concern of others, rather than their self-interest and an honest exchange of value. And, in situations of enforced scarcity or necessity, self-interest will always win out against other-interest.

A sense of power—a false sense—can come from identification with others who claim the status of victim or inheritor of unequal treatment. But weakness multiplied does not equal strength. The voices of the oppressed are only useful to those who will use the status of victim as a tool of conquest. But when those others have achieved their goals, they will have no further use for the weak. There is no balance here, either.

Society is made up of individuals. Its strength is measured by the multiplication of ones, not zeros. Balance is only achieved when strong people, with mastery of knowledge and experience, a skill set, and a place in their own world, come together to exchange the necessities of physical, mental, and spiritual life. Or so I believe.

Sunday, February 11, 2018

How We Do Things Now

Gaius Marius

Lucius Cornelius Sulla

I’m just finishing up a fascinating read in Mike Duncan’s The Storm Before the Storm, about the civil disruptions in the Roman Republic from about 146 to 78 BC. Duncan’s thesis is that, before the Civil Wars that were chronicled by Julius Caesar, in which he most famously vied with Pompey the Great for control of the Republic, contention and animosity between the Senate and the urban populace, and between the Romans proper and their Italian allies, frayed and unraveled the civil order and sense of purpose that had made Rome such a success in the first place. Although Duncan cautions that one cannot always draw parallels between ancient history and modern times, the parallels are there and they are chilling for anyone with political sense.

This is the story of the Gracchi brothers, Tiberius and Gaius, who were reformers trying to help the landless peasants; of Gaius Marius, a “new man” from the Italian provinces who wanted to make his name in Rome; and of Lucius Cornelius Sulla, a patrician playboy who found his footing as a soldier and politician. It is also the story of the factions—they were never so organized as our modern political parties—that coalesced around dominant men of the times: the “optimates,” who generally represented the best people, the patrician class, the Senate, the old guard, or in modern terms the one-percenters; and the “populares,” who represented the interests of the urban poor, landless peasants, ex-soldiers seeking a new livelihood, and others who wanted more than whatever they had—although the populare leaders were not generally from among the lower classes themselves.

If this all sounds familiar, it should.

Two themes emerge in Duncan’s telling of this story. One is that each generation of reformist leaders agitated for and succeeded in overturning elements of the mos maiorum, the unwritten rules of Roman civil order and government procedure that had operated from the earliest days of the Republic. These rules applied to such political business as the annual exchange of command between the outgoing and incoming consuls, the veto power and personal inviolability of the tribunes, and the process for originating laws in the popular Assembly with advice from the Senate. The Romans had always done things in a certain way, and it worked because everyone respected and followed the rules.

Just one example of the breakdown was in the career of Gaius Marius. As a new man from the Italian provinces, it was unusual enough for him to seek office in Rome. But he was elected consul—the supreme office of military and civil government, shared by two men at a time for a term of one year—on the strength of his military skill. Generally, the rules said that a consul who had completed his term could not serve again for another ten years—originally the law said for life—but Marius sought the office and was elected five times in succession for back-to-back terms.

The second theme is the gradual introduction of street violence as an answer to debate and unpopular votes in the Assembly. Tiberius Gracchus served as a tribune and was an advocate of agrarian reform—breaking up the large estates owned by wealthy citizens which had originally been carved out of public lands. When he stood for election to the same office for a second term—another break with tradition—his opponents claimed he was trying to become a tyrant. His followers and opponents then engaged in a public riot, during which Tiberius was beaten to death by a mob led by senators, and many of his followers were also killed. His brother Gaius was another reformer. When one of his opponents happened to be killed in a subsequent riot, the Senate decreed Gaius an enemy of the state who could be executed without trial. As a mob came to assassinate him, Gaius took his own life. This tradition of violence gradually accelerated from flash mobs and riots to the organization of mercenary street gangs who could be dispatched to deal with rival politicians and their followers.

At the risk of seeming alarmist—because, after all, the Roman Republic did fall into, first, civil war, and then into the imperium of the Caesars backed by their corralling of military force—I can see two parallels with our modern political situation. The first is the erosion of our own mos maiorum, or how we do things now.

When I was growing up—say, in the Eisenhower, Kennedy, and Johnson administrations—we also had political factions and fights. But people could still respect each other. Democrats could speak well of Senator Everett Dirksen, or Republicans of Governor and later Ambassador Adlai Stevenson, even if they disagreed with their policies. And everyone played by the rules. Politicians running for office were expected to publish their tax returns, and they let their party take care of the political war chest and funding of their campaigns. If they were wealthy, they put their assets in a blind trust as they entered office, so their decisions would be free from the taint of any conflicts of interest.

But while the Clintons, both Bill and Hillary, were in office we saw financial scandals: a cattle futures deal that made an extraordinary amount of money with the hint of impropriety regarding the trading laws; a land deal that fell apart with more impropriety in the banking laws; and finally a family foundation that was supposed to be doing charitable works but seems to have been a personal fund for political campaigns and a place to employ loyal operatives between their service in government posts. None of this was clearly illegal—although an alert prosecutor could probably have made a case—but pursuing these ventures while one partner or the other was in public office clearly violated the traditional separation between money and politics.

Democrats and the appointed public servants in the Department of Justice who supported their policies turned a blind eye to all this. After all, why bite the hand of people who are doing such good works when the case is not entirely clear?

The trouble with this attitude is that eventually the tide turns and by then you’ve already established a precedent. When Donald Trump was running for office, people expected him to publish his tax returns but he never did. When he was elected president, people expected him to put his real-estate and branding empire into a blind trust, and instead he turned it over to his family members, who themselves were to become part of his administration. The notion that personal money and public office should be kept separate had gone out the window. And the only people who could prosecute in the name of the unwritten rules about tax returns and blind trusts had been politically removed from the levers of power.

A friend of mine says that one of the problems with the Roman Republic’s system of government was its lack of a separate police power. No established institutions of public prosecutors and guardians of the peace existed to defend the law when an individual or faction broke it. The Romans left prosecution to opposition politicians, who could take action or not depending on the political winds. By suborning the Department of Justice and the FBI on a federal level, and county prosecutors and police on a local level, our modern political system has achieved much the same end. We all watch the mad circus pass through and trash our unwritten laws with nobody to stand in the middle of the road holding up a hand to stop the parade and punish the offenders.

The second parallel with today’s political scene concerns mob violence. We’ve had political leaders, mostly presidents, assassinated over the years: James Garfield, William McKinley, John Kennedy, Martin Luther King, Jr., Robert Kennedy, and the attempted assassinations of Gerald Ford and Ronald Reagan. But these were individual acts of violence and sometimes of madness. They may have served political ends, but they were not—or could not be proven as—the policy of any group or party.

During the late 1960s, and again after the 2016 election, we have seen public protests break down into riots and mob violence. So far, most of the action has occurred on university campuses and been oriented toward issues of hate speech and unacceptable ideas. And so far, no one has been targeted for assassination or even accidentally killed. But, like the Clinton Foundation and the Trump assets, this politically directed violence sets a precedent. It may start with a riot on campus to silence and prevent a speaker who has the wrong ideas. It could eventually lead to violence in the streets to protest unpopular policies, as the Vietnam War and Civil Rights Act inspired street protests in the 1960s. This same brand of directed violence put the Nazis in power in Germany in the 1930s. I have always believed it could not happen here, but now I am not so sure.

As a thinking person with a center-right point of view, I am conscious of our American traditions, our own unwritten rules, and our sense of public civility—of treating people who act decently with respect, regardless of their politics. I would hate to see those traditions eroded to the point at which civil war might break out, because—aside from the public disruption, property damage, and loss of life—I fear what such a turning point might bring. As with the fall of the Roman Republic, a period of martial law or, worse, a personal dictatorship backed by military force might soon follow. And that will be bad for everyone who believes in a democratically elected representational government.

Something wicked this way comes.

Sunday, February 4, 2018

War Between the Sexes

Arm wrestling

It used to be a joke—or at least it was from the viewpoint of male-dominated, patriarchal Western Civilization—that a war is continually being fought between the sexes. This theme goes back to the Greek myths and drama, played out in the marital tensions between the gods Zeus and Hera and between Aphrodite and Hephaestus, or among mortals between King Agamemnon and Queen Clytemnestra and between the hero Jason and his princess Medea. It shows up in the plot of Aristophanes’s comedy Lysistrata, where the women of Athens deny their husbands sexual relations in order to stop the Peloponnesian War. It’s a tradition that was echoed in the poetry of the Renaissance and the plays of Shakespeare like Much Ado About Nothing and The Taming of the Shrew.

The fact that Greek society could even acknowledge the differences and the tensions between the sexes was a big step forward, because it recognized that women had a part to play and a viewpoint to exercise. A culture that treats its women as a species of chattel, to be bought and sold, guarded as property, and kept secluded and silent in the home—no better than cows or horses—is a culture that does not consider women as having a point of view at all. Who listens to the barking of their dog?

Western Civilization has noted and discussed the differences for twenty-five hundred years or more. Yes, men have long had the upper hand in government and public policy, but that was mainly because these matters have always tended to move too easily into conflict and war. And there the strength, bravery, and organization of men were needed more than the empathy, cleverness, and reconciliation of women.

We have been moving toward equality between the sexes, at least in Europe and North America, for a couple of hundred years now, most notably since the Enlightenment. I credit some of this to the reformation of science from the mystical, metaphysical, semi-religious arts of the alchemists and astrologers to the precise, mathematical, empirical business of chemists and physicists. In this new age of science, anyone could play equally and come up with provable theories, regardless of background and gender. Women like Marie Curie, Ada Lovelace, and Rosalind Franklin could prove the power and effectiveness of women’s brains.

But while differences persist between men and women, they are complementary rather than contentious. Men are strong, women are resourceful, but both can be brave. Men are emotionally reserved, women are emotionally fluid, but both can have strong feelings. Men resort to war, women resort to negotiation, but both can achieve their ends. A man builds a house, a woman makes it a home. A woman bears children, a man supports and protects them. Women and men are, quite literally, the yin and yang of the human experience.

But since the 1960s, and perhaps inspired by the progress women had already made in the West, a new stage of the conflict has developed. The war between the sexes has been weaponized for its political potential. Differences between male and female intentions and approaches stopped being a subject for sober reflection as well as for jokes and comedy. We stopped empathizing with the verbal jousts between Beatrice and Benedick on their way to realizing the attraction between them. We stopped laughing at the extended fight between the flamboyant Petruchio and the irascible Kate on their way to reconciliation.1

In part, I ascribe this weaponization to the underlying politics of the time, which lives on as both a barren root and a hard-shell glaze over our affairs today. The campus revolts of the ’60s started with a reaction to the Vietnam War and the draft, as the street riots in Detroit and the Watts district of Los Angeles were an expression of frustration over delayed civil rights and social progress. But both quickly took on the flavor of Marxist thought, which exploits the dissatisfactions of one group with the perceived successes and depredations of another for political purposes. In original Marxism, the fight was between the producers, the proletarian workers, against the bourgeois merchants and moneyed capitalists, who made their living off the fruits of the workers’ labor. That was class warfare, pure and simple.

The genius of the political activists of the ’60s was to seize on parallels in the differences between women and men, between the black culture and the white. Marxist oppression and its solution—revolution and the abolition of the oppressor class—became the driving theme underlying the feminist and civil rights movements. Class war had become gender war and race war.

Except it doesn’t work that way. In Soviet Russia, Communist China, and every country that took Marxist thought to its bitter end, they could have their revolution, followed by the inevitable reign of terror and civil war, and then the final remaking of society through the triumph and dictatorship of the proletariat. The merchants and the factory owners—and anyone else who supported and sympathized with them, like academics and reactionary government officials—could be tarred and feathered and driven out of town to re-education camps in the countryside. You can always overthrow societies like that, and you don’t actually need an elaborate economic theory to do so. The French had their revolution in 1789 on the strength of reaction to the failings and depredations of the monarchy, the aristocracy, and the church. But that revolution was still mostly about economic power: who had the wealth and ate cake, who lived in poverty and ate bread when they could get it.

Gender war and race war have different roots. Yes, they are about power, but not so much about money, about cake and bread. The extension of Marxist thought into these other struggles is less about real economic control—the gathering, hoarding, and spending of riches—than about ephemeral personal gains like obtaining respect and wielding political influence, and longer-range group ambitions like ordering society and shaping culture. Wealthy women can become active feminists and wealthy African-Americans can make race-based demands without having to analyze their own economic situations or feel a moment’s blush about their underlying lifestyles.

But where does one take these struggles to their end? In a war of the proletariat, you can kill off the capitalists, create a network of workers’ collectives to run the farms and factories, and govern the country with a revolutionary council which will eventually morph into a popular assembly and a politburo reporting to a party general secretary. And all that can work for a while—seventy years in Russia, and maybe a bit longer in today’s China. But what does winning look like in a gender war? Do all the men go into camps for extermination, except for a few kept on breeding farms as sperm donors? Or does procreation of the next human generation proceed by genetic manipulation, to allow for the eventual elimination of adult men, boys, and any babies that might happen to be born male? What is the outcome of a race war, other than absolute segregation into different societies in separate countries—that, or wholesale genocide?

Marxists are radical thinkers—by which I mean they go to the root2 of the problem and seek out ultimate solutions. In Marxist-Leninist economic theory, there is no place for, nor quarter given to, merchants and capitalists. And serious people—misguided, in my view, but still absolutely serious—can envision a society run on completely communal principles. They can see examples of it in the Israeli kibbutz and the hunter-gatherer tribes out of which human civilization arose. They sense echoes of it in the feudal barony—but minus the feudal lord. They can even call this communist state of affairs a utopia.3

But how do you apply a final solution—what is your intended end-state—in a conflict between two necessary and complementary parts of a functioning society or family structure, like men and women? Or between and among groups of people who merely exhibit minor genetic, physical, and cultural differences—but who remain fully functional members of the species H. sapiens—differences which have become magnified in the discussions of race until they approach the dreaded condition of “the other”? What utopia has all women and no men? Or people of all one, totally homogenous, ethnic and cultural representation with no differences among them?

I am a believer in balance, in dynamic tension, in competition between people in pursuit of their own goals and ambitions. By competing with each other over the production and consumption of goods and services, we can hope to set a fair price at useful volumes of supply. By contending with each other over heartfelt beliefs, we can hope to find an acceptable truth to which most people can subscribe. When aims, viewpoints, and ideas remain open to discussion—with an admixture of charity, good will, and personal reflection—then we can hope to create a balanced society that satisfies, if not actually pleases, everyone and leaves no one out in the cold. And, given the nature of human imagination and creativity, that competition, contention, and discussion will go on forever, finding only temporary balance and peace before then dissolving again into new contentions and discussions. So be it. Human history is a roiling equilibrium.

But in the war between the sexes and among the races, carried out according to Marxist principles that are directed toward a distant utopia, that balance amid continuing discussion has been broken. Rather than contention and competition—with an admixture of humor and charity—certain feminists, serious people, are now making proposals, without a shred of irony, about the subjugation and annihilation of men. In their published writings and tweets, men are all bad and useless, and women are nothing but good and virtuous. A similar absolutism has entered the discussion of race: just kill them all and start over again.

To say this is not healthy thinking is an understatement. This is not even sane. And what I fear is that thoughts and words—attractive and fulfilling as they may seem—can have consequences. If too many people subscribe to, believe in, and try to act upon the writings of these serious people, it will end badly for all of us. We saw it in the writings—and ravings—of Lenin, Hitler, and Mao. We may one day see it again in the writings of de rigueur feminists and race separatists if we cannot somehow defuse the situation.

1. However, the reconciliation is all on Petruchio’s terms—the taming—and leaves Kate subservient to her husband. That tends to stick in the throat of any modern person.

2. From the Latin radix, or root.

3. See When Socialism Works from October 10, 2010.

Sunday, January 28, 2018

Coming Full Circle


I finally got to watch the movie The Circle on Netflix. It’s a chilling vision of where immersion in social media might take our culture, and that makes it a cautionary tale. The vision of people living in full public exposure, where “secrets are lies,” where personal privacy is a condition of public shame, reminds me of Yevgeny Zamyatin’s dystopian novel We, which I’m sure The Circle’s original author, Dave Eggers, must have read.

The movie is a depiction of peer pressure brought to perfection. By binding all of connected humanity into one hyperactive zeitgeist, this fictional social media software creates a howling mob. But does my analysis go too far?

I am reminded of the nature of the culture we’ve created in America. We are cooperative as much as we are competitive, especially among the younger generation brought up with the teachings of Howard Zinn and the new “woke” enlightenment. When socialism finally comes to this country, it won’t be with the armbands and jackboots of Nazi Germany, nor with the grim-faced commissars of Soviet Russia. Instead, we will coerce people with the smiling positivism of the Romper Room lady: “Be a Do-Bee, not a Don’t-Bee!”

Smiling, positive peer pressure—supported by the veiled coercion of local laws and proprietary rulemaking—has accomplished a lot in my lifetime.

For example, when I was growing up almost everyone smoked,1 and they smoked almost everywhere. My parents were both pack-a-day Camel smokers. I started smoking a pipe—never cigarettes—during my freshman year in college, and this was three years after the Surgeon General’s report in 1964 on smoking and health. We lit up in restaurants, in cars, in every public space, and in every room in the house. The ashtray was a familiar artifact in every home and office.

To counter this, there was no national ban on cigarettes, no repeat of the Volstead Act enforcing the prohibition on alcohol. Instead, we got the Surgeon General’s warnings printed on every package of cigarettes. We got increased excise taxes on every pack, which made smoking prohibitively expensive for many people, especially the young. We got laws making the sale of cigarettes to minors illegal, in an attempt to stop them from ever taking up the habit. And the age limit in California was recently raised from 18 to 21 years old, comparable to limits on alcohol sales.

To change public habits and perceptions, we started by introducing smoking and non-smoking sections, at first in restaurants. This was followed—at least in California, and soon after almost everywhere—by complete bans on smoking in restaurants, bars, and other public spaces, including indoor workplaces. As of 2008, you cannot smoke in a car with a minor present, under penalty of a $100 fine. As of 2012, California landlords and condominium associations can ban smoking inside apartments and in common areas. And smokers who go outside a public building to practice their vice must stay at least 20 feet from an entrance doorway, an open window, or even a window that can be opened.

This flurry of taxes, laws, and private rulemaking was accompanied by advertising campaigns that showed the damage smoking did to people’s lungs, hearts, and skin. It was supported by statistics about the occurrence of cancer and other life-threatening diseases, not just for those who smoked but for anyone nearby. Smoking became not just an unattractive, dirty, and smelly habit; it became a form of assault on the health and safety of others. In 1998, the attorneys general of 46 states sued the country’s four largest tobacco companies over tobacco-related health care costs that their states paid under Medicaid; they reached a settlement under which the companies curtailed their advertising efforts, agreed to pay perpetual annual amounts to cover health care costs, and pledged to help fund the efforts of anti-smoking advocacy groups.

Sure, you can still buy cigarettes in this country—if you have the money, and a place to enjoy them, out of sight of every other living person.

In my lifetime, I have seen a parallel revolution in the handling of garbage. It used to be that everything you didn’t want went to one of two places. Your food wastes went into the garbage disposer in the kitchen sink, to be ground to pulp and flushed into the sewer lines or the septic tank.2 Everything else went into a paper grocery bag—later a plastic grocery bag—in the can under the kitchen sink, to be emptied into the bigger galvanized cans in the garage, taken to the curb once a week, picked up by the municipal garbage truck, and hauled away to some distant place out of sight and out of mind. Where the hauling ended up was a matter of local practice. Some communities dumped their “municipal solid waste” in landfills. New York City loaded its waste on barges, hauled it out to sea, and dumped it into the abyss of the Hudson Canyon.

Municipal solid waste was anything you didn’t want or no longer needed. It included food scraps you couldn’t be bothered to feed into the garbage disposer; waste paper, newspapers, wrappers, and corrugated boxes; soda cans and any pop bottles some kid hadn’t already taken back to the store for the two-cent redemption; broken furniture and appliances; old paint and construction debris; and dead batteries, worn-out car parts, and used motor oil and filters. If you were through with it, it went out to the curb.

And then the recycling movement started, prompted at first by environmental concerns about landfills and pollution.3 At first, we saved our wastepaper to be recycled into newsprint and cardboard boxes—in an effort to save the trees.4 Then we were recycling those aluminum cans, any glass and plastic bottles that didn’t have restrictive redemption value—like California’s five and ten cents on plastic soda bottles, based on size—and other distinguishable and potentially valuable materials. Originally, each of these commodities had to go into its own bin. But now the recyclers take paper, metal, and various grades of plastic in the same bin, because they will shred it all and separate the bits with magnets and air columns.

In the greater San Francisco area, we began seeing metal tags on storm grates reading “No Dumping – Drains to Bay,” usually with a decorative outline of a fish, to keep people from putting motor oil, battery acid, and other pollutants into the water supply. And it’s illegal now to dump these things—along with dead batteries, discarded electronics, and broken furniture—anywhere but at certified recycling centers. More recently, in 2016, California began requiring businesses and housing complexes to separate and recycle their organics—the things we used to put into the garbage disposer—to be hauled away for composting.

So now, instead of dumping everything down the sink or into the garbage can, as we did when I was a child, we rinse out our food and beverage cans and bottles and keep them in the closet until the recycle truck comes. And we put our coffee grounds, vegetable peelings, and table scraps in a basket—my wife used to keep them in the freezer, to cut down on odors and insects, a practice I continue—and hold them until the composting truck comes. We hand-pick, sort, and pack our garbage to go off to the appropriate disposal centers.

All of this change was accomplished not just by laws and fines but more directly with a change in social norms, similar to the anti-smoking campaigns, intended to make those who failed to sort and wash their garbage seem not just careless but derelict in their social duties—a Don’t-Bee—subject to public ridicule. Of course, the movement has not been without its downsides. People who can’t be bothered to take their garbage to the appropriate recycling centers now dump it at midnight on any vacant lot or in someone else’s yard. And alert criminals, who know what the recycling centers will pay for valuable commodities like refined copper, are stripping the wiring out of public parks and farmers’ pumping stations, also at midnight.

Social engineering through a mixture of taxation, regulation, and peer pressure is now an accepted tool in this country. Smoking and garbage sorting are subject to popular taboos. Similar peer pressure—if we let it happen—will soon govern what, where, and how we can drive; where and how we can live; what we can eat; and how often we must exercise. Social media, with its built-in opportunities for social preening and its chorus of social scolds, will only accelerate the process of making us all Romper Room children.

China, which once sought to ban the internet, has now co-opted it—along with aggressive DNA profiling and new technologies like artificially intelligent facial-recognition software—to institute a Social Credit System as a means of public surveillance and control. Under their plans, by 2020 the system will index, across 1.4 billion people, each citizen’s social reputation and standing based on personal contacts made, websites visited, opinions published, infractions incurred, and other individual performance variables. People with high ratings and demonstrated social compliance will be rewarded with access to good jobs, financial credit, travel priorities, and favorable living conditions. People with low ratings and recorded infractions will be massively inconvenienced and their futures curtailed.
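To see the mechanics of such a scheme in miniature, here is a minimal sketch in Python of how a reputation-scoring system of this kind might be structured. Everything in it is hypothetical: the record fields, the weights, and the privilege tiers are invented for illustration and describe no actual system, Chinese or otherwise.

```python
# Hypothetical sketch of a social-credit-style scoring scheme.
# All fields, weights, and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class CitizenRecord:
    contacts_flagged: int   # contacts with low-rated individuals
    posts_flagged: int      # published opinions marked as infractions
    infractions: int        # recorded legal or administrative violations
    compliance_acts: int    # recorded acts of demonstrated compliance

def social_score(r: CitizenRecord, base: int = 1000) -> int:
    """Fold weighted behavior counts into a single standing score."""
    return (base
            - 10 * r.contacts_flagged
            - 25 * r.posts_flagged
            - 50 * r.infractions
            + 5 * r.compliance_acts)

def privileges(score: int) -> str:
    """Map the score to a tier of rewards or restrictions."""
    if score >= 1050:
        return "priority travel, favorable credit"
    if score >= 900:
        return "ordinary access"
    return "travel and credit restricted"

citizen = CitizenRecord(contacts_flagged=2, posts_flagged=1,
                        infractions=0, compliance_acts=8)
score = social_score(citizen)
print(score, "->", privileges(score))   # 995 -> ordinary access
```

What even this toy version makes plain is the design’s essential move: every logged behavior feeds a single number, and that single number then gates jobs, credit, and travel.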

I’d like to say it can’t happen here, but I don’t know anymore. I can say that, although I try to live a good and proper life, obey the rules, pay my taxes, and not hurt anybody’s feelings, this kind of surveillance and control is out of bounds in a free society. If social media turns into the kind of howling mob predicted by The Circle, then I am out of here. I will take my socially unapproved weapons and move to some socially unacceptable place to join the next social revolution. I will become the ultimate Don’t-Bee.

1. Except for my late wife. I met her just after I had quit smoking, for the last time, and a lucky thing that I did. Her expressed view was “kissing a smoker is like licking a dirty ashtray.”

2. My mother, having trained as a landscape architect, set aside a place at the far edge of the back yard for her compost pile. (Now that I think of it, her mother kept a place like that, too.) Here we carried out and dumped our grass clippings, coffee grounds, vegetable peelings, egg shells, and anything else organic that would decompose into a nice, rich mulch for the garden. But not the raked leaves: those we piled against the curb and ceremoniously burned as an homage to autumn—except you can’t do that anymore because of air pollution laws.

3. I almost became part of this movement professionally. Back in the 1970s, when I was between publishing and technical writing jobs, I answered an ad from a man who had a system for “mining” municipal solid waste. He had studies showing that garbage was the richest ore on Earth, with recoverable percentages of iron and nickel, copper and brass, aluminum, and glass that exceeded the per-ton extraction at most modern mines. After separating these valuable commodities, he proposed pelletizing the paper, organic solids, greases, and oils into a fuel that could be burned at high temperature to generate electricity. Garbage was, in his view, literally a gold mine. The only hitch was that, instead of hauling dirt out of an open pit to a nearby processing plant or smelter, this new kind of ore had to be collected in relatively small amounts by sending a truck house to house. It wasn’t the value of the thing itself that was in question but the small quantities and the logistics involved.

4. Of course, no one harvests old-growth redwoods and pine forests to make paper, not even high-quality book papers and artist’s vellum. As a young child on family boating trips to the upper reaches of the Hudson River, I saw pulp barges coming down to the paper mills around Albany from pulpwood farms in Quebec. Our paper is grown on plantations as a renewable resource, like Christmas trees. The mantra about “saving a tree” is akin to abstaining from eating corn in order to save a cornstalk. We farm these things.

Sunday, January 21, 2018

Hooray for Humanity

White human

A Facebook friend recently posted a Friedrich Nietzsche quote: “The world is beautiful, but has a disease called man.” When I was in college—which is to say when I was very young and naïve—I had a twenty-minute love affair with Nietzsche’s ideas after reading him in a philosophy course.1 Luckily, I quickly came to my senses. Dyspeptic German philosophers don’t do anybody any good. Although I agree with him that the world is beautiful, the second part is horse pucky.

It is a core of my beliefs that human beings are the best form of intelligence and the hottest thing going for about three parsecs in any direction from here. Many other species on this planet exhibit intelligence: most species of dolphins, whales, elephants, octopi, and a number of smaller mammals. Hell, even my dog has a basic, recognizable, communicable form of intelligence. Intellectual capacity is not an on-off switch but a spectrum. On that basis, humans are at the top of the heap, the far end of the scale, the smartest of the bunch.

Why? Is that my human chauvinism speaking?2 Not at all. We’re at the top because we are the ones doing the studying and evaluating among all these other species. We are trying to understand dolphin and whale communication systems, decipher their languages, and determine what—if anything—they are communicating. So far, we have a lot of tantalizing ideas, but no obvious consensus. And that’s not because we aren’t really trying, don’t really care, or haven’t put some of our best minds into the study.

For other animals—and I use the term advisedly—we are still gathering data and, generally, reacting with surprise. Elephants form well-ordered societies, have strong familial and interpersonal relationships, and exhibit an artistic sense. Octopi, which are the only mollusks we know with a big brain, are clever and almost intuitive. However, all of these attributes are components of human intelligence and social order. So maybe, in judging the nature and skills of these animals, we are anthropomorphizing after all—that is, seeing and interpreting a human element where none exists.

All of these animals have the capacity to react to their environment. They can observe in a limited fashion, analyze to some extent, learn and remember, and move away from danger and toward safety and stability. These are qualities that most other animals—think of the wild panic during a stampede ahead of a brush fire—have only in limited amount and that plants have not at all. But complex as dolphin speech might be, nurturing as elephant mothers might be, and clever as the solitary octopus might be, none of them has developed any kind of technology beyond using sticks to poke at an ant hill or a flat rock upon which to break a seed pod. None of them is going to build a radio or a rocket ship anytime soon.

You can argue that advanced technology needs the right environment and the right bodily morphology. We can make radios because we don’t live in the sea, where the study of electronics is pretty much a lost cause. And we can make sophisticated tools, weapons, and rockets because we have grasping hands, opposable thumbs, and limbs that provide usable amounts of leverage. Dolphins and whales don’t have this. Octopi have dexterity and strength, but nothing as precise as ten fingers working in coordination.

Humans also have self-awareness. We know what we are—at least most of us do, on a basic level—and we recognize ourselves as individual beings separate from each other and from our group. Dolphins and some whales have this: if you strap a funny hat on a dolphin’s sleek head, then provide its tank with a mirror, the animal will go over and regard itself to see how it looks. The dolphin knows itself as unique among others of its kind. But no amount of coaxing can get my dog to see that the “other dog” in the mirror is actually her own reflection. Dogs might have “we” and “us” in their inner vocabulary, but their sense of “I” and “me” is limited to the point of non-existence.

We humans value dolphins, whales, and dogs—indeed, most animals that aren’t trying to eat us—because we think of them as pure souls, without the impulse to break rules, choose wrongful actions, and exhibit variable behaviors including meanness and cruelty. We think that the ability to do evil is the special nature of humanity, and that this calculated ability makes us less than the animals. But in reality, all animals are operating according to their natures and by instinct, not by reasoning. When an elephant goes rogue, or a tiger becomes a man-eater, it is not because the animal has chosen to break a rule or violate some internal code of ethics. It is merely reacting to some environmental factor or hormonal conflict that may be complex but is ultimately natural, not reasoned.

It is the special strength of humans—derived from our ability to see ourselves as separate individuals and reason about our place and our status in the group—that we can have a system of morality, and that we can choose to break with it for non-environmental, non-hormonal, non-natural reasons. We can see another person’s property, understand their rights to it, and still decide that it would be better in our possession. We can see another person take action, think deeply and feel strongly about the implications of that action, and take complementary action ourselves—perhaps supporting, perhaps opposing, sometimes even killing that other person—to effect an outcome we believe to be right and proper.

Human morals don’t come from nature or instinct. They derive from the shadow-play of ideas and meanings, a model of the external world we see around us, which takes place inside our heads. And then this model comes back out into the world, mixes with the models and ideas of other humans around us, and takes shape in the form of customs, laws, courts, and punishments upon which a group of people can agree, under which they can operate, and in obedience to which they may lose their property and even, sometimes, their lives. Dolphins and elephants may have intricate and supportive social structures—so do bees!—but none of them, to our knowledge, participate in deliberative bodies, hold courts to try cases of transgression, or exile and execute prisoners.

This is not a failing of humans, that we can choose to do wrong and must punish our perpetrators, but instead the sign of a deeper intelligence than any other animal displays. The fact that we can have personal failings, and then feel bad about them, and work to correct them, is a form of strength. We are the animal that thinks about itself, judges itself, and strives to be and do better. We are the animal that can learn and evolve its own behavior, its society, and its morality without first experiencing a genetic mutation that drives our adaptability to the environment or the interplay of our brain chemistry and bodily hormones.

It is popular these days to see primitive man, unsophisticated humans, and people who live in the wilderness as existing in some superior state of being. Jean-Jacques Rousseau believed in the “noble savage,” the human and the social group with no concept of sin, of right and wrong, who could therefore live in a state of harmony with each other and with nature. Such a person—who would only be possible in a human body without a functioning concept of self—never existed. Societies living in the distant wilderness, untouched by Western colonialism and Judeo-Christian traditions, are no more pure, polite, loving, and in tune with nature than modern humans living in steel and concrete cities under complex social rules. Tribal clans and hunter-gatherer societies are rife with taboos, tensions, jealousies, and murder, just like any other human association. And any respect they have for nature as a social value comes from the same reasoned sense of self-preservation that drives a modern environmentalist to sort his municipal solid waste and stop dumping his chemicals in the river.

It is also popular to imagine that any other human-scale intelligence which might come to Earth from out among the stars will represent pure souls without the taints of aggression, greed, and anger.3 After all, any species that can cooperate to master the incredible energies needed to drive ships across those vast distances—and not blow themselves up in the process—must be superior to grubby old humankind, right? But again, monocultures, groups that live in perfect harmony, that neither permit nor experience differences among their members, that have no concept of self as different from other, are unlikely to be very creative. Societies advance by having some individuals invent and rebel. Individuals imagine different ways of doing things, question the status quo, and occasionally work against it. Perfect societies are static societies. Static societies do not invent fusion engines and go off on interstellar travels.

Only by the darkness in human nature can you see and value the light. Only by wrong can you identify the right. This is a basic principle. Conflict and compromise among individuals in groups are basic elements of evolution. As it is here on Earth, so it will be out among the stars.

Have human beings sometimes acted barbarously? Have we experienced wars and genocides? Have we been too often careless with our environment and harmed our planet’s beauty? Of course. But along with those who act selfishly, hurtfully, and carelessly, we have people who can observe in broad dimensions, analyze in depth, learn and communicate their findings with sophistication, and move away from danger and toward safety and stability. That we can do this as a group, discussing morality, shaping customs, and writing and enforcing laws, is the special strength of human beings. We can act together by operationalizing that dream-model we make of the world. We can act outside of our basic, instinct- and hormone-driven natures.

We are not a plague on this planet. We are its saviors and shapers.

1. And when I was twelve, I had a brief fascination with Marx’s dictum: “From each according to his abilities, to each according to his needs.” It sounds so good, right, and proper—until you think about, or learn about, how economics actually operates. About human motivations, too. Like the planet Miranda in the movie Serenity, where the atmosphere was suffused with government-supplied G-23 Paxilon Hydrochlorate, a world truly governed by Marx’s principle would simply lie down and die for lack of trying. In the real world, however, it takes a heap of intentional killing to bring Marxism to that point.

2. Well, some chauvinism. I am not part of the current intellectual movement that hates itself, whether because of original sin, white privilege, Western colonialism, or environmental guilt. It is a basic policy of mine not to vote for, side with, or support people who want to see me and mine dead. That’s just common sense.

3. Unless they are a race of brutal conquerors, for whom humans represent some kind of nuisance or vermin to be eliminated, as depicted in alien-encounter movies that also partake of horror movies. Personally, I think anyone who crosses the gap between the stars to land here is going to be just as fascinated by us humans as we will be by them: both of us will have something strange and new to study. Of course, accidents may happen, and misunderstandings and feuds may break out between the two intelligent species. But I don’t think you have to travel a dozen light-years to pick a fight or to find valuable resources and steal them.

Sunday, January 14, 2018

Public Lives and Private Lives

People puppets

A whole slew of public figures—mostly politicians, actors, and journalists, but what other kind of “public” is there anymore?—have recently been brought down by accusations of boorish behavior, inappropriate touching, lewd comments, harassment, and other activities that border on, but usually don’t meet the qualifications for, the Rape of the Sabine Women.

I don’t condone this behavior. I believe that women are not disposable pleasures but fully actualized people who should be treated with respect and courtesy, especially the closer a man gets to them and the more intimate their relationship becomes. If a man wants to interact with a woman at the closest level, it should be as friend, companion, and lover—not an object of pleasure. A man who deals intimately with a woman, entering into a position of responsibility for her, and then mistreats or abandons her is lacking in commitment and a sense of personal honor. A man who willfully injures a woman either verbally or physically shows himself to be a diminished person.

But that said, something has changed in our world having to do with relations between the sexes. The 1960s and the sexual revolution, driven by the convenience of the pill and other forms of birth control, and clouded by a haze of pot smoke, cheapened and trivialized love and commitment in the interest of physical pleasure. Self-restraint, caution, and deeper feelings of caring and responsibility were thrown to the winds. If it felt good, you did it, and thought about the consequences later.

Now, in the 2010s, the diminishment of sex has come full circle. Using sexual activity for self-gratification (on the man’s part) and for personal enhancement and career survival if not advancement (on the woman’s part) has become commonplace. It is the quid pro quo of the entertainment and journalism industries and in power politics. In fact, the circle has turned so far that we have suddenly entered a newly puritanical era. In the space of the past year, dating to the upsets of the 2016 election, but with significant outbreaks from the years before, a man’s sexual history and his boorish behavior have become worse than criminalized. His touches and gropings, his comments, his unwanted moments of closeness—let alone any calculated rapes—have now become grounds for public humiliation, economic execution, and spiritual exile.

When I was growing up, in the decade and a half before the sexual revolution, sex stayed in the bedroom. It was nobody’s business what went on in a private home and behind closed doors. Yes, there were laws about incest, sodomy, and other public taboos, but the cops never broke down the bedroom door looking for infractions between consenting adults. You had to perform the unlawful act in public—or solicit it from a member of the vice squad—in order to get arrested. And although most people weren’t exactly comfortable with homosexual activity and other relationships that were considered vices at the time, they tolerated the situation so long as the acts remained between consenting adults and stayed behind closed doors, along with the rest of human sexual activity.

The press offered public figures a measure of polite silence and turned a blind eye to their personal proclivities and weaknesses. That is a far cry from the howling chorus we have today. Yes, John Kennedy had affairs, most notably with the actress Marilyn Monroe. And who can say that all of those affairs with a popular president were free of the taint of gross or subtle coercion? Kennedy also suffered debilitating back pain for most of his life and probably was dependent on opioid painkillers. Yes, Dwight Eisenhower took a mistress while he was stationed in Europe as Supreme Allied Commander. Yes, Franklin Roosevelt had a mistress while in the White House. Roosevelt was also crippled with polio and confined to a wheelchair that his aides artfully concealed. All of these failings were known to insiders and to the press. But none of them was made explicitly public, because wiser heads decided it was proper to judge the man on his political skills and his public policies, not his private life.

I think the hue and cry started with Bill Clinton.1 His lecheries with staffers, and even women with whom he only briefly came into contact, became the stuff of public ridicule in the media. His reaction to questions about these lecheries became the stuff of political obsession by his opponents, which culminated in his impeachment on charges of perjury and obstruction of justice (he was acquitted in the Senate). Clinton’s private life became a political liability.

Was Bill Clinton a sexual predator? Probably. Are the men who are now being publicly tarred and feathered sexual predators? On a case-by-case basis … probably, many of them. While I think sexual predation is a low form of behavior and personally dishonorable, I also understand how the world works, and how it has always worked. The pattern goes back to the Bronze Age and probably to our earliest hunter-gatherer beginnings—as soon as one man in the tribal band became socially powerful and started being treated as a decision-maker and chief.

In any given social or interpersonal situation, one person will always have the advantage and psychological, if not actual physical, power over another. Usually, the person who wants something puts him- or herself in a subordinate position to, and so becomes vulnerable to, the person who can bestow the gift, benefice, or advancement that is desired. Similarly, the partner in a relationship who loves the most subordinates him- or herself to the person who cares the least. Men in public office who can grant favors, even if it is just proximity to the wheels of government, or those who can make or break careers, like a movie producer, will always have power over those who need favors or want to boost their careers.

In the human world, unfortunately, the women who need a favor, are attracted to power, or have a career to make usually end up putting their sexual persona, or sometimes just their personal submissiveness, at the service of powerful men who can grant those favors and build those careers. This is the disadvantage of being a woman, not just in 20th-century American society, but in all societies going back to hunter-gatherer tribes. Still, it can also be an advantage because, for most adults in our sexually liberated age, contrived intimacy is no longer that big an issue and need have no lasting consequences. A man in similar need of favor or advancement would have to compensate the power broker with a pound of uncut cocaine, a box of Cuban cigars, unlimited personal loyalty, or performance of some legally or morally questionable service.

There is no way to stop this kind of transaction. It is built into human nature, which derives from the social interactions of all primates and all mammals who happen to gather in groups or herds. The big gorilla, the alpha male—or the alpha female—gets what he or she wants. The only way to stop this transactional relationship is to eliminate all positions of power, all distinctions between people, and the basis of all human interactions. Either that, or impose horrendous penalties on all persons engaging in extramarital sexual activity—and enforce existing laws about cocaine and Cuban cigars.

Or, we could return to a culture that holds a man—or a woman—to a standard of personal behavior and expects him or her to follow a code of personal honor. We don’t have to turn a blind eye to the mistresses and the addictions. But we also don’t have to bribe doormen and sift through other people’s diaries looking for infractions, because we can trust that most members of society, even those in positions of political or economic power, are decent, honorable, and living according to such a code.

You can’t enforce decency by civil or ecclesiastical law. But you can make it personally attractive and … perhaps even expected once again.

1. However, the public shaming of Colorado Senator Gary Hart, over an extramarital affair that surfaced during his bid for the presidential nomination in 1988, was a precursor to the outing of Clinton’s follies. Treatment of the Hart affair set a precedent for the next stage of journalistic voyeurism.