Sunday, December 29, 2019

Writing Ourselves into a Box

Time warp

Maybe … contrary to our theories, the universe is not uniform throughout, or not at all scales. Maybe gravity is not a constant at all scales but has different values at different scales.

We know that the force of gravity appears to be nonexistent at the subatomic scale, apparently playing no part in quantum mechanics. We can measure the force of gravity and use gravity equations at the planetary, solar, and interstellar scales, where it governs the attraction between bodies with which we are familiar. But we observe that, at the galactic scale, the gravity of the matter we can see is apparently not strong enough to produce the effects we do observe. Stars in a spiral galaxy spin as if they were stuck to a platter instead of orbiting freely, which suggests each galaxy contains more matter than we can account for by the stars that shine and the planets, brown dwarfs, and black holes that we presume must accompany them.
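To see why this is so striking, here is a minimal sketch of my own (not from the post; the mass and speed figures are rough, assumed values for a Milky Way–sized galaxy): Newtonian gravity acting on the visible mass alone predicts orbital speeds that fall off with distance, while the speeds actually measured in spiral galaxies stay roughly flat out to the edge of the disk.

    # Minimal sketch (illustrative only): predicted vs. observed orbital speeds.
    # Newtonian gravity around a central mass M gives v(r) = sqrt(G * M / r),
    # so speeds should drop with radius; measured speeds stay roughly flat.
    import math

    G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30             # solar mass, kg
    KPC = 3.086e19               # one kiloparsec, meters

    M_VISIBLE = 1.0e11 * M_SUN   # assumed visible mass of the galaxy
    OBSERVED_FLAT = 220.0        # km/s, typical measured speed at large radii

    def keplerian_speed(r_kpc):
        """Orbital speed in km/s predicted from the visible mass alone."""
        return math.sqrt(G * M_VISIBLE / (r_kpc * KPC)) / 1000.0

    for r in (5, 10, 20, 40):
        print(f"r = {r:2d} kpc: predicted {keplerian_speed(r):5.1f} km/s, "
              f"observed ~{OBSERVED_FLAT:.0f} km/s")

Run it and the predicted curve falls from roughly 290 to 100 km/s while the measured value holds steady, and that gap is exactly what either dark matter or a modified gravity law has to fill.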

Instead of considering variable gravity, we hypothesize that something else must be at work: a substance called “dark matter,” which does not interact with normal matter or electromagnetism at all but does exert a measurable gravitational pull—and only, apparently, at very large scales. The effect on galaxies is so apparent that we credit this “dark matter” with making up about 85% of the matter in the universe and roughly a quarter of its total energy density. All this for something we cannot see, touch, feel, or detect—only “observe its effects” at very large scales.

Perhaps we are wrong in our assumptions about the gravitational constant being fixed at all scales larger than that of quantum mechanics.

This kind of “wrong” thinking—if it is wrong, and perhaps “dark matter” actually does exist—has happened before. During the 19th century, geologists debated between “catastrophism” (the theory that geological changes in the Earth’s crust were created by sudden and violent events) and “gradualism” (the theory that profound change is the cumulative product of slow but continuous processes). Gradualism would be the slow accumulation of sediments to form rocks like sandstone and shale. Catastrophism would be sudden events like volcanic eruptions or the carving of the Snake River Canyon when the natural dam holding back Lake Bonneville gave way about 15,000 years ago. What has since been resolved is that either theory on its own is too constricting, too reductive, and that both kinds of processes played their part in shaping the Earth’s surface.

This is probably not the case with gravity. A variable gravitational constant vs. the existence of dark matter is probably an either/or rather than a both/and situation. One kind of physics that explains the observations does not necessarily make room for a second kind to explain them. However, it wouldn’t surprise me if each galaxy contained more normal matter that we can’t see by either the ignition or the reflection of starlight—that is, black holes, brown dwarfs, and rogue planets—than we have habitually counted. And still there may be something going on with the effects of gravity at the very small and the extremely large scales that is out of whack with the gravitational constant we see at the planetary and stellar scale.

Time and distance are variables that we have only been seriously considering over the past hundred years or so—ever since Edwin Hubble exploded the idea that the local stars in the Milky Way galaxy were all that comprised the universe and showed instead that the fuzzy patches (“nebulae,” or clouds) on photographic plates were actually distant galaxies not much different from the Milky Way, and that they were all receding from us. In one conceptual leap, the universe, the cosmos, all of creation, became really, really—add a few exponents here—big.

But the universe did not become correspondingly old. We still had various measurements and computations putting the age of the oldest stars and the universe itself at approximately thirteen billion years. That’s about four generations of stars from the creation of matter itself to the Sun with its disk of gas and dust that contains elements which must have formed in the fusion processes and catastrophic collapse of older stars. By winding back the clock on the expansion, the scientific consensus came up with a universe that started in the explosion of, or expansion from, a single, tiny, incredibly hot, incredibly dense—add more exponents here, with negative signs—point. That singular point contained all the matter we can see in the hundreds of billions of galaxies within our view.

Because that view comprises a distance that the farthest galaxies could not have reached in thirteen billion years, even when receding from a common point at the speed of light, a scientific consensus originating with Alan Guth has formed around an “inflationary” universe. At some time, the story goes, at or near the very earliest stages of the expansion, the universe blew up or inflated far faster than lightspeed. Perhaps from a hot, dense thing the size of a proton to a cloud of unformed matter probably the size of a solar system, the whole shebang expanded in a blink—whereas a photon from the Sun’s surface now takes about five and a half hours to reach Pluto.
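As a side check on that last figure, here is the arithmetic (my own, using Pluto’s average distance of about 39.5 astronomical units, which the post does not specify):

    # Quick check: light travel time from the Sun to Pluto at ~39.5 AU.
    AU_KM = 1.496e8          # kilometers per astronomical unit
    C_KM_S = 2.998e5         # speed of light, km/s

    seconds = 39.5 * AU_KM / C_KM_S
    print(f"{seconds / 3600:.1f} hours")   # prints about 5.5 hours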

While I don’t exactly challenge the Big Bang and the inflationary universe, I think we are reaching for an easily understandable creation story here. We want to think that, because we know the universe is now expanding, and because our view of physics is that basic principles are the same in every place and at every scale, and that processes continue for all time unless some other observable force or process interferes with them, then the universe must have been expanding from the beginning: one long process, from the hot, dense subatomic pencil point that we can imagine to the vast spray of galaxies that we see—extending even farther in all directions than we can see—today. And with such thinking, we may be writing our imagination and our physics into a box.

The truth may lie somewhere in between. And I don’t think we’ve got our minds around it yet.

Sunday, December 22, 2019

Market Forces

Cooking the Meat

Cooking the Meat,
from the Bayeux Tapestry

I believe the views of the economy, as expressed in the politics of the left and the right, suffer from a fundamental philosophical difference—and it may not be what you think.

Basic economics is descriptive, not prescriptive. It tries to define and say what people actually do, not what the economist thinks they do or what they should do. In this, economics is a science like biology or physics: it is trying to describe the real world. In this sense, it is neutral about outcomes and concerned only with accurate observation and reporting.

Socialist, communist, or Marxist economics may start with description—after all, Karl Marx’s book was Das Kapital, which started out describing a system that was developing in a newly industrialized 19th-century society. But then he, like all these “-ism” and “-ist” schools of thought, launched into theory and prescriptions about how people should act in order to reach a desirable state, a utopia of fairness, equality, and happiness. In this sense, Marxist and socialist economics try to direct and, in some cases, manipulate outcomes, because their intention is aspirational rather than observational.

The reason for this difference is that market forces—the natural rules governing how people in large, anonymous groups exchange products, services, and units of value—are part of the human condition. Whether you like the human condition or wish to change it will guide your choice of economic theory.

What do I mean by “natural rules”? Here is a simple thought experiment.

Put two strangers on a desert island and let one come ashore with candy bars in his pockets, which can immediately be consumed for nutritional needs, while the other comes with gold bars in his pockets, which can have far greater value once the holder is rescued and leaves the island. And right there—barring a quick, premeditated homicide—the two survivors have established a market. How much gold will one give up for a candy bar? How much candy will the other give up for gold? And bearing on that question are sensations of hunger, expectations of rescue and its timing, and even the life expectancy of either or both parties before that putative rescue happens.

Where one person possesses something of value—either a tangible good or a useful skill—and the other person must have it, or may need it, or might have no interest in it but still possesses the means to acquire it, there will exist a market. And from that market will grow in the human mind all sorts of calculations: of cost and benefit, of current and future needs and wants, of available resources, and sometimes of pure, irrational desire. Along with these calculations will come a whole range of future projections. These mental gymnastics and their playing out in real-life action can be traced, plotted, and described. They are not a theory, although theories about motive and calculation can certainly be derived from human actions. Instead, they are facts to be analyzed and formulated into rules or laws—not the legislative laws of human societies but descriptive and projective laws, like those of physics and chemistry.

Adam Smith, in The Wealth of Nations published in 1776, described how each individual in a society and economy works for his own benefit, maximizing his satisfactions, advantages, and security. But through the collective action of all these individuals—that is, the “invisible hand,” as Smith described it—they achieve positive economic and social ends—that is, the national wealth of which he wrote—without the conscious intention of doing so. Smith was defining a phenomenon, a hidden force, that was already at work. People did not need to read his book and follow his precepts in order to achieve that “wealth of nations.” It just happened and he observed and described it.

Karl Marx, on the other hand—writing The Communist Manifesto in 1848 and Das Kapital twenty years later, almost a century after Adam Smith—started with a description of what was becoming the capitalist system in an industrialized setting, noted the instances in which a machine culture failed to lift everybody out of poverty, and thought to come up with something grander. Unfortunately, although he is regarded as a philosopher, Marx was no real visionary, and his “something” was the communal life of the medieval village.

The economics of the village did not depend on the exchange of money and the investment of capital in factories, but instead existed through the simple exchange of goods and services among individuals known to one another. The farmer raises wheat; the miller grinds it for a share of the crop; the baker makes it into bread for another share; and the cobbler and blacksmith get their daily bread in exchange for making shoes according to the needs of everyone else’s children and horses. That communal system—the isolated feudal castle without the authority of the feudal lord—works well enough on a local scale, where everyone knows and trusts everyone else, as in the Israeli kibbutz or those idealistic communes of the American transcendentalist movement. The feudal system even had an element of future planning and provision: if the miller’s children don’t happen to need shoes this year, the cobbler will still get his bread, because everyone knows that children will need shoes soon enough.

Marx’s error, in my opinion, was in ignoring human nature and believing that what had worked on a local level could be expanded to the national—and eventually the global—economic level. His prescription was not simply that local villages and towns should go back to their feudal roots in isolation, but instead that whole nations could exist on the trust and charity—“from each according to his ability, to each according to his needs”—of the local peasant. That cobblers and factory workers in Kiev would make shoes, not for personal benefit in terms of money wages or a share in the enterprise, but so that everyone in the country who wants shoes will have them. And everyone else in the economy will selflessly make their products—wheat, flour, and bread, along with tractors, milling machines, and bakery ovens—and provide their services—carpentry, plumbing, medical assistance, and legal advocacy—free of charge and in the expectation that everyone else will provide selflessly for their needs. This is the antithesis of Smith’s individual maximizing his own satisfactions, advantages, and security. It is the individual sacrificing for the greater good.

Karl Marx believed that once his system had gotten under way, the state, the governing force, would no longer be needed and would wither away. And people would just go on forever, sharing joyously. Marx was no student of human nature but instead a dreamer who imagined his own utopia. Wherever his ideas have been tried, they required the state to grow stronger and force individuals into line, usually with actual punishments and threats of death. The alternative was to try to create, through psychology and breeding, a perfectly selfless man, Homo sovieticus, which is to say the perfect slave.

Adam Smith’s capitalism, on the other hand, required no state intervention to operate. People will try to maximize their satisfactions and advantages naturally, and they will only sacrifice for those they hold near and dear, their families and closest friends and associates. But we—that is, later economists—have discovered since Smith that unbridled economic action does need some agreed-upon rules. For example, how is society to treat and dispose of the wastes of a large industrial factory or mechanized, monocrop farm, undreamed of in Smith’s time? And those situations, along with the multiplication of advantage enabled by developing science and technology, do require a government to monitor them and enforce the rules. We have also learned that contracts between individuals do not always operate smoothly, and some system of courts and judges is needed to arbitrate disputes. But the underlying economic system Smith described does not require a magical change in human nature, nor does it need an overbearing state to keep turning the crank of command-and-control decisions, endlessly directing what to make and how and where to deliver it, in order for the system to function.

If you believe that human beings can best decide where their own happiness, benefit, and advantage lie, for themselves and their families, then you subscribe to the natural economics that market forces will bring about—although with some social adjustments, rules of exchange, and the courts for enforcement. If you believe that human beings are flawed creatures, unable to decide where their own interest and that of the greater society lie, then you subscribe to the aspirational economics that only a strong state, the dictatorship of one group—the proletariat, the elite, the “best people,” or Animal Farm’s pigs—over all the others, and an eventual rewriting of human nature itself, can create.

I believe the difference is that simple.

Sunday, December 15, 2019

Chaos vs. Predictability

Red dice

Religious people would have you believe that, without a world or a universe ordered by an omnipresent, omniscient intelligence, or God, everything in human life would be meaningless, unpredictable chaos. If that is the case, I believe they don’t understand what chaos could truly be.

With or without a guiding intelligence, we live in a relatively predictable world. Yes, bad things sometimes happen in the lives of exemplary people, but this is not chaos. Storms sometimes blow down your house. Lightning strikes many places and people at random. Sinkholes open at your feet all over Florida and in other places built on limestone. But an intelligent human being can guard against some of these things. If you live in an area with hurricane-force winds, you can build your house out of steel and stone instead of lath and plaster. If the sky is threatening thundershowers, you can stay inside and not go out, because lightning almost never strikes from a clear sky. And if you live in sinkhole country, you can choose to move or survey the ground before you build.

A truly chaotic world would have hurricane winds and lightning bolts come out of a calm, clear sky. Gravity on Earth would be variable, so that if you accidentally stepped into a null-gee pocket, you might be thrown off into space by the planet’s rotation. Chemical bonds would change without rhyme or reason, so that a substance you had identified as food one day would become poison the next.

But, with or without an omnipresent God, we do not live in such a world or universe. Most interactions and events are governed by natural laws that can be analyzed and understood. Most hazards can be predicted and either avoided or insured against.

And then there is probability. While we can guard against windstorms and lightning, there is not much we can do about asteroid strikes. Even small bodies near the Earth and crossing our planet’s orbit are difficult to observe and harder to avoid—and when they hit, it’s going to be a hard day for anyone inside the impact radius. But still, we can study the history of previous asteroid strikes, observe the skies in our vicinity, and make a prediction based on probability about whether it will be safe to wake up and go outside tomorrow.

And if it’s not safe, if there is no prediction to be made based on identifiable probability, does that mean the world is chaos and life is pointless? Or that God, if He exists, does not love us anymore? Or that, if He does exist and still loves us, He must somehow be testing our faith and our resilience, in this best of all possible worlds?

Some things are unknowable. From a human perspective, which is bounded by experience, reason, and imagination, there are dangers in the universe that we can’t know about until they happen, and so we can’t evaluate them ahead of time. A comet falling toward the sun from the Oort Cloud or the Kuiper Belt for the first time and just happening to cross Earth’s orbit at the same time we intersect its path—that would be such an unknown. The universe is also filled with unresolved questions that extend beyond our experience of life on Earth and observations from within our solar system: the presence, effects, and unforeseen future conditions of gravitational waves, dark matter, and dark energy come to mind. For example, we know the universe is expanding, but we’ve only been watching this process for about a hundred years—a snapshot in geologic, let alone galactic, time. Might the expansion eventually speed up, slow down, or reverse itself? Might that change create effects that could be experienced and measured here on the planet or within our solar system? Our physics suggests not—but the reasoning and mathematics behind our physics are relatively new, too.

The long-term futures of the world, the solar system, the galaxy, and the universe itself are all vaguely predictable but existentially unknowable. Is this chaos? Does this condition demand the existence of a God who knows, understands, and might deign to tell us one day, perhaps through a prophet or a revelation? In my book, probably not.

The world I was born into, or that my parents and teachers prepared me for, is predictable in the short term: what errands I will run this morning; what I might plan to have for dinner tonight; where I’m going to be on Thursday, or next week, next month, or even next year. But in the long term all bets are off. I could die in a motorcycle crash or be struck by lightning or get hit by a bus anytime between now and then—although I would hope to be able to watch out for and guard against any of these eventualities. But still, that uncertainty is all right. Because I was born with the ability to reason in one hand and the capacity for hope in the other, but otherwise I was and am as naked as a clam without its shell. As are all of us.

Nothing is promised. Everything is ventured. Sometimes something is gained. And as Robert A. Heinlein is supposed to have said: “Certainly the game is rigged. Don’t let that stop you; if you don’t bet you can’t win.”

Sunday, December 8, 2019

The Sorcerer’s Apprentice

The Sorcerer’s Apprentice

“The Sorcerer’s Apprentice”

Just about eighty years ago now, in 1940, Walt Disney produced the animated film Fantasia, which included as a short segment, set to the haunting orchestral music of Paul Dukas, “The Sorcerer’s Apprentice.” The story is a cautionary tale, ostensibly about students trying to run before they can walk and the virtues of doing your own housework. But I wonder if any of those involved in the storyboarding and animation thought it would become advice for future computer programmers—a job that, at the time, had not yet been invented.

The first lesson for a programmer is: Debug your code. Don’t release a program on the world—or to your customer base—before it has been debugged, tested, debugged again, beta-tested with select and knowledgeable users, and then debugged a third time. I know from working not just with new programs but with a new application on an existing and well-understood programming platform that an application must go through successive rounds of testing and debugging before its release. And that testing must include a range of unusual situations—some of them clearly not meant to be encountered during regular use—that might break the program or cause the user to obtain faulty information if not actual harm. Mickey Mouse putting on his master’s magic hat, waving his arms at a dormant broom, and getting a response hardly counts in this testing process.
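As a toy illustration of that testing habit (my own sketch, nothing to do with Disney’s animators or any real product), even a trivial routine deserves tests that poke at situations it was never meant to see:

    # Toy sketch: test the odd cases, not just the happy path.

    def fill_fraction(volume, capacity):
        """Return how full the cistern is, clamped between 0.0 and 1.0."""
        if capacity <= 0:
            raise ValueError("capacity must be positive")
        return min(max(volume / capacity, 0.0), 1.0)

    def test_fill_fraction():
        assert fill_fraction(50, 100) == 0.5    # ordinary use
        assert fill_fraction(150, 100) == 1.0   # overflow gets clamped
        assert fill_fraction(-10, 100) == 0.0   # nonsense input gets clamped
        try:
            fill_fraction(10, 0)                # clearly not meant to happen
        except ValueError:
            pass
        else:
            raise AssertionError("expected ValueError for zero capacity")

    test_fill_fraction()
    print("all tests passed")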

A second lesson: Make sure that your DO and DO-WHILE loops are properly defined. These loops, which govern actions repeated a certain number of times or continued while a certain condition persists, are powerful programming tools. But failing to set them up properly, so that the repetition stops at the right count or when the governing condition—in Mickey’s case, when the parameter “cistern level” reaches “full”—is met, can lead to disaster. The fact that Mickey’s enchanted broom went on filling the cistern and overflowing into the laboratory indicates a faulty DO loop.
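Mickey’s bug translates into a few lines of code. Here is a minimal sketch of my own (with an assumed capacity of 100 buckets) showing the runaway loop and the fix, which is simply a termination condition the loop actually checks:

    # The broom as a loop: the fix is a condition checked on every pass.
    CISTERN_FULL = 100.0   # assumed capacity, in buckets
    BUCKET = 1.0

    # Faulty spell (commented out): nothing ever stops the broom.
    # while True:
    #     cistern_level += BUCKET   # the water keeps coming, the lab floods

    # Corrected spell: the loop ends when the governing condition is met.
    cistern_level = 0.0
    while cistern_level < CISTERN_FULL:
        cistern_level += BUCKET

    print(f"Stopped at {cistern_level:.0f} buckets; the floor stays dry.")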

A third and related lesson: Make sure that you enter an executable STOP code. This is not only the point at which the program has completed its tasks—if that point is ever reached or if it even applies, as it might not in the case of a word processing or spreadsheet program, where there is always more to do—but also the point at which the user is finished with the program and wishes to terminate it, usually indicated by a pulldown command of “Stop” or “Quit.” If Mickey had installed such a command, he could easily have frozen the broom in its tracks.
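What such a command might look like, in a minimal sketch of my own (an interactive toy, not any real control system): the program works indefinitely but watches every instruction for an explicit order to stop.

    # A broom with an executable stop: the loop ends on "stop" or "quit".
    def run_broom():
        trips = 0
        while True:
            command = input("broom> ").strip().lower()
            if command in ("stop", "quit"):
                print(f"Stopping after {trips} trips to the well.")
                break                      # the spell ends here, on demand
            elif command == "carry water":
                trips += 1
                print("One more bucket poured into the cistern.")
            else:
                print("Try 'carry water', 'stop', or 'quit'.")

    if __name__ == "__main__":
        run_broom()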

And a final lesson: Clean up your deleted code and comments. After Mickey chopped the enchanted broom into a thousand pieces, the splinters remained energized and came to life, each as a separate and completely functional broom, and they all tried to complete the task. Code fragments from old compilations and deleted subroutines—along with the programmer’s internal comments that once applied to them but are now probably meaningless—are not necessarily going to come to life and function as zombie programming. But useless junk in the working code creates headaches for later programmers and developers.1 And of course, unlike the master sorcerer who came in, stopped the brooms, and then left the problem there, be sure to document—offline—your code, your issues, and your solutions.
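For what that cleanup looks like in practice, here is a small before-and-after sketch of my own (the function and names are invented for illustration): the commented-out remnants and stale notes add nothing but confusion for the next person who opens the file.

    # Before cleanup: dead code and stale comments left behind.
    def cistern_report(level):
        # old_threshold = 0.75   # leftover from a deleted alarm feature
        # if level > old_threshold:
        #     ring_bell()        # ring_bell() was removed two versions ago
        return f"Cistern at {level:.0%}"

    # After cleanup: only the code that actually runs, with a current comment.
    def cistern_report_clean(level):
        """Return a short status string for the cistern's fill level."""
        return f"Cistern at {level:.0%}"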

However, “The Sorcerer’s Apprentice” is more than just a set of object lessons for future programmers. It captures—as do all tales of magical spells and demon summoning—the essence of programming itself. From my early days with small computers I had the opportunity to fool around with the BASIC language, which is governed by line numbers, and then with Pascal, which is a structured language. Both involve calls, subroutines, and loops. And like any programming language of any brand, they call forth in the programmer’s heart a kind of wonder.

Like the spells of the magicians of old, the programming language and the software platform are a form of remotely controlled power. With them, the competent magician can formulate and cast spells. He or she can summon powerful agents, composed of other spells and programs, that can perform tasks. And as with the casting of spells, the language punishes inattention and carelessness. A missing or misplaced comma, semicolon, or colon, or a misspelled command word, or an omitted, misplaced, or misattributed variable or parameter can all render the program inert or send it off into unexpected and damaging functions. The errors can be—often are—subtle and take much study and many retries to find and correct. The only good thing is that, unlike an inappropriately summoned demon, a command-line error will not snatch the programmer by the hair and drag him or her down to Hell—although too many errors and botched code will have the programmer looking for another job.

For years, the writing of programs—casting of computer spells—was limited to information tasks like word processing, number crunching, and data recording and retrieval. These are functions that exist solely in the realm of zeroes and ones, and they reveal themselves only on display screens and in paper printouts. And unlike setting stupid brooms loose in the laboratory, they usually can’t hurt anyone—although a computer error in your bank balance or in processing a contract document can lead to major monetary damages. Still, no obvious blood.

But from the early days, what we think of as a “computer chip” has always had a secondary purpose as a control processor.2 These chips—usually with hard-wired, solid-state programming called “erasable programmable read-only memory” (EPROM)—now control the engines and brakes in our cars, the functions of our televisions and DVD players, the timing and power systems in our microwave and toaster ovens, the battery charging in our electric shavers, and a slew of other devices in the home.3 But previously programmed processor chips are only the beginning.

Self-actuated robots are already among us—they just don’t look like human beings or smooth-skinned C-3POs, with heads having paired visual receptors and vocal cords, two arms with hands and fingers, two legs with feet, and the ability to talk and walk upright. In many factories, however, stationary robots that perform a limited range of functions like welding, materials handling, and packaging are already common. The military now uses drones that are under human control as to what they will do but under internal computer control as to how they will do it. And some of that technology is now available for personal use: think of toy drones and self-guided vacuum cleaners. Almost every car maker is working on systems that will make the family automobile—or one day a machine that you will not personally own but will call for, like a taxi—self-driving and autonomous.

And by that time the root programming—the casting of the spells that will make the automaton move (how it works)—will be done somewhere before the machine leaves the factory, while the refined programming (what it does) will be provided by touchpad inputs on a smartphone and voice commands spoken in the machine’s presence.

And all of it was prefigured—that is, subtly and slyly inserted into our collective consciousness—by the image of Mickey Mouse putting on a magic hat, waving his hands and sending sparks toward a dormant broom, and making it perform a simple household task.

1. What? You thought you were going to be the only developer to work on this project? Or that it would be your baby for the life of the product? Think again!

2. The first “microprocessors” that became the heart of small, hobbyist computers like the Tandy RadioShack TRS-80, the Apple II, and the various S-100 backplane systems were originally meant to control mechanical functions like automobile fuel injection and brake systems, rather than respond to keyboard inputs and display output text on a cathode ray tube.

3. My father, a mechanical engineer, used to say that the 20th century saw the rise in the number and use of small motors in people’s lives—for example, how the car went from having one large engine and hand cranks for everything else from windows to windshield wipers, and even for starting the engine itself, to having a dozen small motors placed around the engine and passenger compartments. Copy that into the home with small motors for electric fans, furnace blowers, air conditioners, vacuum cleaners, refrigerator pumps, and even electric shavers. In the same way, the late 20th and early 21st centuries have seen the rise in the number and use of small processors and robots in our lives, so that even the coffee maker has its computerized functions.

Sunday, December 1, 2019

On the Basis of Sex

People puppets

I recently—and belatedly—saw the movie about Ruth Bader Ginsburg’s early fight for women’s equality, On the Basis of Sex. Once again, it reminded me that while I share many views with my parents and their generation, I am yet and remain A Classic Liberal.

My mother was trained as a landscape architect, having studied at Penn State in the 1930s. She never practiced professionally—even if we did have pretty neat gardens around our several homes when I was growing up—although she used her skills to work as a draftsman at Bell Labs during World War II. After the war my brother and I were born in rapid succession, and so she stayed at home as our mother. Still, she taught us various bits about measuring, draftsmanship, graphics, sign making, and other useful subjects.

I have always supported women getting an education—not only in the arts and sciences, but in the martial arts as well, for self-defense. When I was first at college in the late 1960s, I heard university administrators had stated that they favored educating women because by doing so they were educating a family. And yes, this is a sentiment that would appear to limit women to the role of caregiver in the home. But some women did—and still do—want to be stay-at-home moms, and they should be as educated, with developed and refined minds, as anyone else. However, most of the young women with whom I attended college at the time were not seeking husbands and a wifely role as homemaker but instead studying for their own careers and planning for their workplace futures. And about half of the students in my karate classes were women as well.

My entire professional career was spent working alongside, and sometimes reporting to, women. In the publishing business, in technical editing at the engineering and construction company, in communications at the utility company, and in documentation and communications at two biotech companies, women were to be found as often as men. Language skills and the capability for understanding and handling numbers and complex concepts are equally distributed between the sexes. There may have been fewer women working as registered professional engineers back at the construction company in the 1970s, but that disparity steadily eroded in my lifetime. When I got to the biotech firms twenty years later, there were as many women scientists and administrators as men.

I believe this gender blindness was actually intended at the nation’s founding. I see the terms “man,” “men,” and “mankind” in this country’s original documents—certainly in the Declaration of Independence—as being inclusive, referring to men and women equally, rather than singling out males for distinction and pride of place. “Mankind” has generally been treated—until recently, when political consciousness kicked in—as equivalent to “humankind” and not promoting one sex at the expense of the other. So “all men are created equal” in the Declaration does not imply a lesser place or lack of distinction among women.

In the Ginsburg movie, the point was made that the U.S. Constitution does not anywhere include the word “woman” or “women.”1 It also does not include the word “man” or “men.” When it addresses an individual human being, rather than a government body, it uses the words “person” and “persons,” which could equally apply to a woman as to a man. So our founding law was sexless and genderless, making no assumptions about the citizens and their roles.

However, I’m still not “woke” enough to align with the current politically correct thinking that sex and gender are entirely mental states. I still believe there are some jobs that certain women, with smaller frames and less muscle strength, simply cannot perform despite any reasonable accommodation. Hauling a firefighting colleague twice her size out of danger would be one such job. Handling a jackhammer as tall as she is and half her weight would be another. A woman may want to do these jobs, just as a man of below-average height, weight, and strength may want the job, but that does not alter the biological facts. These people should not be denied these jobs on the basis of sex but on the basis of strength and innate capability. In the same way, I might want to be appointed to a university professorship in the department of mathematics, but my poor old innumerate brain and an education that stopped at Algebra II simply do not qualify me for the position.

If sex and gender are supposed to be entirely mental, then “trans” people would be exempt from their own genetics and physiology and allowed to adopt whatever gender they choose. While I don’t deny that “gender dysphoria” may exist, and that a person in a man’s or woman’s body might feel strongly that they belong to the opposite sex, I still don’t believe this is a healthy state of mind. In the same way, I don’t believe that unrelieved depression or persistent delusions and hallucinations are healthy states.2 But I am also liberal enough—believing in human individuality and a person’s freedom to be what they want—that I can’t condemn the choice of a mature person, after serious reflection, undergoing the surgeries, hormone treatments, and counseling to change their physical presence and presentation to the world. But I would withhold the same surgeries and treatments from a pre-pubescent child, when children are notoriously fluid in their identities and imaginings, and they are not yet settled in their own lives.

Still, there remains the troubling affair, currently waiting to be sorted out, of segregated athletic competitions that are being invaded and dominated by trans people. This is almost always in terms of trans women competing in and—because they transitioned after puberty, and so have larger, stronger bodies—dominating the sport to the exclusion of women who were born female. This is even more troubling when the new competitors are not required to have had surgery or hormone therapy but simply, emotionally, maybe spiritually, “identify” as women. But I think the solution is relatively obvious—although perhaps not to those who are pushing the issue as a political wedge into traditional society. Instead of letting trans women compete in the women’s category, create a separate sports category for them, with their own meets and records and honors to be won. And then let the judges in that separate category decide how much muscle mass and testosterone should count for an achievement. This would be “separate but equal,” but no more onerous than the previous and accepted separation of men’s and women’s tennis, gymnastics, track, and other sports competitions.

In all of this, I am driven by a core belief: that all humans—“persons” in the language of the U.S. Constitution—are equally valid and deserving of respect. We may differ in physical strength and prowess, intelligence and education, emotional development and stability, understanding and insight, and every other parameter that can be applied to H. sapiens. But we are at a basic level all people. And I believe that this is the bedrock of any civilization.

1. You can bring up the full text at The Constitution of the United States: A Transcription and run a keyword search yourself.

2. But while we’re on the subject of mental states, I can accept that attraction to and lust for a person of one’s own sex is not a disease. The urges toward sex are so widespread in the human brain, and the objects and situations that one person may find attractive so varied, that limiting their scope to one popular form of attraction and attachment—binary coupling among heterosexuals—seems as absurd as saying that the “missionary position” is the only natural and acceptable form of sexual encounter.