Sunday, December 29, 2019

Writing Ourselves into a Box

Time warp

Maybe … contrary to our theories, the universe is not uniform throughout, or not at all scales. Maybe gravity is not a constant at all scales but has different values at different scales.

We know that the force of gravity appears to be nonexistent at the subatomic scale, apparently playing no part in quantum mechanics. We can measure the force of gravity and use gravity equations at the planetary, solar, and interstellar scales, where it governs the attraction between bodies with which we are familiar. But at the galactic scale, the gravity of the matter we can see is apparently not strong enough to produce the effects we observe. Stars in a spiral galaxy rotate as if they were stuck to a platter instead of orbiting freely—which suggests each galaxy contains more matter than we can account for by the stars that shine and the planets, brown dwarfs, and black holes that we presume must accompany them.

Instead of considering variable gravity, we hypothesize that something else must be at work: a substance called “dark matter,” which does not interact with normal matter or electromagnetism at all but does make itself felt through gravity—though only, apparently, at very large scales. The effect on galaxies is so pronounced that we credit this “dark matter” with making up about 85% of the matter in the universe and about 25% of its total energy density. All this for something we cannot see, touch, feel, or detect—only “observe its effects” at very large scales.

Perhaps we are wrong in our assumptions about the gravitational constant being fixed at all scales larger than that of quantum mechanics.

This kind of “wrong” thinking—if it is wrong, and perhaps “dark matter” actually does exist—has happened before. During the 19th century, geologists debated between “catastrophism” (the theory that geological changes in the Earth’s crust were created by sudden and violent events) and “gradualism” (the theory that profound change is the cumulative product of slow but continuous processes). Gradualism would be the slow accumulation of sediments to form rocks like sandstone and shale. Catastrophism would be sudden events like volcanic eruptions, or like the carving of the Snake River Canyon when the earth- or ice-dam holding back Lake Bonneville collapsed about 15,000 years ago. What has since been resolved is that either theory on its own is too constricting, too reductive, and that both kinds of processes played their part in the formation of the Earth’s surface.

This is probably not the case with gravity. A variable gravitational constant vs. the existence of dark matter is probably an either/or rather than a both/and situation. One kind of physics that explains the observations does not necessarily make room for a second kind to explain them. However, it wouldn’t surprise me if each galaxy contained more normal matter than we have habitually counted—matter we can’t see by either the emission or the reflection of starlight: black holes, brown dwarfs, and rogue planets. And still there may be something going on with the effects of gravity at the very small and the extremely large scales that is out of whack with the gravitational constant we measure at the planetary and stellar scales.

Time and distance are variables that we have only been seriously considering over the past hundred years or so—ever since Edwin Hubble exploded the notion that the local stars in the Milky Way galaxy were all that comprised the universe and showed instead that the fuzzy patches (“nebulae,” or clouds) on photographic plates were actually distant galaxies not much different from the Milky Way, and that they were all receding from us. In one conceptual leap, the universe, the cosmos, all of creation, became really, really—add a few exponents here—big.

But the universe did not become correspondingly old. We still had various measurements and computations putting the age of the oldest stars and the universe itself at approximately thirteen billion years. That’s about four generations of stars from the creation of matter itself to the Sun with its disk of gas and dust that contains elements which must have formed in the fusion processes and catastrophic collapse of older stars. By winding back the clock on the expansion, the scientific consensus came up with a universe that started in the explosion of, or expansion from, a single, tiny, incredibly hot, incredibly dense—add more exponents here, with negative signs—point. That singular point contained all the matter we can see in the hundreds of billions of galaxies within our view.

Because that view comprises a distance that the farthest galaxies could not have reached in thirteen billion years, even when receding from a common point at the speed of light, a scientific consensus originating with Alan Guth has formed around an “inflationary” universe. At some time, the story goes, at or near the very earliest stages of the expansion, the universe blew up or inflated far faster than lightspeed. Perhaps from a hot, dense thing the size of a proton to a cloud of unformed matter probably the size of a solar system, the whole shebang expanded in a blink—whereas a photon from the Sun’s surface now takes about five and a half hours to reach Pluto.
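
As a quick back-of-the-envelope check on that five-and-a-half-hour figure, here is a minimal sketch in Python, using rounded values for Pluto’s average distance (about 39.5 astronomical units) and for the speed of light—an illustration of the arithmetic only, not a statement about where Pluto actually is on any given date:

```python
# Light-travel time from the Sun to Pluto, using rounded constants.
AU_M = 1.496e11        # meters in one astronomical unit
PLUTO_AU = 39.5        # Pluto's average distance from the Sun, in AU (assumed)
C_M_PER_S = 2.998e8    # speed of light, meters per second

seconds = PLUTO_AU * AU_M / C_M_PER_S
print(f"{seconds / 3600:.1f} hours")   # prints roughly 5.5 hours
```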

While I don’t exactly challenge the Big Bang and the inflationary universe, I think we are reaching for an easily understandable creation story here. We want to think that, because we know the universe is now expanding, and because our view of physics is that basic principles are the same in every place and at every scale, and that processes continue for all time unless some other observable force or process interferes with them, then the universe must have been expanding from the beginning: one long process, from the hot, dense subatomic pencil point that we can imagine to the vast spray of galaxies that we see—extending even farther in all directions than we can see—today. And with such thinking, we may be writing our imagination and our physics into a box.

The truth may lie somewhere in between. And I don’t think we’ve got our minds around it yet.

Sunday, December 22, 2019

Market Forces

Cooking the Meat

Cooking the Meat,
from the Bayeux Tapestry

I believe the views of the economy, as expressed in the politics of the left and the right, suffer from a fundamental philosophical difference—and it may not be what you think.

Basic economics is descriptive, not prescriptive. It tries to observe and state what people actually do, not what the economist thinks they do or what they should do. In this, economics is a science like biology or physics: it is trying to describe the real world. It is neutral about outcomes and concerned only with accurate observation and reporting.

Socialist, communist, or Marxist economics may start with description—after all, Karl Marx’s Das Kapital began by describing the system then developing in a newly industrialized 19th-century society. But he, along with all the later “-ism” and “-ist” schools, soon launched into theory and prescriptions about how people should act in order to reach a desirable state, a utopia of fairness, equality, and happiness. In this sense, Marxist and socialist economics try to direct and, in some cases, manipulate outcomes, because their intention is aspirational rather than observational.

The reason for this difference is that market forces—the natural rules governing how people in large, anonymous groups exchange products, services, and units of value—are part of the human condition. Whether you like the human condition or wish to change it will guide your choice of economic theory.

What do I mean by “natural rules”? Here is a simple thought experiment.

Put two strangers on a desert island and let one come ashore with candy bars in his pockets, which can immediately be consumed for nutritional needs, while the other comes with gold bars in his pockets, which can have far greater value once the holder is rescued and leaves the island. And right there—barring a quick, premeditated homicide—the two survivors have established a market. How much gold will one give up for a candy bar? How much candy will the other give up for gold? And bearing on that question are sensations of hunger, expectations of rescue and its timing, and even the life expectancy of either or both parties before that putative rescue happens.

Where one person possesses something of value—either a tangible good or a useful skill—and the other person must have it, or may need it, or might have no interest in it but still possesses the means to acquire it, there will exist a market. And from that market will grow in the human mind all sorts of calculations: of cost and benefit, of current and future needs and wants, of available resources, and sometimes of pure, irrational desire. Along with these calculations will come a whole range of future projections. These mental gymnastics and their playing out in real-life action can be traced, plotted, and described. They are not a theory, although theories about motive and calculation can certainly be derived from human actions. Instead, they are facts to be analyzed and formulated into rules or laws—not the legislative laws of human societies but descriptive and projective laws, like those of physics and chemistry.

Adam Smith, in The Wealth of Nations published in 1776, described how each individual in a society and economy works for his own benefit, maximizing his satisfactions, advantages, and security. But through the collective action of all these individuals—that is, the “invisible hand,” as Smith described it—they achieve positive economic and social ends—that is, the national wealth of which he wrote—without the conscious intention of doing so. Smith was defining a phenomenon, a hidden force, that was already at work. People did not need to read his book and follow his precepts in order to achieve that “wealth of nations.” It just happened and he observed and described it.

Karl Marx, on the other hand—writing The Communist Manifesto in 1848 and Das Kapital twenty years later, almost a century after Adam Smith—started with a description of what was becoming the capitalist system in an industrialized setting, noted the instances in which a machine culture failed to lift everybody out of poverty, and thought to come up with something grander. Unfortunately, although he is regarded as a philosopher, Marx was no real visionary, and his “something” was the communal life of the medieval village.

The economics of the village did not depend on the exchange of money and the investment of capital in factories, but instead existed through the simple exchange of goods and services among individuals known to one another. The farmer raises wheat; the miller grinds it for a share of the crop; the baker makes it into bread for another share; and the cobbler and blacksmith get their daily bread in exchange for making shoes according to the needs of everyone else’s children and horses. That communal system—the isolated feudal castle without the authority of the feudal lord—works well enough on a local scale, where everyone knows and trusts everyone else, as in the Israeli kibbutz or those idealistic communes of the American transcendentalist movement. The feudal system even had an element of future planning and provision: if the miller’s children don’t happen to need shoes this year, the cobbler will still get his bread, because everyone knows that children will need shoes soon enough.

Marx’s error, in my opinion, was in ignoring human nature and believing that what had worked on a local level could be expanded to the national—and eventually the global—economic level. His prescription was not simply that local villages and towns should go back to their feudal roots in isolation, but instead that whole nations could exist on the trust and charity—“from each according to his ability, to each according to his needs”—of the local peasant. That cobblers and factory workers in Kiev would make shoes, not for personal benefit in terms of money wages or a share in the enterprise, but so that everyone in the country who wants shoes will have them. And everyone else in the economy will selflessly make their products—wheat, flour, and bread, along with tractors, milling machines, and bakery ovens—and provide their services—carpentry, plumbing, medical assistance, and legal advocacy—free of charge and in the expectation that everyone else will provide selflessly for their needs. This is the antithesis of Smith’s individual maximizing his own satisfactions, advantages, and security. It is the individual sacrificing for the greater good.

Karl Marx believed that once his system had gotten under way, the state, the governing force, would no longer be needed and would wither away. And people would just go on forever, sharing joyously. Marx was no student of human nature but instead a dreamer spinning his own utopia. Wherever his ideas have been tried, they have required the state to grow stronger and force individuals into line, usually with actual punishments and threats of death. The alternative was to try to create, through psychology and breeding, a perfectly selfless man, Homo sovieticus, which is to say the perfect slave.

Adam Smith’s capitalism, on the other hand, required no state intervention to operate. People will try to maximize their satisfactions and advantages naturally, and they will only sacrifice for those they hold near and dear, their families and closest friends and associates. But we—that is, later economists—have discovered since Smith that unbridled economic action does need some agreed-upon rules. For example, how is society to treat and dispose of the wastes of a large industrial factory or mechanized, monocrop farm, undreamed of in Smith’s time? And those situations, along with the multiplication of advantage enabled by developing science and technology, do require a government to monitor them and enforce the rules. We have also learned that contracts between individuals do not always operate smoothly, and some system of courts and judges is needed to arbitrate disputes. But the underlying economic system Smith described does not require a magical change in human nature, nor does it need an overbearing state to keep turning the crank of command-and-control decisions, endlessly directing what to make and how and where to deliver it, in order for the system to function.

If you believe that human beings can best decide where their own happiness, benefit, and advantage lie, for themselves and their families, then you subscribe to the natural economics that market forces will bring about—although with some social adjustments, rules of exchange, and the courts for enforcement. If you believe that human beings are flawed creatures, unable to decide where their own interest and that of the greater society lie, then you subscribe to the aspirational economics that only a strong state, the dictatorship of one group—the proletariat, the elite, the “best people,” or Animal Farm’s pigs—over all the others, and an eventual rewriting of human nature itself, can create.

I believe the difference is that simple.

Sunday, December 15, 2019

Chaos vs. Predictability

Red dice

Religious people would have you believe that, without a world or a universe ordered by an omnipresent, omniscient intelligence, or God, everything in human life would be meaningless, unpredictable chaos. If that is the case, I believe they don’t understand what chaos could truly be.

With or without a guiding intelligence, we live in a relatively predictable world. Yes, bad things sometimes happen in the lives of exemplary people, but this is not chaos. Storms sometimes blow down your house. Lightning strikes many places and people at random. Sinkholes open at your feet all over Florida and in other places built on limestone. But an intelligent human being can guard against some of these things. If you live in an area with hurricane-force winds, you can build your house out of steel and stone instead of lath and plaster. If the sky is threatening thundershowers, you can stay inside and not go out, because lightning almost never strikes from a clear sky. And if you live in sinkhole country, you can choose to move or survey the ground before you build.

A truly chaotic world would have hurricane winds and lightning bolts come out of a calm, clear sky. Gravity on Earth would be variable, so that if you accidentally stepped into a null-gee pocket, you might be thrown off into space by the planet’s rotation. Chemical bonds would change without rhyme or reason, so that a substance you had identified as food one day would become poison the next.

But, with or without an omnipresent God, we do not live in such a world or universe. Most interactions and events are governed by natural laws that can be analyzed and understood. Most hazards can be predicted and either avoided or insured against.

And then there is probability. While we can guard against windstorms and lightning, there is not much we can do about asteroid strikes. Even small bodies near the Earth and crossing our planet’s orbit are difficult to observe and harder to avoid—and when they hit, it’s going to be a hard day for anyone inside the impact radius. But still, we can study the history of previous asteroid strikes, observe the skies in our vicinity, and make a prediction based on probability about whether it will be safe to wake up and go outside tomorrow.

And if it’s not safe, if there is no prediction to be made based on identifiable probability, does that mean the world is chaos and life is pointless? Or that God, if He exists, does not love us anymore? Or that, if He does exist and still loves us, He must somehow be testing our faith and our resilience, in this best of all possible worlds?

Some things are unknowable. From a human perspective, which is bounded by experience, reason, and imagination, there are dangers in the universe that we can’t know about until they happen, and so we can’t evaluate them ahead of time. A comet falling toward the sun from the Oort Cloud or the Kuiper Belt for the first time and just happening to cross Earth’s orbit at the same time we intersect its path—that would be such an unknown. The universe is also filled with unresolved questions that extend beyond our experience of life on Earth and observations from within our solar system: the presence, effects, and unforeseen future conditions of gravitational waves, dark matter, and dark energy come to mind. For example, we know the universe is expanding, but we’ve only been watching this process for about a hundred years—a snapshot in geologic, let alone galactic, time. Might the expansion eventually speed up, slow down, or reverse itself? Might that change create effects that could be experienced and measured here on the planet or within our solar system? Our physics suggests not—but the reasoning and mathematics behind our physics are relatively new, too.

The long-term futures of the world, the solar system, the galaxy, and the universe itself are all vaguely predictable but existentially unknowable. Is this chaos? Does this condition demand the existence of a God who knows, understands, and might deign to tell us one day, perhaps through a prophet or a revelation? In my book, probably not.

The world I was born into, or that my parents and teachers prepared me for, is predictable in the short term: what errands I will run this morning; what I might plan to have for dinner tonight; where I’m going to be on Thursday, or next week, next month, or even next year. But in the long term all bets are off. I could die in a motorcycle crash or be struck by lightning or get hit by a bus anytime between now and then—although I would hope to be able to watch out for and guard against any of these eventualities. But still, that uncertainty is all right. Because I was born with the ability to reason in one hand and the capacity for hope in the other, but otherwise I was and am as naked as a clam without its shell. As are all of us.

Nothing is promised. Everything is ventured. Sometimes something is gained. And as Robert A. Heinlein is supposed to have said: “Certainly the game is rigged. Don’t let that stop you; if you don’t bet you can’t win.”

Sunday, December 8, 2019

The Sorcerer’s Apprentice

“The Sorcerer’s Apprentice”

Just about eighty years ago now, in 1940, Walt Disney produced the animated film Fantasia, which included as a short segment, set to the haunting orchestral music of Paul Dukas, “The Sorcerer’s Apprentice.” The story is a cautionary tale, ostensibly about students trying to run before they can walk and the virtues of doing your own housework. But I wonder if any of those involved in the storyboarding and animation thought it would become advice for future computer programmers—a job that had not yet been invented.

The first lesson for a programmer is: Debug your code. Don’t release a program on the world—or to your customer base—before it has been debugged, tested, debugged again, beta-tested with select and knowledgeable users, and then debugged a third time. I know from working not just with new programs but with new applications on an existing and well-understood programming platform that an application must go through successive rounds of testing and debugging before its release. And that testing must include a range of unusual situations—some of them clearly not meant to be encountered during regular use—that might break the program or cause the user to obtain faulty information if not actual harm. Mickey Mouse putting on his master’s magic hat, waving his arms at a dormant broom, and getting a response hardly counts in this testing process.
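
Here is what that kind of testing looks like in a minimal Python sketch. The function, its limits, and the edge cases are all made up for illustration—they stand in for whatever a real application would need to survive:

```python
# A hypothetical helper and the kinds of tests a release should cover.

def buckets_needed(cistern_capacity, bucket_size):
    """Return how many bucket trips it takes to fill the cistern."""
    if bucket_size <= 0:
        raise ValueError("bucket_size must be positive")
    if cistern_capacity < 0:
        raise ValueError("cistern_capacity cannot be negative")
    # Round up: a partly filled last bucket still costs a full trip.
    return -(-cistern_capacity // bucket_size)

# Regular use.
assert buckets_needed(100, 10) == 10
# Unusual situations, clearly not meant to happen in normal operation.
assert buckets_needed(0, 10) == 0        # nothing to do
assert buckets_needed(101, 10) == 11     # partial last bucket
try:
    buckets_needed(100, 0)               # nonsense input should fail loudly
except ValueError:
    pass

print("All tests passed.")
```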

A second lesson: Make sure that your DO and DO-WHILE loops are properly defined. These loops, which repeat an action a set number of times or for as long as a certain condition persists, are powerful programming tools. But failing to set them up properly—so that the repetition stops at the right count or when the governing condition is met (in Mickey’s case, when the parameter “cistern level” reaches “full”)—can lead to disaster. The fact that Mickey’s enchanted broom went on filling the cistern and overflowing into the laboratory indicates a faulty loop condition.
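
In a minimal Python sketch—with a hypothetical cistern, bucket, and capacity standing in for Mickey’s laboratory—the difference is whether the loop’s governing condition is actually checked on every pass:

```python
CISTERN_FULL = 100          # assumed capacity, in buckets

def carry_water(cistern_level):
    """Add one bucket of water to the cistern."""
    return cistern_level + 1

cistern_level = 0

# Properly defined loop: the condition is tested before every repetition,
# so the hauling stops the moment the cistern reaches "full."
while cistern_level < CISTERN_FULL:
    cistern_level = carry_water(cistern_level)

print("Cistern level:", cistern_level)   # 100, and the broom stops

# Mickey's faulty version is the equivalent of `while True:` with no test
# of cistern_level at all -- the broom keeps hauling water forever.
```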

A third and related lesson: Make sure that you enter an executable STOP code. This is not only the point at which the program has completed its tasks—if that point is ever reached or if it even applies, as it might not in the case of a word processing or spreadsheet program, where there is always more to do—but also the point at which the user is finished with the program and wishes to terminate it, usually indicated by a pulldown command of “Stop” or “Quit.” If Mickey had installed such a command, he could easily have frozen the broom in its tracks.
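
A toy sketch of such an executable stop, again in Python; the command names (“fill,” “stop,” “quit”) are inventions for illustration, not drawn from any real program:

```python
def command_loop():
    """Run a simple command prompt until the user issues a stop command."""
    cistern_level = 0
    while True:
        command = input("broom> ").strip().lower()
        if command == "fill":
            cistern_level += 1
            print("cistern level:", cistern_level)
        elif command in ("stop", "quit"):
            # The executable STOP: the user can always freeze the broom.
            print("Stopping at cistern level", cistern_level)
            break
        else:
            print("unknown command:", command)

if __name__ == "__main__":
    command_loop()
```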

And a final lesson: Clean up your deleted code and comments. After Mickey chopped the enchanted broom into a thousand pieces, the splinters remained energized and came to life, each as a separate and completely functional broom, and they all tried to complete the task. Code fragments from old compilations and deleted subroutines—along with the programmer’s internal comments that once applied to them but are now probably meaningless—are not necessarily going to come to life and function as zombie programming. But useless junk in the working code creates headaches for later programmers and developers.1 And of course, unlike the master sorcerer who came in, stopped the brooms, and then left the problem there, be sure to document—offline—your code, your issues, and your solutions.
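
A small hypothetical before-and-after shows the kind of clutter meant here—none of this is from a real code base:

```python
# Leftovers from earlier rounds of changes:
# def carry_water_v1(level):     # superseded two releases ago
#     return level + 2           # NOTE: doubled bucket size for speed
# TODO: remove v1 once the cistern tests pass   <- comment now meaningless

def carry_water(level):
    """Current, working version: add one bucket of water to the cistern."""
    return level + 1

print(carry_water(0))   # -> 1

# The commented-out lines never run, but they leave the next developer
# guessing which version is authoritative. Delete them; keep the history
# in version control and the explanation in offline documentation.
```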

However, “The Sorcerer’s Apprentice” is more than just a set of object lessons for future programmers. It captures—as do all tales of magical spells and demon summoning—the essence of programming itself. From my early days with small computers I had the opportunity to fool around with the BASIC language, which is governed by line numbers, and then with Pascal, which is a structured language. Both involve calls, subroutines, and loops. And like any programming language of any brand, they call forth in the programmer’s heart a kind of wonder.

Like the spellbooks of magicians of old, the programming language and the software platform are a form of remotely controlled power. With them, the competent magician can formulate and cast spells. He or she can summon powerful agents, composed of other spells and programs, that can perform tasks. And as with the casting of spells, the language punishes inattention and carelessness. A missing or misplaced comma, semicolon, or colon, a misspelled command word, or an omitted, misplaced, or misattributed variable or parameter can render the program inert or send it off into unexpected and damaging functions. The errors can be—often are—subtle and take much study and many retries to find and correct. The only good thing is that, unlike an inappropriately summoned demon, a coding error will not snatch the programmer by the hair and drag him or her down to Hell—although too many errors and too much botched code will have the programmer looking for another job.

For years, the writing of programs—casting of computer spells—was limited to information tasks like word processing, number crunching, and data recording and retrieval. These are functions that exist solely in the realm of zeroes and ones, and they reveal themselves only on display screens and in paper printouts. And unlike setting stupid brooms loose in the laboratory, they usually can’t hurt anyone—although a computer error in your bank balance or in processing a contract document can lead to major monetary damages. Still, no obvious blood.

But from the early days, what we think of as a “computer chip” has always had a secondary purpose as a control processor.2 These chips—usually with hard-wired, solid-state programming called “erasable programmable read-only memory” (EPROM)—now control the engines and brakes in our cars, the functions of our televisions and DVD players, the timing and power systems in our microwave and toaster ovens, the battery charging in our electric shavers, and a slew of other devices in the home.3 But previously programmed processor chips are only the beginning.

Self-actuated robots are already among us—they just don’t look like human beings or smooth-skinned C-3POs, with heads having paired visual receptors and vocal cords, two arms with hands and fingers, two legs with feet, and the ability to talk and walk upright. In many factories, however, stationary robots that perform a limited range of functions like welding, materials handling, and packaging are already common. The military now uses drones that are under human control as to what they will do but under internal computer control as to how they will do it. And some of that technology is now available for personal use: think of toy drones and self-guided vacuum cleaners. Almost every car maker is working on systems that will make the family automobile—or one day a machine that you will not personally own but will call for, like a taxi—self-driving and autonomous.

And by that time the root programming—the casting of the spells that will make the automaton move (how it works)—will be done somewhere before the machine leaves the factory, while the refined programming (what it does) will be provided by touchpad inputs on a smartphone and voice commands spoken in the machine’s presence.

And all of it was prefigured—that is, subtly and slyly inserted into our collective consciousness—by the image of Mickey Mouse putting on a magic hat, waving his hands and sending sparks toward a dormant broom, and making it perform a simple household task.

1. What? You thought you were going to be the only developer to work on this project? Or that it would be your baby for the life of the product? Think again!

2. The original “microprocessors” that became the heart of small, hobbyist computers like the Tandy RadioShack TRS-80, the Apple II, and the various S-100 backplane systems were originally meant to control mechanical functions like automobile fuel injection and brake systems, rather than respond to keyboard inputs and display output text on a cathode ray tube.

3. My father, a mechanical engineer, used to say that the 20th century saw the rise in the number and use of small motors in people’s lives—for example, how the car went from having one large engine and hand cranks for everything else from windows to windshield wipers, and even for starting the engine itself, to having a dozen small motors placed around the engine and passenger compartments. Copy that into the home with small motors for electric fans, furnace blowers, air conditioners, vacuum cleaners, refrigerator pumps, and even electric shavers. In the same way, the late 20th and early 21st centuries have seen the rise in the number and use of small processors and robots in our lives, so that even the coffee maker has its computerized functions.

Sunday, December 1, 2019

On the Basis of Sex

People puppets

I recently—and belatedly—saw the movie about Ruth Bader Ginsburg’s early fight for women’s equality, On the Basis of Sex. Once again, it reminded me that while I share many views with my parents and their generation, I am yet and remain A Classic Liberal.

My mother was trained as a landscape architect, having studied at Penn State in the 1930s. She never practiced professionally—even if we did have pretty neat gardens around our several homes when I was growing up—although she used her skills to work as a draftsman at Bell Labs during World War II. After the war my brother and I were born in rapid succession, and so she stayed at home as our mother. Still, she taught us various bits about measuring, draftsmanship, graphics, sign making, and other useful subjects.

I have always supported women getting an education—not only in the arts and sciences, but in the martial arts as well, for self-defense. When I was first at college in the late 1960s, I heard university administrators had stated that they favored educating women because by doing so they were educating a family. And yes, this is a sentiment that would appear to limit women to the role of caregiver in the home. But some women did—and still do—want to be stay-at-home moms, and they should be as educated, with developed and refined minds, as anyone else. However, most of the young women with whom I attended college at the time were not seeking husbands and a wifely role as homemaker but instead studying for their own careers and planning for their workplace futures. And about half of the students in my karate classes were women as well.

My entire professional career was spent working alongside, and sometimes reporting to, women. In the publishing business, in technical editing at the engineering and construction company, in communications at the utility company, and in documentation and communications at two biotech companies, women were to be found as often as men. Language skills and the capability for understanding and handling numbers and complex concepts are equally distributed between the sexes. There may have been fewer women working as registered professional engineers back at the construction company in the 1970s, but that is a gap that steadily closed over my lifetime. When I got to the biotech firms twenty years later, there were as many women scientists and administrators as men.

I believe this gender blindness was actually intended at the nation’s founding. I see the terms “man,” “men,” and “mankind” in this country’s original documents—certainly in the Declaration of Independence—as being inclusive, referring to men and women equally, rather than singling out males for distinction and pride of place. “Mankind” has generally been treated—until recently, when political consciousness kicked in—as equivalent to “humankind” and not promoting one sex at the expense of the other. So “all men are created equal” in the Declaration does not imply a lesser place or lack of distinction among women.

In the Ginsburg movie, the point was made that the U.S. Constitution does not anywhere include the word “woman” or “women.”1 It also does not include the word “man” or “men.” When it addresses an individual human being, rather than a government body, it uses the words “person” and “persons,” which could equally apply to a woman as to a man. So our founding law was sexless and genderless, making no assumptions about the citizens and their roles.

However, I’m still not “woke” enough to align with the current politically correct thinking that sex and gender are entirely mental states. I still believe there are some jobs that certain women, with smaller frames and less muscle strength, simply cannot perform despite any reasonable accommodation. Hauling a firefighting colleague twice her size out of danger would be one such job. Handling a jack hammer as tall as she is and half her weight would be another. A woman may want to do these jobs, just as a man of below average height, weight, and strength may want the job, but that does not alter the biological facts. These people should not be denied these jobs on the basis of sex but on the basis of strength and innate capability. In the same way, I might want to be appointed to a university professorship in the department of mathematics, but my poor old innumerate brain and an education that stopped at Algebra II simply do not qualify me for the position.

If sex and gender are supposed to be entirely mental, then “trans” people would be exempt from their own genetics and physiology and allowed to adopt whatever gender they choose. While I don’t deny that “gender dysphoria” may exist, and that a person in a man’s or woman’s body might feel strongly that they belong to the opposite sex, I still don’t believe this is a healthy state of mind. In the same way, I don’t believe that unrelieved depression or persistent delusions and hallucinations are healthy states.2 But I am also liberal enough—believing in human individuality and a person’s freedom to be what they want—that I can’t condemn the choice of a mature person, after serious reflection, undergoing the surgeries, hormone treatments, and counseling to change their physical presence and presentation to the world. But I would withhold the same surgeries and treatments from a pre-pubescent child, when children are notoriously fluid in their identities and imaginings, and they are not yet settled in their own lives.

Still, there remains the troubling affair, currently waiting to be sorted out, of segregated athletic competitions that are being invaded and dominated by trans people. This is almost always in terms of trans women competing in and—because they transitioned after puberty, and so have larger, stronger bodies—dominating the sport to the exclusion of women who were born female. This is even more troubling when the new competitors are not required to have had surgery or hormone therapy but simply, emotionally, maybe spiritually, “identify” as women. But I think the solution is relatively obvious—although perhaps not to those who are pushing the issue as a political wedge into traditional society. Instead of letting trans women compete in the women’s category, create a separate sports category for them, with their own meets and records and honors to be won. And then let the judges in that separate category decide how much muscle mass and testosterone should count for an achievement. This would be “separate but equal,” but no more onerous than the previous and accepted separation of men’s and women’s tennis, gymnastics, track, and other sports competitions.

In all of this, I am driven by a core belief: that all humans—“persons” in the language of the U.S. Constitution—are equally valid and deserving of respect. We may differ in physical strength and prowess, intelligence and education, emotional development and stability, understanding and insight, and every other parameter that can be applied to H. sapiens. But we are at a basic level all people. And I believe that this is the bedrock of any civilization.

1. You can bring up the full text at The Constitution of the United States: A Transcription and run a keyword search yourself.

2. But while we’re on the subject of mental states, I can accept that attraction to and lust for a person of one’s own sex is not a disease. The urges toward sex are so widespread in the human brain, and the objects and situations that one person may find attractive so varied, that limiting their scope to one popular form of attraction and attachment—binary coupling among heterosexuals—seems as absurd as saying that the “missionary position” is the only natural and acceptable form of sexual encounter.

Sunday, November 24, 2019

Apocalypse Chic

Apocalypse meteor storm

Science fiction has always included an element of human catastrophe. H. G. Wells probably started it with The War of the Worlds, about the devastation of an alien invasion, and The Time Machine, in which he looked beyond the horizon of current civilization to a brutal far-future world of Eloi and Morlocks. What-if—in terms of “What if things went terribly wrong?”—has always been part of the genre.1

Mainstream fiction picked up the catastrophic and world-changing mindset during and after World War II, if not before. Nevil Shute’s Ordeal, about a family coping with the aerial bombing of Southampton, which was actually written before the war, and his On the Beach, about an atomic cloud that envelops the world after nuclear war and snuffs out the last of civilization in Australia, are the earliest works of apocalyptic fiction of which I was aware as a young reader. And then there was George Stewart’s Earth Abides, in which the human population is almost wiped out by a deadly disease—a thread that was later amplified by Stephen King in The Stand.

Certainly, the 20th century was ripe for visions of apocalypse. If World War I was a meatgrinder on the battlefield, then World War II was a horror on the home front as well, especially in view of saturation bombing of cities, mass deportations of enemy aliens and refugees, the Holocaust in Europe, and the two atomic bombs dropped on Japan. And all of this was followed by forty-four years of Cold War between the U.S. and the U.S.S.R. and the constant threat of a nuclear holocaust leading to the end of civilization. That threat was real enough, but there was also the historic, cultural threat of millennial devastation, fueled by some religious prophecies, leading to fears of Doomsday at the end of the first millennium and then the serial collapses projected for the year 2000: global population explosion, global cooling, global warming, and finally the Y2K bug, whereby dates recorded in COBOL and other older programs with only two digits for the year were supposed to bring down our computerized civilization.

As I’ve noted before, it’s fun to imagine the end of civilization. What would you do if you no longer had to get up and go to your job every day? Or no longer had to pay rent? Or do the grocery shopping and the laundry? Your hum-drum life would suddenly become exciting and unpredictable. Never mind that scrounging a living in a devastated world à la Cormac McCarthy’s The Road—where half of the picked-over leavings was irradiated and the other half simply poisoned—would be ten times harder than surviving as a current homeless person living off scraps from a dumpster that was refilled daily with the wastes of a thriving economy. Never mind that the population would quickly divide itself into those who still had some dignity and morals and were trying to cling to the vestiges of civilized life, and those who could brutalize and kill them without a thought just for the crust of stale bread in their pockets. Those details aside, it would be fun!

Today, almost twenty years after that fateful millennium year, we still entertain ourselves with visions of apocalypse and the derring-do that we would be forced to emulate in the midst of collapse. In regular waves, our novels and our movies—based in science fiction or not—saturate us with alien invasions, zombie uprisings, nuclear winter, and civilizational meltdown. About the only thing that has changed from the days of Nevil Shute and George Stewart is the emergence of a superhero literature and movie genre, escaping from the comic books—excuse me, “graphic novels”—of the years between the world wars. And still, every superhero sooner or later faces the threat of global catastrophe and civilizational collapse.

It’s fun to think of these scenarios because otherwise, and for the majority of us—at least for most of us with the leisure time to read and go to the movies—life is pretty tame. We have an abundance of goods and services at our disposal. We actually have income that we can call “disposable.” We have handheld devices that link us to every bit of information, every other person, every story and game that our active imaginations can crave and our curiosity could want to know and experience. In the classic line, we don’t have to wonder where our next meal is coming from but what we really feel like eating and where we want to go for lunch. Our lives are so damn good that we just cry out for something, anything, to change—especially drastically and for the worse.

But for some of us—maybe soon for most of us—this cultural background throb of doom and gloom is creating a pang of anxiety. Certainly, in the San Francisco Bay Area, we have been living with predictions of “the Big One”—a 7.9 or 8.0 on the Hayward or San Andreas faults—for more than fifty years. And all that while, the seismologists have been giving us strong odds of its occurrence “sometime in the next thirty years.” We all expect that it will simply be the worst day of our lives, with everything changing, and being no fun at all.

In the country as a whole, we hear constant predictions of economic collapse, even though the markets are full, unemployment is at all-time lows, and stocks are at record highs. After the serial business downturns that have been part of our national economy since the Panic of 1837, we had the Crash of 1929 and the Great Depression in the 1930s. And then we reprised that disaster with a frisson of excitement in the Great Recession of 2007. And every year since, some renowned market analyst has predicted a stock market collapse of 50% or more and, among the wise, a flight to safety in terms of gold, real estate, bitcoins, or some other form of value.

In the widening divergence and hardening of differences between the two major parties comprising the left and the right, many people who are not at all crazy or conspiracy minded are seeing a “culture war” and a “cold civil war.” Some of us—myself included with one of the themes in my recent two-part novel Coming of Age—are even anticipating a heating up of that civil war into actual shooting status. Whether the conflict will be an orderly secession of certain states followed by the federal government’s attempt to recover them, as in the first American Civil War in 1861, or an attempted revolution followed by a subsequent revolt and civil war, as in Imperial Russia in 1917, or a running battle of town against town and neighbor against neighbor, as in the Spanish Civil War in 1936—all of that still remains to be seen. Many scenarios are possible. But if people of a political bent cannot learn to moderate their dreams, desires, and demands, if people on social media and in the punditry cannot agree that some viewpoint other than their own has merit and deserves respect, the threat of civil war still exists.

The apocalyptic vision is out there, floating like a mist in the public mind. Whether it brings you a sense of excitement or dread depends on your imagination and your point of view. But either way, it is a real thing and, sooner or later, we will have to deal with it.

1. I have covered this ground before, in Fantasies of the Apocalypse from August 9, 2015. Consider this “take two.”

Sunday, November 17, 2019

Good Effort

French marketplace

To start with, I am a convinced free-market capitalist, so this meditation is going to sound biased to some ears. I believe that people are individuals with free agency,1 and so they are the best judges of how to spend their hard-earned money: on which goods, what quality of goods, what level of service, and what place of purchase. They are also the best judges of which jobs they want to or can do, how much effort they will put in, and what place of employment will best suit them. I believe that people make an honest choice between Starbucks and Peet’s Coffee, and that this is their own business.

I also believe that shareholder capitalism, as it’s currently practiced, is the most democratic method of funding an economy. People put their savings—their nest eggs, their windfalls, and their 401ks and pension plans—into stock shares and bonds, or manage them through a financial services advisor, and this money fuels economic development. People invest their savings where they feel comfortable, have the most trust, or share a vision. And the essence, here as in the marketplace, is that a million individuals are making a million decisions that seem right to them—as individuals.

With a bachelor’s degree in English literature, my first job out of college—obtained through the good graces of one of my professors—was as a junior editor at the Pennsylvania State University Press. It was there I learned my first real trade and became a successful editor and writer. Unlike most university presses, the director and senior editor tried to publish valuable works that would sell in the marketplace of ideas—rather than simply functioning as a publishing service for the university’s own scholars. That is, they expected most of their list of books to earn a return in sales. And I was treated like any editor in traditional publishing: I had to keep up with a steady workflow; maintain high standards of manuscript preparation, galley proofing, and overall timeliness; and earn my paycheck each month. But at the end of that year, when the Commonwealth of Pennsylvania experienced a $400 million budget shortfall during the recession of 1970, and it was a choice between plowing the roads and paying the salaries of junior editors and adjunct professors at the state university, I was let go. Good effort on my part and on the part of the university press management was not enough. For all its market savvy, the Penn State Press lived or died by allocations from the state budget.

After that experience, I went to work in the private sector. First, I was a trade-book editor at Howell-North Books, correcting manuscripts of railroad history and Californiana for a market that was interested enough in railroads and California history to pay for our books. Next, I went to be a technical editor at Kaiser Engineers, producing engineering proposals and reports for million-dollar projects on drop-dead schedules. Quality, speed, and responsiveness to customer expectations and requests not only earned my paycheck at these places but offered advancement in the organization. The budget was under our own control, not the number-crunchers back in the state capital.

My late wife graduated with a degree in history but, in the economic environment of 1960s San Francisco, the best job she could find was in the typing pool at Western Greyhound’s downtown offices. All her acquired knowledge and historical insights didn’t count so much as her typing speed. She quickly tired of being just a set of fingers, obtained her Master’s in Library Science, and went to work for the prestigious Bancroft Library on the University of California Berkeley campus. She stayed there until retiring almost three decades later as Head of Public Services. Every year, the Bancroft director had to fight for budget allocations against the rest of the library system and the university as a whole. Every year, my wife had to figure out how to maintain the collection and provide access to scholars using only the staff and budget she was given. Good effort was important but not sufficient for success.

I believe that people are pretty much the same, whether they work in private business or the public sector. Everyone—almost everyone, that is—gets up in the morning and wants to do a good job. They want to be productive, get smiles and thanks from their customers, however they define them, feel they’ve done something worthwhile at the end of the day, and earn their paycheck at the end of the month.

For ten years in the 1980s, I worked in Corporate Communications for Pacific Gas & Electric Company. Yes, that PG&E, which has recently been shutting off power to wide swaths of Northern California because their aged equipment and ill-maintained rights of way have started horrendous wildfires. People affected by these “public safety power shutoffs” blame the company for misallocating funds, skimping on maintenance and using the money instead for management bonuses, and generally goofing off at all levels. But while the company may have made mistakes, it is not run by fools and wastrels. When I worked there, PG&E people prided themselves on providing reliable service, maintaining a tight system, and keeping the lights on and the gas flowing.

What most people don’t realize is that PG&E—and any other regulated utility—is not exactly a private-sector business. When I first got there, I once used the word “profit.” Oh, no, no! I was told, by no one less than the chairman himself: “We earn an authorized return on investment.” That is, the shareholders put up their money—the result of selling PG&E stock in the equities market—to build new plant and facilities. In return, the California Public Utilities Commission (CPUC) grants them money taken out of rates to pay back that investment plus a designated percentage return—which is usually a bit higher than one could make on bank savings or would pay on a bank loan. To earn this return, the plant has to be “used and useful”; so the company cannot just build redundant and overlapping facilities and make money on them.

But aside from new buildings, power plants and gas pumping stations, powerlines and gas transmission lines, every other aspect of the gas and electric business was and is a “passthrough” in the rates the company charges. PG&E buys fuel and natural gas, wire and pipe, line trucks and equipment, tools and office supplies, and makes not a penny on the sale of electricity and gas over and above the cost of these expenses. PG&E employed 23,000 people at the time I worked there, but it makes not a penny on their efforts above their regular salaries, benefits, and stipulated bonus programs. A customer service rep’s smile earns neither her nor the company a penny more in rates.

Instead, the company goes to the CPUC every three years with the “rate case.” This is a showing before an administrative law judge (ALJ), who is also employed by the CPUC, as to what the company needs in customer payments in order to provide electricity and gas to customers in its service territory: so much pipe and wire, so many trucks, so many people, so much for system maintenance, so much for clearing trees and brush in the powerline rights of way. And the CPUC staff faces the company lawyers as opposing counsel to challenge every article in the rate case in order to hold rates to what they believe is “fair and reasonable.” In the end, the ALJ and the CPUC decide what PG&E can spend in each category and recover in rates. And the CPUC keeps close tabs on PG&E’s books, so that if the company were stinting on maintenance and tree cutting but then paying out extra into management bonuses, the hammer would come down pretty quickly.

PG&E employees—at least when I worked there—wanted to give good service, keep the lights on, and make their service territory a good place to live. But they are not stupid. They know that, unlike a Starbucks or Peet’s Coffee—where good products in a wide selection, and good service with a lot of smiles, can keep customers coming back, because they have a choice—the average PG&E customer has no real choice of provider, will pay what the state commission decrees in rates, and will get the level of service that wiser heads in San Francisco will allow.

Everyone, most everyone, wants to do a good job, but they are also aware of the internal and external incentives. They know how much control they have over outcomes. And when choice of products and services is nonexistent and the outcome is predetermined by number-crunchers far off and far up the line, then selection and service suffer. Putting that control over valuable public goods and services into fewer hands and at even greater distance—whether through stricter and more complex public oversight and regulation, public takeover of service providers, or outright government control of production—is not the key to public happiness. Or so I believe.

1. See A Classic Liberal from December 2, 2018.

Sunday, November 10, 2019

Allergic to Chaos

Mandelbrot fractal

The other night I attended a concert by the San Francisco Chamber Orchestra.1 While the music was excellent, I had a “disconcerting” moment in the prelude, during the time that the audience was still filing into the auditorium and finding their seats. Rather than a curtain coming up on the whole ensemble of 36 players seated and ready to start, the stage began with empty chairs and music stands. Then, one by one, at intervals of about a minute, each player entered, took his or her seat, and began playing … something.

What each player was doing seemed like random practice: trying out a series of chords, working out a difficult passage in the evening’s music, or just tuning up the instrument. The point is, the playing was totally uncoordinated, bits and pieces rising and falling without any linkage. At one point, I turned to my companion in the audience and quipped, “I suppose they do this so it will sound better when they all start playing together.” It wasn’t quite cacophony,2 because each fragment was a nice bit of music in itself, when I could isolate it, and the overall effect lacked the crash of garbage cans, swell of police sirens, and screech of brakes. But it wasn’t exactly pleasant.

In fact, after a few minutes, I began to feel a certain discomfort, then anxiety, depression, and other negative symptoms. I actually tried to close my ears with my fingers at one point. And this reaction reminded me of something I knew long ago: that I am allergic to chaos.

This isn’t just a philosophical or political aversion. I’m not making a statement here. I just can’t stand a prolonged state of disorder, without pattern, without sense or direction. And I don’t suppose many people can—or not without suffering negative symptoms such as I did.

You would think that I then must hate nature, which is itself fairly chaotic. But within the movements and displays of the natural world, there is generally a pattern that can be discerned if one pays attention over time. The wind in the trees, the falling of leaves or blowing of snow, the ripples in sand dunes … all are patterned. Each trunk and limb in the forest can bend only so far, and then it must bend back, so that the swaying of the trees has a regular span and rhythm. Each wave on the beach is followed by another in a regular succession. Birds migrate with the seasons. Bees swarm and then find their hive. Night follows day follows night. The Moon progresses in its phases, and shooting stars come in orderly streaks from a focal point in the sky. Even the eruptions of volcanos and their lava flows tend to follow a pattern over time.

The trouble is that, given a patternless chaos, my brain automatically tries to make order and sense of it. I try to find a pattern, at some level, large or small, like the inner workings and infinite regression of a Mandelbrot fractal. I do this even when I know there is no pattern and can be no pattern. That night at the concert I knew and understood that all of this musical nonsense and not-quite-noise intentionally held no pattern. And yet my brain tried to make music out of it.

This may explain why I am made nervous in crowds—aside from the obvious possibility of a mob breaking out and carrying me off screaming—or in packed spaces like restaurants and, ahem, theaters before a show starts. There can be found patternless voices, snatches of conversation, and random bumps and thuds. There chaos hangs in the air, mutters and teases at my ear, and sends my brain into fits trying to isolate snippets and make sense of the loose and vacant chatter.

This may also explain why I am made nervous by the laser light shows that generally accompany rock concerts as well as by the jumble of flashes and bangs that generally form the “blowoff” ending to a fireworks display. One, two, or three rockets exploding together and mixing their starbursts and showers, or the calmly rotating reflections of a disco ball … these are agreeably patterned. The sporadic lightning and booming of twenty rockets going off at once, or the artfully programmed confusion of light beams swirling, swooping, and dancing on smoke … that’s nervous-making.

While I am a dedicated wargamer and love the simulated strategic problems of a miniatures battle or board game, I would hate being in a real war. Even when an opposing gamer makes a surprise move—and it’s interesting how, at the height of battle, the reactions of other players are generally not all that surprising—the situation is still somewhat controlled. We have rules in each gaming system about how far rifles and artillery can fire, how far a unit of troops or vehicles may advance in any one turn, and what sequence of play each side must follow. The progression is orderly if not exactly predictable. But in the real thing—and I know this from having played paintball, which is a lifelike tactical engagement, with all its crawling around in the dirt trying to find cover, except that gelatin capsules filled with water-soluble paint replace actual bullets—fire comes in from all sides, intermittently, without any sequence or pattern. You lift your head to look one way and—splat! A sniper who’s been watching your position has just torn open your skull. Surprise, you’re dead! My brain hates that! There is no way to predict disaster and guard against it.

But all of this is not to say that I hate uncertainty. That is a different thing. I am quite comfortable with unanswered questions, like the nature of God, the origins of life, the conspiracy behind the Kennedy assassination, and other situations that may or may not be ultimately knowable but right now are open to question and speculation. In fact, I am more comfortable with a complex uncertainty (e.g., how do those mutant geniuses manage to solve Rubik’s Cube with just a few twists?) than I ever would be with a simple but possibly wrong or inadequate answer (e.g., God said it, the Bible records it, I believe it, that settles it). I can tease out and isolate the uncertainties and missing pieces of an argument or line of reasoning. I don’t have to squash or answer them. My brain is satisfied with just knowing where the potholes are.

But in a flood of random sounds or the flash of random lights—think of the display of little twinkling bulbs that 1950s television shows used in a display panel to suggest a computer at work—I am bothered because the signal becomes indistinguishable from the noise. You cannot isolate and examine uncertainty when an overall pattern does not even give you a framework to begin analyzing.

So, while the chamber orchestra may have found an interesting and dynamic way of bringing the concert experience into focus, they also exposed a certain weakness of mine. Thank goodness, this group plays mostly classical works that celebrate order, transition, and resolution, rather than some of the more modern composers—here I’m looking at you, Shostakovich!3—whose works are sometimes indistinguishable from disorganized noise.

1. The program started with the Faust Overture by Emilie Mayer, a 19th-century German composer of whom I had not previously been aware. That was followed by the more familiar Fifth Symphony of Beethoven. The treat of the evening was hearing Music Director Ben Simon and various sections of the orchestra “disassemble” the Fifth to show how the composer used key changes, variations, and repetition to build this musical masterpiece.

2. If you break this word down linguistically, kakos in Greek means “bad,” and phōnē means “sound” or “voice.” The Latin-root equivalent would be “dissonance”: bad sound. Given how kak and its variations are usually rendered in many languages, it could also be “shit sound.”

3. Although I am partial to his Symphony No. 10, particularly the rousing and martial Second Movement.

Sunday, November 3, 2019

The Magic of Steel

Samurai swordmaking

About a dozen years ago, I tried to buy a steel rod that I could use in my karate training as a bo staff. This karate weapon is like the ancient quarterstaff that Robin Hood and his forest band used, but wielded faster and in more varied combinations of moves. Even though a bo is traditionally made of wood, I already had staffs made of fiberglass and aluminum. I figured a weapon made of solid steel would be unbreakable and invincible. So I looked up a steel fabricator online, called them, and tried to place my order.

“What are your specifications?” the polite woman on the sales desk asked me.

“Oh, I want a rod about six feet long and half an inch to an inch in diameter,” I said.

“You don’t understand,” she said. “What are your specifications for the steel?”

Ah! Well. Um … I did know what she was talking about. Any aficionado of pocket knives who peruses the catalogs knows a bit about Rockwell hardness, edge holding capability, rust resistance, and so forth. But what did I want in a staff?

When I was growing up and became interested in knives as both tools and weapons, as well as kitchen utensils, I asked my father—a mechanical engineer who had to be familiar with the properties of various metals—why we didn’t have stainless steel pocket knives. He said that stainless steel was too soft, with a consistency more like stiff taffy, and it could not be ground to take and hold an edge. Nowadays, however, through the work of chemists and steelmakers, we have stainless blades that are as strong as carbon steel, just as good at edge holding, and yet rust free.

In the end, I asked the steel fabricator for a rod of the most “vanilla” metal they had, middle of the road on every specification, except that it should be stainless. They sent me two rods in the order, and I still have them. But just unpacking the things, I knew I had made a mistake. A six-foot rod of one-inch diameter weighs about sixteen pounds—far too heavy to swing at the speeds of any bo kata and still hold onto. One slip and it would go flying across the room. Still, the experience of purchasing a steel bo had been instructive.
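For the curious, here is the back-of-the-envelope arithmetic, a rough sketch in Python that assumes plain carbon or stainless steel at about 0.284 pounds per cubic inch (roughly 7.85 g/cm³); the exact figure depends on the rod’s true diameter and alloy:

    import math

    STEEL_DENSITY_LB_PER_IN3 = 0.284  # assumed typical value for steel, ~7.85 g/cm^3

    def rod_weight_lb(length_in: float, diameter_in: float) -> float:
        """Weight of a solid cylindrical rod: density * pi * r^2 * length."""
        radius = diameter_in / 2.0
        return STEEL_DENSITY_LB_PER_IN3 * math.pi * radius ** 2 * length_in

    # A six-foot (72-inch) rod at a few plausible diameters:
    for d in (0.5, 0.875, 1.0):
        print(f"{d:5.3f} in diameter: {rod_weight_lb(72, d):4.1f} lb")
    # ~4 lb at half an inch, ~12 lb at 7/8 inch, ~16 lb at a full inch

Even at the skinny end of the specification, the rod comes out several times heavier than a typical hardwood or fiberglass bo, which is the point: too heavy to swing at kata speed and still hold onto.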

I have long thought it a shame that this country buys most of its steel from China, Japan, and even India. The company called “U.S. Steel” used to be one of the powerhouses of our country’s industry. Steel is the metal of civilization. It’s what we use to build bridges and skyscrapers. It’s the rebar in every concrete structure. It’s the main ingredient in our automobiles, still ahead of aluminum, plastic, and carbon fiber. That we apparently don’t make it anymore, that we rely on overseas suppliers, some of them practically Third World countries, for our steel seems a shame.

But it’s not that we don’t make steel in this country. We just don’t make large volumes of non-specific steel for I-beams, rebar, and other general utility purposes. We let the Japanese, the Chinese, or whoever else wants to build a plain old steel mill do that. Instead—as my father knew back in the 1960s and ’70s—we are the world’s leader in specialty steels, metals with specific qualities of hardness, ductility, flexibility, compression strength, rust resistance, and other measures of performance that are dictated by the composition of metal additives in the steel, its carbon content, its treatment, and its finishing.

For example, ball bearings must be made of extremely hard steel, because they work under the pressure of weight loads, but that kind of steel can be fairly brittle. Tool steel, the sort that goes into drill bits and chisels, must be extremely durable but also shock resistant, because it may be used to cut and shape other metals, including lesser steels. A knife blade has to be durable enough to stand up to shocks, hard enough to hold an edge, but also ideally resistant to rust, both in daily use and while just sitting in your pocket, where it is subject to the moisture surrounding your body.

So when we speak of steel these days, think of chocolate. The most popular brand in this country, the kind you get out of vending machines—the kind that’s made in great bubbling vats, distributed in boxcar lots, and sold everywhere—is Hershey’s. It’s not bad chocolate, as far as milk chocolate goes. But it’s not special. And it has a sort of grittiness and leaves an oily feeling in the mouth that children don’t mind but that connoisseurs care about. This is bulk chocolate—not the kind they use in making truffles or gift boxes with assorted flavors and fillings. If you want something special, with a particular texture, flavor, cacao content, and so on, you go to a premium chocolatier. The Swiss make fine chocolates for the discriminating baker and the customer’s palate; they let the U.S. pour out vats of Hershey’s milk chocolate.

The Swiss are to chocolate what America has become to steel: the experts in taste and quality.

Being a karate buff and an admirer of oriental martial arts, I used to think that the highest quality of steel—the magical stuff that can cut through anything, hold an edge forever, and never be bent or broken—was found in the samurai sword. We’re not talking about the swords that the Empire turned out by the carload for junior officers at the end of the war, usually cut and hammered from truck leaf springs. No, I’m talking about the ancient art of the sword as practiced by Japanese smiths for centuries—the kind that The Bride sought from Hattori Hanzo in the Kill Bill movies.

The Japanese have made samurai swords for hundreds of years. They work and fold, work and fold the steel, over and over again, to give it strength. They make one kind of steel with toughness and flexibility for the back of the blade, and another kind, for the edge, that can be ground sharp and stay sharp. The two are then put together: the edge steel set into a groove shaped in an ingot of the back steel. The combined blade is pounded into shape while red hot to weld the steels together, then plunged into water or oil to quench and harden it. The different steels react differently to the sudden temperature change, expanding and contracting at different rates along the edge and the back. This bends the blade backward and gives the sword its characteristic curve.

Layering the steel to give it strength, which the Arab and Spanish steelmakers did with Damascus steel, is an old trick.1 But it is only really needed when working with low quality steel that contains a lot of impurities. The Japanese swordmaking artisans would smelt a large lump of steel for just one or two blades in a backyard kiln made of bricks and fired with charcoal. It’s a creative and humanistic approach, requiring great knowledge and the skills acquired through centuries of trial and error and passed down from master to pupil. But the process lacks the precise, scientific control of a modern blast furnace. So these artisans had to layer their steel to drive out the impurities with repeated hammering and to give the steel the strength of many welded bands. A modern steelmaker only makes Damascus steel for show now, not for any inherently superior strength.

Steel has almost magical properties of strength, hardness, and durability, and now of corrosion resistance and other engineered qualities. These qualities have made it the favorite of tool makers and armorers over the centuries, and of builders and engine makers in the last two hundred years. It was even chosen as his party name by a world leader, Stalin. But these days, when you think of steel, think of chocolate.

1. It was the Arabs of Damascus, in ancient Syria, who are supposed to have invented this work-and-fold technique of forging, and so they gave this particular kind of patterned steel its name. The Moors brought the technique into Spain, which was also known for its fine steel blades. Tradition says the Damascus steelmakers didn’t quench their red-hot blades in water or oil but instead plunged them into the bodies of slaves, to quench them in blood. But that is probably a myth.

Sunday, October 27, 2019

Crazy People Among Us

Depressed person

Just like the poor, the mad have always been with us. They only present differently in society at each stage because we keep evolving our labels, diagnoses, solutions, and attitudes for dealing with brains and personalities that do not function within the societal parameters set by the rest of us “normies.”1

This is one reason you don’t see references to “schizophrenia,” “bipolar disorder,” or other psychological conditions in ancient or even near-modern literature. The first concepts of psychology as a science really came about in the late 19th century with practitioners like neurologist Sigmund Freud in Vienna and pure scientists like Wilhelm Wundt in Leipzig. Terms, diagnoses, and approaches have been evolving ever since and are still being continually re-examined and refined today. The science, such as it is, moves back and forth between seeing psychological conditions as issues originating in the realm of ideas (e.g., bad thinking, childhood mistreatment, mistaken perceptions of reality, or spiritual or moral dysfunction) and in the realm of brain structure and biology (e.g., physical trauma, inherited or acquired disease, and maladjustment of the brain’s neurotransmitters). Or some of both, I would think.2

In ancient times, in the surviving literature, you seldom saw outright references to madness. This is not because madness and insanity did not exist, but because people who were either born with faulty neural wiring or became ill through mistreatment and cumulative trauma died at a young age through starvation or misadventure, or were kept as a shameful secret in a back room of the castle or by the hearth in the family hut, or found some niche in society where a loose attention to reality was either no hindrance or an actual boon. Such a person might become a shaman, or an oracle such as the priestesses at Delphi, a spiritual leader, or if the person had a knack for jokes and storytelling, a king’s fool. Many positions in society benefit from a loose association with reality, an unusual or altered viewpoint or perspective, or the flights of fancy that others can interpret as being in touch with the gods, the future, or the forces that drive fate.

Certainly, it has been suspected that Joan of Arc was an undiagnosed schizophrenic, and the religious orientation of her voices and delusions fit in well with the needs of her king and her society during the Hundred Years’ War. And certain forms of autism and some personality disorders can be useful in giving a religious figure, a politician or war leader, or any kind of craftsman the fixity of purpose, the literal interpretation of facts and intentions, and the firm resolve needed for success. Such people are not usually easygoing or pleasant to be around, but in the right situation they can get the job done.

There are also degrees and variations of psychological conditions. A person may be “high functioning,” meaning that he or she can get along in society and manage not to be ostracized or killed while still dealing with a diagnosable condition. This realization comes under the heading of “Some people are just odd.” And again, some personality conditions, in the right niche, can prosper where the rest of us with conventionally “normal” viewpoints would flounder or suffer from distraction or terminal boredom.

When societies, particularly in Western Europe, became more crowded, urbanized, “civilized,” and socially conscious, the “crazy” people—perhaps pushed out of their comfort zone by the stresses of urban living—could neither find an acceptable niche in society nor survive on their diminished wits and skills. Then concerned citizens wanted some decent provision to be made for their welfare, if not for their declining mental health. So medieval hospitals like Saint Mary Bethlehem in London were soon repurposed as refuges or asylums for the insane and the incapacitated, usually under dispensation or with support from the crown. They were an unhealthy cross between hospitals with no positive medical regimen and debtors’ prisons from which there was no escape. The inmates were left to sit in their condition, sometimes in their own filth, muttering or raving to themselves, with minimal care. Or they would be given over to experts and reformers who would attempt cures based on the latest moral or religious philosophies. “Bethlehem” became “Bedlam.”

The practicalities of warehousing the insane—out of sight of society and out of both its and the patient’s mind—developed along with science and medicine. Patients were eventually put in padded rooms and under restraint, and sometimes lobotomized, shocked, or otherwise physically altered, until the 1950s. Then the drug chlorpromazine, sold under the brand name Thorazine, was developed as an antipsychotic, an anxiolytic, and—well, actually, it’s a form of sedative. It is hard for a patient to “act out” and become a nuisance to the staff when his or her mind is in a deep fog. We have since developed other and better medications to specifically treat psychosis, depression, mania, anxiety, and the other symptoms of recognized diseases. These drugs—often prescribed in “cocktails” of two or three at once—mostly work on imbalances in the brain’s various neurotransmitters, which regulate communication across the synapses between nerve cells. All of these medications have some measurable side effects, although the newer versions are often less intrusive than the older ones.

With proper medication over the long term, and with sufficient “talk therapy” to explain the patient’s condition to his or her satisfaction, a moderately to severely ill person can now become essentially “high functioning.”

Also in the 1950s, we began to see a change in the popular culture regarding “insanity” or “madness.” Stories and movies like The Snake Pit, Harvey, and One Flew Over the Cuckoo’s Nest told of innocent, eccentric, and sometimes just careless or carefree persons being railroaded into insane asylums, sometimes with the connivance of an uncaring judge or covetous relatives, and then fighting to get out. The fear of being incarcerated without cause or cure, like the Victorian fear of falling comatose and being prematurely buried alive, drove changes in society.

In California, the legislature passed the Lanterman-Petris-Short Act in 1967, which went into full effect in 1972; other states passed similar legislation at about the same time. The California act regulated involuntary civil commitment to a mental health institution. Basically, a person in any mental condition can only be held against his or her will for 72 hours (three days), and then only if he or she can be considered “a danger to self or others, or gravely disabled.”3 This is the premise of the state’s Welfare & Institutions Code (WIC) section 5150. After 72 hours, the person must be given a hearing before a judge, where the committing officer or county mental health representative can show through examination that the person is still dangerous or disabled; that may extend the commitment up to 14 days (WIC 5250). And after that, presumably with further treatment, another hearing can extend the hold for an additional 14 days, especially if the person is found to be suicidal (WIC 5260). And then, if the patient does not agree to further treatment, he or she must promptly be released. This is why most court-ordered, mandatory treatments such as for alcohol or drug abuse are only for 28 days; after that, the person must either be charged under criminal statutes or released.4

This and similar legislation has the effect of going into the mental hospitals and asking, “Who here does not think they’re ill and so should be released?” And since anosognosia, or lack of insight into one’s own condition, is a symptom of most mental illnesses, the patients—whether they were wrongfully committed or not—all asked to be let go. In 1963, during the Kennedy Administration, the Community Mental Health Act was passed to provide local services for the mentally ill, allowing them to live in and become part of their local communities. But the act had no accompanying funding and remained an optimistic and hopeful ideal. In California, mental health services are funded and provided locally at the county level. Of course, it’s a lot more difficult, costly, and time-consuming to provide for people in the community on an individual, meet-and-greet, case-management basis than to assign them to a bed in a state hospital and care for them someplace far off in the hills. But that’s now the law.

Which means we are pretty much back to the pre-urban middle ages. Parents are caring for their severely disabled son or daughter, housing them in a back room, trying to get them the time-limited and inadequate care available under the county system or through private insurance, and eventually providing for their special needs with a modification in their wills that accommodates ongoing care and conservatorship after their own deaths without jeopardizing the disabled person’s access to public services. That, or the mentally disabled relative is turned loose by society to survive on the street as a homeless person, anesthetizing his or her mental condition with drugs and alcohol, scrounging in garbage cans for sustenance, and generally dying young.

But I don’t believe in end states. Every time, every system, every current condition is a point on a curve, in transition to either a better or a worse future state. And I believe that the thrust of science and technology—in this case, medical science, psychological understanding, advances in neurology and physiology, and pharmaceutical discoveries—is upward. That things will get better.

Our science will either improve the level of treatment available on an individual, personalized basis—perhaps with better medications, better understanding of disease processes, or therapies assisted by artificial intelligence—or we will develop more attractive, more socially acceptable, more comforting and caring treatment centers—perhaps on the basis of assisted living communities and therapeutic resorts. In either case, the under-funded, over-stressed, catch-as-catch-can approach of current community services cannot continue: the families caring for their wounded children will fail through exhaustion and revolt, and the society plagued with needles and feces in the streets and tent cities under its freeway overpasses will demand a better solution.5

One way or another, we must recognize, accept, and care for the crazy people among us—or slip back into a dark age.

1. That is, as they say in Alcoholics Anonymous and other affected circles, the “chronically [or habitually] normal people.”

2. The one thing that cannot cause psychological injury is the sort of direct emotional and intellectual manipulation suggested by popular stories and movies of the early 20th century and referred to as “gaslighting.” You can intimidate people, confuse them, make them weary and angry, you can break down their social and emotional defenses, even cause them to lash out in fear and frustration—but you cannot “drive them [clinically] insane.”
    Of course, through profound exposure to bad experiences such as a total reversal of a person’s life condition, a violent trauma, cumulative fatigue or injury, especially in wartime situations, and other severe dislocations, a person may suffer continuing emotional distress: post-traumatic stress disorder (PTSD). This is a real psychological condition with recognized symptoms and recommended treatments. It can happen to anyone. But still, the trauma has not driven the person “insane” in the old sense of the word: laughing manically, cackling inanely, and seeing phantasms.

3. And it has been decided judicially that a person who can only feed him- or herself by scrounging in a garbage can does not fit the definition of “gravely disabled.”

4. Two exceptions exist. At the first 14-day hold, a person deemed to be gravely disabled due to mental illness (WIC 5270) may be held for an additional 30 days. And a person who has made a serious threat of substantial physical harm to another person (WIC 5300) may be held for an additional 180 days.

5. We are already seeing a backlash in the form of Laura’s Law, Assembly Bill 1421 from 2002 in California, inspired by a young woman killed by a man who had refused psychiatric treatment. The law allows for court-ordered Assisted Outpatient Treatment (AOT) for people with severe mental illness. It is based on a similar statute in New York State and is being adopted elsewhere.

Sunday, October 20, 2019

Three-Fifths of a Person

White-faced human

It was a compromise made in the drafting of the U.S. Constitution1 that “other persons”—that is, enslaved people—should be counted as three-fifths of a person in the census for the purpose of allotting a state’s members in the House of Representatives, votes in the Electoral College, and direct taxes.2

Note that this clause does not mention “slavery,” “slaves,” or any race by name—except Indians, who didn’t count, having their own tribes and nations and, eventually, their reservations outside the jurisdiction of federal and state governments. But everyone knew whom the drafters were talking about: the Africans, brought to this country in chains and bound to their masters as chattel, the same as any cow or pig. And I am sure that many people in the 18th and 19th century assumed, on the basis of their own prejudice, that the three-fifths referred to the subhuman state of these bound people, not worth a whole white man in terms of spirit, understanding, and mental capacity.

But that is not at all the case. The “three-fifths clause” was intended to preserve the representative structure of the new federal compact. Under the Constitution, each state was apportioned two Senators in Congress, regardless of the state’s size or population. That way, every state got an equal say and share in governance. But then each state received a number of Representatives in proportion to its population. That way, individual citizens would get representation in proportion to their presence in the state. A populous state like New York gets its two Senators and, while it started with six seats in the House in 1789—because the whole nation had a much smaller population back then—that number waxed to 45 seats from 1933 to 1953 and has since waned to 27 seats today. Meanwhile, Alaska, with a territory about equal to a fifth of the entire “Lower 48,” gets its two Senators and just one Representative.

The reasoning behind the three-fifths clause was that states with large slave populations would get outsized representation in Congress and in the Electoral College if each of those non-citizen, non-voting persons, living essentially outside the body politic, were counted toward representation. The Old South, plus any new states entering with the institution of slavery, would get an excess number of representatives and electoral votes. Since the slaves themselves would have no vote, and therefore no say in the election of the representatives apportioned to their number—they would not actually be represented by them, and would not have their own needs and desires heard and presented—this excess of members in the House and the Electoral College would be under the control of the voting-age white men. And that would give the latter more power and representation than the Constitution intended.

Without the three-fifths clause, the Old South and the slavery it spread to any new states would dominate the Congress and the election of the President. They would have an unfair advantage going into the future, no matter which party was dominant in those states.
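A toy calculation, my own illustration with made-up numbers rather than anything from the Constitutional debates, shows the arithmetic at work. Take two states with the same number of free citizens, one of which also holds 300,000 enslaved people, and split 100 House seats in proportion to the counted population under a full count, a three-fifths count, and no count at all:

    def apportion(seats_total: int, counted: dict[str, float]) -> dict[str, float]:
        """Split seats in proportion to each state's counted population.
        Fractional seats are kept to show the tilt; real apportionment rounds by formula."""
        total = sum(counted.values())
        return {state: seats_total * pop / total for state, pop in counted.items()}

    free = {"State A": 500_000, "State B": 500_000}    # equal free populations
    enslaved = {"State A": 0, "State B": 300_000}      # hypothetical numbers

    for label, ratio in (("full count", 1.0), ("three-fifths", 0.6), ("not counted", 0.0)):
        counted = {s: free[s] + ratio * enslaved[s] for s in free}
        print(label, {s: round(n, 1) for s, n in apportion(100, counted).items()})
    # full count:    A 38.5 seats, B 61.5 -- B's voters wield the extra seats
    # three-fifths:  A 42.4 seats, B 57.6 -- the compromise trims, but does not erase, the tilt
    # not counted:   A 50.0 seats, B 50.0

The three-fifths ratio did not grant the enslaved any voice; it only shaved back the bonus representation their owners’ states would otherwise have collected.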

One could say that the same holds true for the status of women. They were counted in the census but were non-voting (until the 19th Amendment, passed by Congress in 1919 and ratified in 1920) and therefore were impaired as citizens. That was an unfortunate oversight of 18th- and 19th-century minds and prejudices. But for the purposes of state representation in the federal government, the non-voting status of women did not sway the power of any state one way or the other. Women and men were present in roughly equal numbers, and that proportion was unlikely to change drastically with demographics and thus skew a state’s representation.3

The reasoning behind the three-fifths clause would apply today to the situation of open borders and unrestrained, undocumented immigration. Democrats and Progressives tend to favor open borders on humanitarian grounds. Republicans and Conservatives tend to oppose them on grounds of representation. Some people think the Conservative side fears that immigrants from Mexico and Latin America will generally vote for Progressive and Socialist causes. But that would only happen if the immigrants became citizens and acquired the right to vote—which one hopes would happen eventually but will not, in any numbers, happen right away. The alternative, one might believe, is that certain districts could illegally give these undocumented residents ballots and then happen to count them in a blatant violation of election laws. And that would be wrong.

The worse problem is that these undocumented residents would be counted in the decennial census, which currently has no way to differentiate them from citizens with voting rights and visitors holding valid visas, green cards, or other documentation but without the privilege of voting. That is, these undocumented residents will count towards an adjustment of the seats in Congress and votes in the Electoral College for any state in which they settle, but they will not then have the vote to ensure their own representation. They would increase the power of the legal voters in the state, but the state’s newly seated Representatives would have no real cause to hear and present the concerns and desires of the state’s undocumented residents. The latter would be used in a powerplay, the same as the slaves in the Old South if they had been counted one-for-one.

I have another reason for disliking open borders. People coming into this country without some form of official recognition—visa, green card, citizenship—are and will always be subject to restrictive immigration laws and the prospect of punitive deportation for any wrongdoing. Sometimes those laws will be enforced, sometimes not, but the threat will always be there. And that is a fact. But this situation also puts these people in the shadows, living as second-class residents, vulnerable to being bilked and defrauded, finding valid employment only in sweatshops and the gray market, and unable to go to the authorities for redress of wage and hour violations because of their status. This isn’t right. It represents a new kind of slavery.

And if we are going to apportion representation and electoral votes on the basis of non-voting residency, why stop at people? I can foresee a time when issues of environmental vulnerability and human-caused damage require representation for the non-human element, for endangered species and even trees. If we start counting trees along with people—and these days, it wouldn’t totally surprise me—then the world flops over, and states like Alaska and Vermont become the representational and voting powerhouses. But the presumably human representatives who took their seats in Congress would not actually be held accountable by the trees and so would only increase the power of the human voters in those states.

And that just wouldn’t be right—even if you counted a tree as only three-fifths of a person.

1. Article I, Section 2, Third Clause: “Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons.”

2. Obviously not the federal income tax, which was not authorized until the 16th Amendment, passed by Congress in 1909 and ratified in 1913.

3. Unless you counted the territories of the old and wild west, where men usually outnumbered women. But that situation tended to even out with the arrival of civilization—and so of women and families—which generally occurred before a territory could apply for statehood.

Sunday, October 13, 2019

God and the U.S. Constitution

The U.S. Constitution

I am an acknowledged atheist. I don’t wear the label proudly or in rebellion. I know that this admission sets me apart from many of my friends and fellow human beings. I am an atheist not because I know a secret that others don’t, but because I lack a gene or a brain circuit or existential antenna that would let me commune with and feel the presence of God.

In my conception, in this absent state then, God—or Yahweh, Allah, Brahma, or any other name you use—is still a good idea. That is, for most people, the Existential Being that we revere and worship is a conception of goodness personified, something to strive for and emulate, a guide to right action and good thoughts, an inducement to calm and serenity.1

In my conception, this godhead exists in the human mind, is transmitted through spoken and written words, and in utter reality has no existence outside of human thinking, action, and cultural conventions. In other words, do I believe that an immaterial Being, a palpable Force, a Spirit or Intelligence, Creator of the universe as well as of the human form and mind, has an actual presence out beyond the stars and existing for all time and outside of time? For me, no. For others, possibly yes to probably maybe.

But god, godhead, the idea of a loving, creative, affirming presence seems to be part of the human psyche. I credit this to the fact that humans, with our brains and developed skulls larger in volume than the pelvic passages of our birth mothers, are born prematurely, with soft heads. We therefore require the loving attention of our parents through the first couple of years of our being. It’s not for nothing, then, that we think of the godhead as either or both a Sky Father and an Earth Mother. We needed these beings, both distant and close, from our first moments of consciousness.2

In the same way, the U.S. Constitution is a human idea—an ideal, if you will—that has no existence outside of the human mind and the actions it engenders. Like the idea of a god, it is formed in words and written down on parchment and printed in booklets, the same way that God is revealed in the Bible, the Quran, or other sacred texts. But the Constitution has no other physical presence or existence outside of the human mind. It has no force that the human mind does not give it.

And yet the Constitution has a powerful influence in American life. We refer our laws and practices back to it. We see it as the ultimate test of rightness for American culture and virtues. It has stood for some 230 years. It has been amended twenty-seven times—the first ten of those at its very creation, known as the “Bill of Rights.”3

Today, we seem to be undergoing either a schism or a reformation with regard to the U.S. Constitution. On the one hand are the originalists, who want to construe its language strictly, according to the words on the page and the body of legal practice surrounding the interpretation of such texts. On the other hand are the proponents of a “living Constitution,” who want to interpret the text in terms of current culture and mores, and who pin the words on the page to the presumed intentions and attitudes of 18th-century white Europeans who could not have anticipated our technically advanced, multicultural, and supposedly superior view of law and justice. To the latter, the Constitution is a good start but needs work. The disagreement over interpretation seems to center on that same Bill of Rights and not so much on the basic structure of government laid down in the main text.

Although that governmental structure, too, is under subtle attack. Consider the movement away from laws being written in compact, comprehensible, easily analyzed form by the Legislative branch and merely enforced by the Executive branch—toward a Congress that writes long, abstruse bills full of intentions and to-be-desired end states, which are then left to the Administration and its Cabinet departments and alphabet agencies to interpret into regulations and laws. This isn’t so much a direct and reasoned attack on the Constitution and the government’s structure as a decades-long, bipartisan, and mostly lazy approach to sliding responsibilities around from one branch that has to fight for reelection every two or six years onto the other branch whose staff of bureaucrats and regulators is largely unelected and insulated from public criticism.4

And what happens when the popular belief in the rightness of the Constitution or the power of God goes away?

We can see in the sober words attributed to G. K. Chesterton what abandoning the guiding notion of God and the principles of religion has wrought in our current culture: “When men choose not to believe in God, they do not thereafter believe in nothing, they then become capable of believing in anything.” Socialism, Communism, environmentalism (divorced from, and even opposed to, human values), “the arc of history,” and every other -ism, doctrine, cult, and clever notion springs forth as a mainstay of human thinking. It’s not just that many of these doctrines are destructive, unstable, or unsustainable in the long term. They are also not as hopeful and sustaining in a person’s everyday life as the belief in a benign and loving presence. Prayer offers the believer, at the very least, a little daily chat with a presumed intelligence that is stronger, wiser, and more forgiving than the believer him- or herself might personally be. Adherence to the tenets of pure social justice, environmental sacrifice, or some other collective doctrine usually entails a bitter denial of personal hopes.

In the same way, if we abandon the strict construction of the U.S. Constitution, its Bill of Rights, and the various modifying Amendments, we are left with the personal opinions of competing politicians—and those guided by some social, political, or economic theory with which the rest of us might not agree. The Constitution and its constituent Amendments are remarkably silent on issues of social, political, and economic theory, other than support for the individual in particular and the people in general against the tyranny of the state or the majority in control of the government. It’s a pretty lenient and forgiving structure; other systems of government are a lot more aggressive and demanding.

But again, in my conception, none of this is dependent on outside, impersonal forces. Both God and the Constitution are human creations, operating wholly within the scope of the human mind, and having effect only through the interactions among those who so believe in the first place.

These are, ultimately, fragile things.

1. Unless, of course, you worship Satan, the Devil, Baron Samedi, or some other dark force—and then your heart is in a different place from mine.

2. I imagine that a race of intelligent sea turtles would have a very different conception of God. They are abandoned by their mothers as a clutch of eggs in the sand, are warmed by an invisible Sun, hatch by the light of the Moon, and scramble across dry land to find the ocean, pursued all the time by hungry gulls and other predators. I imagine their God would not be caring or life-giving at all, but cold and distant like the Moon itself.

3. That is, the rights that citizens have above, beyond, and preceding the Constitution, as inalienable rights that come from some source—God, perhaps?—greater than the state or the federal union and the document that binds that union.

4. This is a basic problem with democracy. Ultimately, the people who have to run for office must seek approval from the voters. You do this in one of two ways: offer favors, projects, and advantages that other politicians can’t or won’t offer; or avoid association with damaging and restrictive laws and their effects that the voters won’t like. Promise the sky but avoid the whirlwind and the lightning.