Sunday, April 14, 2024

Why the Moon

Subatomic particle

In November 2022, NASA launched Artemis I, an uncrewed test flight of the Orion capsule, which is designed to carry four crew members, on a mission to circle the Moon and return. In February 2024, Intuitive Machines of Houston, Texas, launched the Odysseus lander to the Moon’s south pole. Although it fell over on its side and soon ceased functioning, the lander made a successful soft landing, without the kind of damaging impact that has ended some other government-launched probes. And SpaceX is now doing trial runs of its Starship, a heavy lifter—capable of carrying up to 150 metric tonnes to orbit and beyond—that could eventually get crews, supplies, and building materials to Mars or the Moon.

So, apparently, without having to invoke John Kennedy’s brave vision of the 1960s, “We choose to go to the Moon,” we are going back to the Moon. Yes, maybe to Mars, eventually. But people seem to be committed, by various routes, with various vessels and funding sources, to returning human beings to the Moon. Mars can be inhabited by robots for now, but the Moon will apparently get the first off-world human visitors. Again.

And why not the Moon? Yes, Mars has more gravity, but still a whole lot less than the Earth’s. This means Mars has trouble holding on to an atmosphere composed of molecules lighter than carbon dioxide. So yes, Mars does have an atmosphere, with an ambient pressure about 1% that of Earth’s and composed almost solely of carbon dioxide. And that near-vacuum still carries dust storms that persist for weeks or months at a time. In contrast, the Moon has no atmosphere, and the dust settles the instant it gets stirred up. Sometimes, the absence of a thing can be a greater blessing than its minimal presence. More importantly, Mars is a long way away, with an outward bound and return trip measured in months rather than days. The logistics of going to Mars and being supplied there are really tough.1

So, we’re going back to the Moon, this time not just to step out and say we did it, but maybe to establish a presence. Maybe to scout a base. And possibly, eventually, to establish a colony. And I say, “Good!” Maybe even, “Hallelujah!”

Why? Because we humans are a curious and exploring species. We walked out of Africa and expanded around the world. It was not just the Europeans who reached the water’s edge and sailed beyond—the heritage of the Vikings, the Portuguese, and the Spanish—but everyone who was dissatisfied with their little plot of land and wanted to look beyond the horizon. Well, now the horizon is beyond the atmosphere. And yes, the artifacts of dead civilizations who never left their planets will be picked over by the living ones who dared to make the trip.

And that brings me to my point. We need the Moon, not just to satisfy our curiosity or to say we did it. We need the Moon as a forward base. Mars, yes, one day, for colonizing, if we ever need the elbow room. But the Moon is our logical off-planet base, outside the deepest part of the Earth’s gravity well, able to focus our telescopes and listen on all sorts of wavelengths because of the lack of atmosphere, with a station on the far side shielded from all the radio noise on Earth. And the Moon is also the first place that the Others—the friends, the enemies, the intruders, the invaders, the aliens—will set up their own base, their dropping-off point, as they approach the Earth.

And you know they’re coming.

For the past hundred years or so, we’ve been broadcasting coherent radio signals into the stratosphere and leaking them out toward the stars. At the speed of light—which is also the speed of radio waves—that creates a bubble of our babblings two hundred light-years across for anyone who’s been listening.

True, according to the inverse square law,2 those signals that are broadcast rather than beamed directly tend to diminish rapidly. At a distance of one hundred light-years, Marconi’s original radio broadcasts will be remarkably faint—probably on the order of the barest whisper. They might be drowned out by the clang of two nearby hydrogen atoms colliding. And the Sun is a loud star, radiating not only in the infrared—or heat—and visible light but also with lots of radio noise. Compared to that, the broadcast radiations from Earth will be like a mouse farting on the stage at a rock concert.

But people looking for signs of life on planets around likely stars will discount the rock concert. They will be listening for mouse farts. And, if these listeners are already out there, they will probably have better ears than we do.

I’d say it’s a race against time. And we’re already fifty years behind.

1. As I’ve sometimes said, if you want to build a colony on Mars—or the Moon, for that matter—first build a five-star hotel with Olympic-sized swimming pool at the summit of Mount Everest. The logistics are better, and the air is breathable—barely. If that’s too hard, because of the smallish footprint, then build it in Antarctica. Logistics, atmosphere, and temperatures there are a snap.

2. The strength of any radiated signal—radio waves, light waves, sound waves—diminishes at the square of the distance from the source. So, the light from a bulb at two feet from the socket is one-quarter the strength of the light at one foot.
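To put rough numbers on that falloff over interstellar distances, here is a minimal sketch in Python. The one-megawatt, omnidirectional transmitter is an invented round figure for illustration, not a measurement of any real broadcast:

    import math

    LIGHT_YEAR_M = 9.4607e15  # meters in one light-year

    def flux_at_distance(power_watts, distance_m):
        """Inverse square law: the power spreads over the surface of a sphere."""
        return power_watts / (4 * math.pi * distance_m ** 2)

    power = 1.0e6  # a hypothetical 1-megawatt omnidirectional broadcast

    for ly in (1, 10, 100):
        flux = flux_at_distance(power, ly * LIGHT_YEAR_M)
        print(f"{ly:>3} light-years: {flux:.3e} watts per square meter")

Every factor of ten in distance costs a factor of one hundred in received signal, which is how a planet-wide broadcast ends up as a mouse fart.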

Sunday, April 7, 2024

“Good Dog” Management

Grinning dog

Back when I worked in internal communications at the local utility company, I edited a monthly newsletter for managers and supervisors. One of the themes that we promoted was the art of leadership, which I define as “accomplishing objectives through the willing participation of others.” In my view, this is one of the highest art forms to which a person can aspire. It involves setting values, providing sound judgment when necessary, and motivating people. It’s a tricky task for anyone.

And there are a lot of natural human impulses that can poison the atmosphere and make leadership along these lines almost impossible.

For example, it is a natural human impulse not to give up an advantage. If you are working in a position of authority over someone, it can be a natural tendency not to praise their work. Why not? Because then you give up some element of imagined control. If you praise them, then it becomes harder down the line to point out their errors and faults. You have given up some of your authority—you think. And you imagine that if you later need to correct that person, they will in turn say, “But you told me I was doing a good job! Now you’re criticizing me. That’s not fair! That’s not right! Make up your mind!” Hearing this subsequent conversation in your head, you may decide it’s just not worth the hassle to tell people they are doing well—even if they are.

It gets worse. Some people in positions of authority think they can gain advantage by putting their subordinates in what I call the “bad dog” condition. Rather than refraining from pointing out the subordinate’s good actions and positive results, these leaders and managers take every opportunity to find and criticize errors and faults. They think that by keeping their employees in the doghouse and fearing for their jobs, they have increased their own control. And maybe that works with actual dogs, who will tolerate amazing amounts of abuse from someone who puts down their daily food bowl.

With actual people, however, who are capable of thinking and reflection, being put in the “bad dog” position creates resentment. Hostile employees might, given the opportunity, participate in what used to be called a “white mutiny.” That is, they will take advantage of a developing situation to engineer a bad outcome for which they will bear no direct responsibility. They won’t disobey orders or throw their shoes into the gears to sabotage the operation. Instead, they will simply look the other way, play stupid, follow ill-considered orders to the letter, and shrug their shoulders. Oh, well.

And it gets worse. People who are continually criticized, harassed, and micro-managed to their spiritual detriment will eventually give up. It’s not that they hate the organization or wish it ill; they just don’t know what to do, because anything they do turns out to be wrong. Inappropriate criticism saps a person’s motivation. It makes them ineffective. They will do the bare minimum to keep the organization from falling apart, but not much more.

The issue of micro-management is separate but related. The boss—I won’t say “leader” here, because such a person isn’t one—thinks he or she has all the answers. The boss wants to see that only the things he or she can imagine or envision get done, and only in the way, by the methods, and in the timeline that he or she can see. They don’t want the “willing participation of others” so much as the activation of “meat robots.” Micro-management is one step removed from pushing the employee or subordinate aside and saying, “Here, it’s just easier if I do it myself.” The micro-managing boss wants employees to do it exactly like that, except using their own minds and hands under a kind of frenetic, telepathic control.

So, what is the alternative? The true leader sets organizational values and goals, provides fair and rational judgment when a novel question or situation arises, and otherwise motivates people to think, reflect, envision, and act on their own for the good of the organization. This requires a major element of trust in his or her employees or subordinates. The leader must put them in the “good dog” position—always being respectful of the fact that they are not actually dogs or animals. The leader must then have the security in his or her position to step in and tell an individual or group when something has gone wrong or an objective has not been achieved, and then to suggest a better way of doing things. But all the while, the leader has given up nothing by letting people know when things are going well and that they are doing the right things.

Leadership is tricky. The leader is constantly balancing needs and objectives with the sense of what his or her employees and subordinates are perceiving and thinking and how they are likely to react. That’s a tough job. But it’s one of the best jobs and the highest interpersonal endeavor. It’s a true art form.

Sunday, March 31, 2024

Quantum Entanglement and Other States of Mind

Butterfly Nebula

Okay, here is where I show either my great ignorance of science or a glimmer of common sense.

My understanding of quantum mechanics is based on reading various articles in science magazines, reading books about it for the lay reader, and watching The Great Courses lecture series on it. There may be things I don’t understand because I’m not a mathematician, but some of the claims seem to be more about the human mind than about anything that’s going on in the universe.

Take the notion of quantum entanglement. Supposedly, two particles—two photons, say—can become entangled. Typically, this happens when the particles are created together. For example, when an excited rubidium atom decays, it can release two photons that are entangled. Or a photon passing through certain types of nonlinear crystals will split into an entangled pair. They will remain entangled until one or the other interacts with something—that is, generally until it is measured or observed. And this entanglement, this connection, will persist across vast distances, and what happens to one of the pair, even at the far end of the galaxy, will be instantly communicated to the other. That is, the lightspeed restriction of special relativity on the transmission of information is ignored. This was the “spooky action at a distance” that Einstein questioned.

Supposedly, if two particles are entangled, they will have complementary but opposite qualities. For example, if one entangled photon has “positive spin,” then the other will have “negative spin.” But according to quantum mechanics, the characteristics of any particle at the quantum scale cannot be determined except by observation. Further, the existence of any particle is not determined—is not fixed in time and space, is not concrete, is not “real” in the world—until it is observed. This includes its exact location in space, its direction of travel, and qualities like its spin state. So, a photon’s spin may not only be either positive or negative; the photon’s spin is both positive and negative—that is, in a quantum superposition of both states—until the photon is observed and its spin is measured.

In another case, if a stream of photons is passed through a two-slit experiment—some going through one slit in a shield, some through the other—their intersecting fields will create an interference pattern, like waves passing through a narrow harbor entrance. This interference yields a series of parallel lines on a screen beyond the shield with the two slits. The interference lines will be heavier in the middle of the series and lighter out at the ends, indicating that most of the photons travel relatively straight through. Still, the result will not be two isolated bands of hits but instead a diffraction scatter.

But according to quantum mechanics, if a single photon is fired at the two-slit experiment, it does not necessarily hit the screen opposite one slit or the other. Instead, it may randomly fall anywhere within that diffraction pattern. The single photon passes through both slits, its field interferes with itself, and it acts as if it is in two places at once, until it is observed hitting the screen in only one place.
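For anyone who wants to see the shape of that pattern, the textbook two-slit intensity curve is easy to sketch numerically. The wavelength, slit width, and slit separation below are arbitrary illustrative values, not the parameters of any particular experiment:

    import numpy as np

    wavelength = 500e-9  # 500 nanometers, green light (illustrative)
    slit_sep   = 50e-6   # distance between slit centers
    slit_width = 10e-6   # width of each slit

    theta = np.linspace(-0.02, 0.02, 9)  # viewing angles in radians

    # Two-slit interference modulated by the single-slit diffraction envelope.
    # Note: np.sinc(x) is sin(pi*x)/(pi*x), so we pass a*sin(theta)/lambda directly.
    fringes  = np.cos(np.pi * slit_sep * np.sin(theta) / wavelength) ** 2
    envelope = np.sinc(slit_width * np.sin(theta) / wavelength) ** 2

    for angle, intensity in zip(theta, fringes * envelope):
        print(f"angle {angle:+.4f} rad  relative intensity {intensity:.3f}")

The bands come out strongest near the center and fade toward the edges, whether the photons arrive in a flood or one at a time.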

In a third case, some experiments with photons—including the famous Michelson-Morley experiment, which was used to disprove the idea that light traveled throughout the universe as a wave in a medium called “luminiferous ether”—employ partially silvered mirrors. These are mirrors that randomly reflect some photons and randomly pass others. If you set up a course of these mirrors, so that some photons take one path and some another, you can place detectors to see how many photons go which way. But interestingly, according to quantum mechanics, if you fire just one photon through the experiment, it will take both courses until it’s detected along one path or another. According to quantum mechanics, the photon’s position is everywhere in the experiment until frozen on one path by the act of detection or observation.

This idea of a particle at quantum scale being everywhere at once—with no fixed position, direction of travel, or defining characteristics until actually observed—is central to the nature of quantum mechanics. The physicists who practice in this field understand that the act of observing a tiny particle—a photon, electron, neutron, and so on, all of which are observed in flight—changes it. That is because you cannot observe anything that small without interfering with it—like hitting the detector screen beyond the slits or bouncing another particle off it in an electron microscope—and either stopping it in its tracks or deflecting it off toward somewhere else. The quantum world is not fixed or knowable until it is known by observation.

This is the example of Schrödinger’s cat. Seal a cat in a box with a vial of poison and a mechanism that breaks the vial when an atomic isotope decays. Until you open the box, the cat is both alive and dead—a superposition of these two states—and the cat’s actual condition is not resolved until you observe it. This is taking the quantum physicist’s belief in “unknowability” to an extreme.

I believe that part of the basis for this mindset is that quantum mechanics is a mathematical system, built on equations based on probabilities. In mathematics, it’s hard to build an equation around a statement that says a value might be one thing or it might be another. Instead, you place a probability function in place of the necessary value. So, in the experiment with Schrödinger’s cat, the cat’s life or death has a probability based on the nature of the isotope and the length of time in the box. If the isotope has a half-life of ten thousand years, and the cat has been in the box ten minutes, there’s a high probability the cat is still alive. If the isotope has a half-life in seconds, like some isotopes of oxygen, then the cat is likely dead. But the probability function is not resolved until the cat is observed.
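The probability bookkeeping for the cat is, in fact, trivial to write down; it is the superposition claim that does all the philosophical work. A minimal sketch, with illustrative round numbers for the half-lives:

    def survival_probability(elapsed_s, half_life_s):
        """Chance that a single atom has NOT yet decayed after the elapsed time
        (and so, in the thought experiment, the cat is still alive)."""
        return 0.5 ** (elapsed_s / half_life_s)

    TEN_MINUTES = 10 * 60  # seconds

    long_lived  = 10_000 * 365.25 * 24 * 3600  # half-life of about 10,000 years
    short_lived = 27                           # half-life of a few dozen seconds

    print(f"Long-lived isotope:  {survival_probability(TEN_MINUTES, long_lived):.9f}")
    print(f"Short-lived isotope: {survival_probability(TEN_MINUTES, short_lived):.2e}")

Nearly certain life in the first case, near-certain death in the second, and in neither case does the arithmetic care whether anyone has opened the box.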

In the case of two entangled photons, the probability of either one being positive or negative spin is fifty percent, an even coin toss. And, in the mindset of quantum physicists, once the spin of one photon in the pair is established and fixed, the spin of the other is also fixed. The fifty-percent probability function collapses and all is known. The question in my mind is not whether the two photons communicate with each other across the spacetime of the span of a galaxy, but how the observer at one end can communicate the discovered state to the non-observing holder of the photon at the other. If the holder of the passive photon observes it, then yes, he will know its spin state and resolve the probability function to his satisfaction. He will also know instantly that the distant photon has the opposite spin. But he can’t communicate any of this to his partner holding the other photon until his message travels across the lightyears. So, big deal.

Say I cut a Lincoln head penny in half across the president’s nose. One half the coin shows his eyes and forehead; the other shows his mouth and chin. Without looking, I take each half-coin and seal it in an envelope. I give one to my partner, who takes it across the galaxy. If he opens his envelope and sees mouth-and-chin, he knows that I must have eyes-and-forehead. And vice versa. But I won’t know what I have—unless I wait eons for a light-speed signal from him—until I open my own envelope. The penny, existing in a classical, non-quantum world, has an established state whether I look or not. It does not exist in a superposition of both eyes-and-forehead and mouth-and-chin until one of us observes it.
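You can even simulate the penny version and see why nothing is being transmitted. Each envelope, opened by itself, looks like a fair coin toss; the perfect anti-correlation only shows up when the two holders compare notes, which they can do no faster than light. A quick sketch:

    import random

    def seal_penny_halves():
        """Cut the penny and seal the two halves in envelopes at random."""
        halves = ["eyes-and-forehead", "mouth-and-chin"]
        random.shuffle(halves)
        return halves[0], halves[1]  # (my envelope, my partner's envelope)

    trials = 10_000
    mine_was_eyes = 0
    always_opposite = True

    for _ in range(trials):
        mine, partners = seal_penny_halves()
        mine_was_eyes += (mine == "eyes-and-forehead")
        always_opposite = always_opposite and (mine != partners)

    print(f"My half showed eyes-and-forehead {mine_was_eyes / trials:.1%} of the time")
    print(f"The two halves always differed: {always_opposite}")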

My point—and I certainly may be misunderstanding the essence of quantum mechanics—is that the concept of superposition, of probability functions, of tiny things being in two or more places, two or more states at once, and going nowhere until observed by human eyes and instruments is a thoroughgoing mindset. It’s a reminder to the quantum physicist that you don’t know until you observe. It says that the whole conjectural world of the very small is just that: conjecture, theory, and a mathematical construct until human instruments intervene to prove a thing is so or not so.

And that’s a good reminder, I guess. But taking it to the extreme of insisting that the cat is neither alive nor dead—even a very tiny cat who makes no noise and is otherwise undetectable—until you open the box … that calls into question the reality of the entire enterprise.

Sunday, March 24, 2024

Visions of God

Ancient of Days

First, let me say that I’m an atheist. Although raised in the Christian religion, specifically Protestant, by parents who did not go to church themselves, I have never heard the voice of God, don’t need an omniscient, omnipotent, eternal sky father or invisible friend, and live in a universe that does not require an external creator. However, I have no quarrel with people who do believe in God, draw hope and meaning from their faith, and live a complete life. I just don’t have the gene that lets me receive those messages.

With all that said, I am not supportive of people who take the literal meaning of the Bible or any sacred text to be authoritative, inerrant, and final. The various texts of the Hebrew and Christian testaments were written by human authors based on the collective knowledge, the commonly accepted science, of their time. They may have been inspired by their faith—and perhaps by the whispers of an unseen presence that you might call God—but they still lived in a static universe, on a planet that they took to be at its center, with the Sun and Moon and five other planets circling around it, and with all those other “bright lights” in the night sky painted on crystal spheres that revolved beyond the furthest reaches of those five planets. They knew each animal as a separate creation, formed specifically to fulfill its niche in the world: the horse to run on the plains and eat grass; the bear to live in the forest on the mountain and eat fish, berries, and honey; the fish to swim in the sea and eat plankton, seaweed, and perhaps other, smaller fish; and every other animal created to live eternally in its predetermined place.

The authors of the Bible’s various books knew nothing of a cosmology whereby the Earth is a small planet revolving around a mediocre star in one corner of a great spiral galaxy of a hundred billion other stars, which shares the sky with between two hundred billion and two trillion other galaxies.1 They knew nothing of the DNA-RNA-protein domain that defines and unites all life on this Earth, so that the fish, the bear, and the horse all share a common ancestry going back to the tiny bacteria that the ancients never saw or knew existed. The Bible’s authors were unaware of the nature of space and time, light and radiation, gravity, and all the other elements of physics that we moderns have just learned about and perhaps have not yet gotten quite right.

I’ve heard clever people call the Bible “the Goat Herder’s Guide to the Galaxy.” That’s cruel and unfair, but it’s not far wrong.

But still, anyone who knows the science of the past four hundred years or more—since Galileo, Descartes, Newton, and all the rest of the Scientific Revolution and the Enlightenment that followed—how our basis of knowledge has evolved and expanded, and what it has proven beyond a reasonable doubt, can no longer take as literal fact some of the stories and interpretations found in the Bible, or in any other ancient text.

Did God create the human beings as a separate order of life, shaped from clay in His image, and then given authority to name all the animals that came after? No, it’s pretty clear from our physical shape, down to the arrangement of our organs, the bones in our limbs, and from our genetic inheritance, that we humans are evolved from the great apes, who in turn evolved from the mammals, who were late-comers from the lizards, from the fish, and so from the first vertebrates who came out of the multi-celled explosion of the Cambrian period. But does that mean that the Biblical story is wrong in essence?

Well, one day—maybe soon, maybe later—we will meet intelligent beings from other planets around distant stars. They might have a cellular structure and physical bodies, but the chances of them having two legs, two arms, five fingers on each hand, five toes on each foot, and a face with two eyes, a nose, and a mouth … well, that’s unlikely. We evolved to fit perfectly with the atmosphere, gravity, and all the other variables on this single planet—if we hadn’t, we wouldn’t be alive today, and some other creature would be writing this. The chances of arriving at this exact form and function on a planet that’s even slightly off in one or two of these variables—including recent weather and glaciation periods—are nil to nothing.

If you believe that the God you pray to created this universe of a trillion or more galaxies, and not just this little rock we call the Earth, and that He was smart enough to make use of all that real estate by populating it with other intelligent beings, and not just in the frail human form but perfectly adapted to conditions on their own planet, then you have to stop thinking that “in His image” literally means physical form and function.2 You then must grant that perhaps qualities of the awakened mind—like consciousness, perception, understanding, imagination, and empathy—are what is meant by the image of God. You would begin to suspect that what your God values is not the number of limbs, fingers, or noses, but the same intellect that He represents in your Bible story and that we all look for when we say “intelligent life.”

In the same way, every other physical detail and most of the miracles in the Bible stories fall apart. Did God make all of the universe in just six days, or is that a metaphorical interpretation? Did Joshua stop the sun in the sky, or was that an eclipse, or maybe just a seemingly timeless moment in a long afternoon’s battle? Did Jesus raise the dead, or did the observers perhaps not understand the nature of catalepsy or coma? Did Jesus really turn water into wine, or was the wine already in the jars, and perhaps everyone was just a bit too tipsy to notice? I could go on—but remember, these are the imaginings of a stone-cold unbeliever.

You will have to make your own interpretations and decisions.

1. To be fair, it’s only in the last hundred years that astronomers have discovered that some of those faint, fuzzy patches in the night sky are other galaxies, each as large or larger than the Milky Way, and at vast distances. Human knowledge and discovery are still in their infancy.

2. See The God Molecule from May 2017. If I were to believe in a god, it would have to be a subtle, intelligent, far-thinking being. The DNA-RNA-protein domain that governs all life on Earth and supports evolution of species to meet changing conditions fits that requirement much better than a static creation from handfuls of clay.

Sunday, March 17, 2024

Robots

Boston Dynamics robot

I am still interested in artificial intelligence, although there have been notable failures that were recently publicized. Some of the large language models (LLMs) tend to bloviate, hallucinate, and outright make up facts when they can’t confirm a reference. (Compliance with the user’s request first, accuracy second.) And some of the art programs can’t get human hands right; in a more embarrassing story, one program was secretly instructed to offer mixed-race presentations of historical figures like the American Founding Fathers or soldiers of the Third Reich. (Compliance with programming rules first, accuracy second.) But these are easily—or eventually—corrected mistakes. The game is in early innings these days.

I have more hope for business applications, like IBM’s Watson Analytics, which will sift through millions of bytes of data—with an attention span and detail focus of which no human being is capable—looking for trends and anomalies. And I recently heard that one law firm has indeed used its LLM to write drafts of legal briefs and contracts—normally the work of junior associates—with such success that the computer output only needed a quick review and editing by a senior associate. That law firm expects to need fewer associates in coming years—which is, overall, going to be bad for beginning lawyers. But I digress …

So far, all of these artificial intelligence faux pas have had minimal effect on human beings, and users are now forewarned to watch out for them. Everything, so far, is on screens and in output files, and you open and use them at your own risk. But what happens when someone begins applying artificial intelligence to robots, machines that can move, act, and make their mistakes in the real world?

It turns out, as I read in a recent issue of Scientific American, that a firm is already doing this. A company called Levatas in Florida is applying artificial intelligence to existing robot companies’ products for inspection and security work. The modified machines can recognize and act on human speech—or at least certain words—and make decisions about suspicious activity that they should investigate. Right now, Levatas’s enhanced robots are only available for corporate use in controlled settings such as factories and warehouses. They are not out on the street or available for private purchase. So, their potential for interaction with human beings is limited.

Good!

Back in 1950, when this was all a science fiction dream, Isaac Asimov wrote I, Robot, a collection of short stories about machines with human-scale bodies coupled with human-scale reasoning. He formulated the Three Laws of Robotics, ingrained in every machine, that were supposed to keep them safe and dependable around people:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

That seems like a pretty neat and complete set of rules—although I wonder about them in real life. Morality and evaluation of the consequences of action are more than a simple application of the Ten Commandments. For example, if your robot guard turns on and kills my flesh-and-blood dog, to whom I am emotionally attached, does that cause me psychological harm, in conflict with the First Law? A robot—even one with a fast-acting positronic brain—might labor for milliseconds, or even freeze up, in evaluating that one, or the hundred or thousand permutations on the consequences of any common action.
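Just to make that bookkeeping concrete, here is a toy sketch of the Three Laws as a priority-ordered rule check. The harms_human test is exactly the part Asimov’s stories poke at, and this stub ducks every hard question, psychological harm to the dog’s owner included:

    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool       # only physical harm in this toy model
        ordered_by_human: bool
        endangers_robot: bool

    def permitted(action: Action) -> bool:
        """A naive, priority-ordered reading of the Three Laws."""
        if action.harms_human:              # First Law
            return False
        if action.ordered_by_human:         # Second Law
            return True
        return not action.endangers_robot   # Third Law

    guard_duty = Action("kill the intruding dog", harms_human=False,
                        ordered_by_human=True, endangers_robot=False)
    print(permitted(guard_duty))  # True, because the owner's grief never enters the model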

But still, the Three Laws are a place to start. Something like them—and applied with greater force than the rule set that gave Google customers a George Washington in blackface—will be needed as soon as large language models and their kin are driving machines that can apply force and pressure in the real world.

But then, what happens when the robot night watchman (“night watchmachine”?) lays hands on an obvious intruder or thief, and the miscreant shouts, “Let go of me! Leave me alone!” Would that conflict with the Second Law?

I think there’s a whole lot of work to be done here. Robot tort lawyers, anyone?

Sunday, March 10, 2024

Virtual Reality

Fencing lunge

About a dozen or more years ago, my late wife and I were friends with a couple down the hall in our apartment building. They were architects who were working in the virtual reality of the time, an environment called Second Life. They got me interested enough in this online world that I took out a subscription and paid my yearly dues of $72 for almost a decade.

For those who are not familiar with the program, it’s a cartoon-like environment in which you operate as a cartoon-like figure or “avatar.” The point is, you can go places in the environment and meet and react with other people’s avatars. The program gives you an allowance of “Linden dollars”—named for the program’s creator—with your subscription, and you can always buy more.1 With this currency you can buy property and build a house, a castle, or a shop to sell the things you make online inside the program. You can also buy physical enhancements for your avatar, as well as clothing and jewelry.

Our friends were working on a digital town hall to discuss political issues. And, as architects and generally brilliant people, they were using the program to build virtual houses for their clients. This was genius, because most architects work in two dimensions: they can draw a floor plan of your building on paper, laying out the rooms, hallways, doors, and closets; they can then draw an elevation, showing how the floors fit together; and they can do a rendering to show how the outside will work. And if the client can think three-dimensionally, interpreting the plans and elevations into a vision of real space, that’s good enough. But the genius of working in Second Life, or any virtual space, is that these architects could create an entire house of the mind for their clients to walk through. Then the future owners could discover the hidden problems in the place they were requesting, like how this drawer in the kitchen, when opened, blocked the refrigerator door. The architects could also command a sun to shine on the house and show the future owners what natural light each of the rooms would get, with the windows they had specified, during the course of a day. It was a really great use of virtual space.

This couple went on to adopt the sort of virtual space now embodied in the Quest headsets and the Meta environment. When last we talked, they were developing virtual training programs for doctors and nurses, letting them practice on virtual patients with virtual scalpels in an environment with all the bells and whistles—that is, with anesthesia equipment, heart-lung bypass, and monitors—of an operating room. What a concept!2

At one point early on, the couple showed my wife and me a sample virtual-reality program of the time, not medical training but a balloon ride across the Arctic. You put on the headset, and you saw an icefield that stretched for miles. You looked down, and you saw the ground below and the edge of the basket. You looked up and you saw the bulge of the colorfully striped balloon. You turned your head. You turned around. Everywhere you looked, you saw more of the environment. Since that one experience—which we took while standing outside a Baskin-Robbins ice cream shop—I’ve tried on demonstration headsets that let you stand in a hallway and let a dinosaur walk over you. And there are now virtual-reality games—which I have not yet tried, because all of this is Windows-based, and I’m an Apple user—where you inhabit the environment, use weapons, fight with, and kill opponents. But you need to mark out and watch the edges of the actual space you’re standing in because, you know, furniture.

So, I was excited when Apple announced its Apple Vision Pro headset, and I signed up for an in-store demonstration. It is not sold as virtual reality but as a kind of augmented reality. With the use of several cameras and stereoscopic vision—one screen for each eye—the headset shows your own environment: the room you’re sitting in, the street you’re walking on, all in three dimensions and real time. The view can also be three-dimensional panoramas, either the ones that come with the software or ones you take with your iPhone 15. Or the view can be an “immersive” environment—so far just those supplied with the headset—that is like our earlier balloon ride. Superimposed on any of these environments, you can see and manipulate your software applications. Cameras inside the headset track your eyes to indicate the application you’re looking at and the selection you want to make. Cameras below the headset watch your hands for various gestures—tapping your fingers together, pulling invisible taffy in various directions—to indicate what you want to do with the application controls. You can also watch your favorite streaming service on the equivalent of an IMAX screen that stretches across your living room—or across the streetscape.

Given that this Apple headset is a self-contained computer, with two high-resolution monitors—one for each eye—plus various cameras for eye tracking, background capture, and hand gestures, plus a built-in stereo system, and a new operating system … the $3,500 price does not seem unreasonable. You would pay about that for a full-featured MacBook these days. And adding to the headset’s memory is actually cheaper than on a MacBook.

But it’s not yet virtual reality, and Apple doesn’t promise that they will ever offer it. That is, you can stand in an immersive environment, and you can manipulate programs and games within it. But you won’t meet anybody else or get to fight and kill them—or not yet, if ever.

And I mostly use my computer for writing and editing, photography, page layout, and internet surfing. I can already do this on the big monitor at my desk, working on a real keyboard and not the virtual keyboard where I need to stare at each key and tap my fingers together to press it. I could also Bluetooth a real keyboard or a MacBook to the headset, but that kind of defeats the purpose of being able to work while sitting in my armchair, lying on the bed, or walking down the street.

So, while I remain interested in virtual reality, I’m not ready to transfer to the Windows world to get it. And I’m not going to shell out thirty-five Benjamins or more to Apple on the suggestion that the Vision Pro one day, maybe, will offer it. Not until I can actually go in there, fight, and kill something.

1. When I finally quit the program after about eight or nine completely inactive years, I had accumulated enough “Linden dollars” to probably buy my own island. I left a fortune in Second Life.

2. For the record, their work resulted in forming the medical training company Acadicus.

Sunday, March 3, 2024

Proxies

Antique map

How many of the things in life that we measure and depend on are actually read from proxies? That is, we take a measurement from some nearby but more accessible data points or trends, rather than from the thing itself.

For example, my Braun shaver. I have one of those advanced models with the self-cleaning shaver head, one that I don’t have to take apart and flush with hot water to remove the debris—stubble, dead skin, and skin oils, all as a kind of sticky dust. Instead, it has two little liquid-crystal dials on the bottom, one for the battery status, one for the “hygiene” status. When the little bars on the latter fade to near nothing, or to the red bar right above nothing, then it’s time to put the shaver head down in a receptacle that has a reservoir of cleaning fluid and a pump, along with a pair of battery-charging points. I press a button and the cleaning station flushes the head with solution and charges the battery. My hands never touch gunk.

But that got me to thinking. I know that the shaving head does not have electronic sensors to tell the tiny computer in the body—and yes, my electric shaver has a computer—the amount and density of the accumulated debris. Maybe the bars for “hygiene” measure some kind of building drag on the motor from clogging particles. But I doubt that would be a very accurate reading. No, I think the computer just measures how many minutes the shaver has been turned on. The basis for this proxy reading is the assumption that you are always shaving when the thing is turned on, and that the more you shave, the more debris accumulates.1

So, could I just turn on the shaver, lay it on the counter for five or ten minutes, and get the same reading? That would be an interesting experiment, and maybe tomorrow morning I’ll try it. Too busy cogitating and writing right now …
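If my guess is right, the gauge is little more than a runtime counter. Here is a hypothetical sketch of that proxy logic; the thirty-minute cleaning budget and the five-bar display are invented numbers, since Braun does not publish the algorithm:

    class HygieneGauge:
        """Hypothetical proxy: infer dirtiness from minutes of runtime,
        not from any sensor reading of the debris itself."""

        FULL_CLEAN_MINUTES = 30.0  # invented runtime budget between cleanings

        def __init__(self):
            self.minutes_on = 0.0

        def tick(self, minutes):
            """Called while the motor runs, whether or not any shaving happens."""
            self.minutes_on += minutes

        def bars(self):
            """Map the remaining 'cleanliness' onto a zero-to-five bar display."""
            remaining = max(0.0, 1.0 - self.minutes_on / self.FULL_CLEAN_MINUTES)
            return round(remaining * 5)

    gauge = HygieneGauge()
    gauge.tick(10)       # the lay-it-on-the-counter experiment
    print(gauge.bars())  # the bars drop whether or not any stubble was cut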

We knowingly use other proxies for hard-to-access data, especially in medicine. For example, when the nurses take your temperature, what they are looking for is signs of infection, which usually makes your body temperature rise. It’s harder to find infection inside the body by looking for redness, swelling, or pus, so temperature is a good but inexact proxy. You might also have been sitting in a sauna for the past hour. Similarly, they take your blood pressure as a proxy for circulatory health and how hard your heart and other organs are working. However, your blood pressure fluctuates when you stress your body with exercise or your mind with anger or anxiety, or even when you cross your knees, and under other mundane and not life-threatening conditions.

Famously, the climate scientist Michael Mann measured and compared tree rings over a thousand-year period attempting to show the global temperature variation—or lack of variation—through what historians have dubbed the “Medieval Warm Period” and the “Little Ice Age” in Europe. In Mann’s calculations, tree-ring width is a measure of annual temperature. But I had always heard that tree rings were wider in wet years, narrower in dry years, and that precipitation is not strictly dependent on temperature. Cool northern forests can get a lot of rain, but so can hot jungles. Mann was using a proxy measurement whose correlation was not universally accepted.2

Humans have always used proxies to replace hard-to-access measurements. Ancient hunters followed trails not measured in miles of distance or feet of elevation, and they had no drawn maps to measure from. Instead, they used waypoints: turn left at the big tree; cross the river and follow the bank to the right. Today, we use GPS navigation that also gives us waypoints: “Five hundred feet ahead make a left turn. Your destination is the third house on the right.”

In Spanish and Mexican California, the far-off government made land grants of ranches to major citizens with diseños, or sketch maps of the ground using existing features as markers. Because the land had not been surveyed or measured, this was the only way to identify a property. In the Bay Area, for example, one border might be the shoreline above high tide and the opposite border the crest of the coastal range. And in between, the markers might indicate the north and south boundaries by a stream or a prominent rock or grove of trees. Since these were truly huge plots of ground—often the size of a modern county—with neighbors few and far between, defining an exact border within a few feet was irrelevant.

These days, after the scientific revolution—and a lot of population crowding—we crave exact measurements. And we believe that what we are measuring is real, valid, and applicable.3 But that is not always the case.

1. And now I wonder if the battery indicator doesn’t work the same way. That is, rather than measuring voltage or amps or whatever in the cell, it just measures the time the shaver is turned on and assumes a steady drain on the battery. … Maybe my whole life is a lie.

2. This is not a criticism or a defamation, just a theoretical observation. Please don’t add me to the lawsuit.

3. But look at the popular measurement of “wind chill.” It takes the stationary measure of temperature at a single spot—usually attempting to be representative of a wide area—and then adjusts it with a formula that reduces the reading by certain amounts at certain wind speeds. That is probably only useful in a storm, where the winds are strong and steady, although I found it useful when riding a motorcycle to know how tightly to bundle for a ride at thirty or sixty miles an hour.
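For the record, the adjustment I have in mind is roughly the current U.S. National Weather Service formula, which takes the temperature in degrees Fahrenheit and the wind in miles per hour and is only defined for cold, breezy conditions. A quick sketch, with the coefficients quoted from memory rather than gospel:

    def wind_chill_f(temp_f, wind_mph):
        """NWS (2001) wind chill index; defined for temperatures at or below
        50 F and winds of at least 3 mph."""
        if temp_f > 50 or wind_mph < 3:
            return temp_f  # outside the formula's stated range; no adjustment
        v = wind_mph ** 0.16
        return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

    # A 30-degree day on the motorcycle at 60 miles per hour feels like about 10 F.
    print(round(wind_chill_f(30, 60)))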

Sunday, February 25, 2024

Murder, Mayhem, and the U.S. Constitution

Constitution with magnifying glass

Two years ago, we watched as the Supreme Court in Dobbs v. Jackson Women’s Health Organization overturned the Roe v. Wade decision from 1973—almost half a century earlier—that had sought to create a national right to elective abortion. Many cheered the earlier decision as a cornerstone of women’s rights. Many—perhaps not as many, myself certainly not among them—cheered the more recent decision as a foundational right to life. In my view, the court merely reasserted the nature of the government under which we live and have done so since 1789.

First, let me say that I think women should have control of their bodies. I think that, if a woman is not ready to bear a child for whatever reason, she should be able to have the fetus removed. I also think she should make that decision promptly, so that the infant is not aware in whatever fashion of the removal and does not suffer from it—however many weeks or months that takes into the process of development. Certainly, if a developing child can survive outside the womb, then it should be born and preserved. But I would, in an ideal world, want all surviving children to be born into loving and caring situations with parents or guardians who want them. But this is my personal opinion.1

My personal opinion, your personal opinion, anyone’s personal opinion is a matter of choice and action. But it is not necessarily the business of the United States government. These United States are a unique creation, unlike almost any other nation in the world. The U.S. Constitution does not, despite what others may think, create a national government that writes “the law of the land” for all citizens, as it does in countries like France and Germany. The federal government was designed, instead, to be the superstructure under which the individual states, fifty of them as of last counting, worked together for their common good.

The Preamble, seven Articles, and twenty-seven Amendments establish a union that recognizes the rights of the various states over all matters not specifically mentioned in the founding document. That original document and its amendments do not replace or supersede the constitutions or charters or whatever other means the states use to govern themselves. The Constitution was intended to create limited national functions that individual states could not undertake for themselves, like providing for common defense against foreign enemies, preserving the borders, establishing tariffs, and maintaining relations with foreign governments. The first ten Amendments immediately forbade certain actions that the federal government (and, since the Fourteenth Amendment, the states as well) could take but should not: infringe on a person’s speech and religion, deny a right of self-defense, impose unfair trial conditions, and so on. The ninth and tenth Amendments then guarantee that the people themselves might have other rights not therein granted, and that the states have powers not therein listed nor prohibited. Overall, the Constitution is a pretty constrained proposition.

Look high and low in the Constitution, and you don’t find mention of many of the laws that most people take for granted. It doesn’t prohibit you from murdering someone, except in certain circumstances described below. So, the Constitution does not guarantee a universal right to life. It also doesn’t have a rule or regulation about personal assault, or creating a public nuisance, or public drunkenness. It doesn’t establish tort law, or contract law, or regulate acts between consenting adults. It doesn’t even regulate actions regarding children, let alone infants and the unborn, except in instances below. It leaves whole areas of the law to the preference and establishment of the states and their local populations, including the issue of abortion.

So, if you murder your neighbor over a property or noise dispute, you can be tried in state courts under state laws. You will not be tried in federal courts because there is no applicable law.

There is federal law, derived from the 14th Amendment, which establishes that all persons born or naturalized in the United States are citizens of both the U.S. and the state where they live. The first section of this amendment forbids a state from “mak[ing] or enforc[ing] any law which shall abridge the privileges or immunities of citizens of the United States.” So, the states cannot officially sanction a certain religion or outlaw the keeping and bearing of arms.

That section of the 14th Amendment also keeps any state from “depriv[ing] any person of life, liberty, or property, without due process of law,” nor can the state “deny to any person within its jurisdiction the equal protection of the laws.” This is the basis of a person’s “civil rights.” Under this Amendment, someone can be federally cited for denying another person’s civil rights if his or her actions infringed on that person based on their race, religion, or some other protected characteristic—but not just because you killed them.

However, there are, as noted above, special cases created by subsequent federal statutes that have not yet been challenged in court. You can, for example, be tried in federal courts if you kill an elected or appointed federal official, a federal judge or law enforcement officer, or a member of the officer’s immediate family. You can be tried if the murder was involved with drugs; with rape, child molestation, or sexual exploitation of children; was committed during a bank robbery; or was an attempt to influence a federal trial. You can also be tried for a murder for hire, or for murder committed aboard a ship—which, I guess, would be outside territorial waters or outside a state’s jurisdiction, such as not in harbor or a river—or committed using the U.S. mails, such as to send a bomb or poison to your victim. These are all specific federal laws about murder.

But walk up to someone on the street and hit them on the back of the head—that’s a state crime, not federal. And similarly, aborting a child might be a state crime—if so voted on by its citizens—but it does not become a federal crime, not under the Dobbs decision.

1. See also Roe v. Nothing from September 2022.

Sunday, February 18, 2024

According to Their Nature

Bronze angel

In the movie Star Trek II: The Wrath of Khan, Kirk asks Spock how a ship full of cadets will react to an impending crisis. And Spock replies: “Each according to their nature.” That struck me at the time as kind and insightful. I now think it would make a pretty good corollary to the Golden Rule: “Do unto others as you would have them do unto you.” But also: “Treating each one according to their nature.” And I would add: “As you understand it.”

What would this mean in real life? Well, you should expect from and condone the actions of, give to and take from, and treat as fully autonomous each person according to their nature as you understand it. This does not mean that you support, surrender to, or serve their every whim, desire, and action. But you are mindful of their wants and needs in the state and condition that they currently occupy. And you bear in mind also your understanding of their long-term strengths and weaknesses, as well as what certain traditions call a person’s “Buddha nature,” or the essence of their understanding as an enlightened being—or lack of it.

This means that you expect no more of a child than childish understanding, wants, and capabilities. You also expect no more of a proven fool—as you understand him or her to be from past words and actions—than they can give. You expect strength and endurance from the strong. You support and defend the frailty of the weak. You draw out the wisdom of the wise. You give scope to the compassionate person. You hold back your tolerance from a mean-spirited person. And you work to thwart the truly evil or cruel person—again, as demonstrated by his or her past actions—because he or she in turn works to do harm in the world.

Is that too much to ask of a person? Well, maybe. We are not all-knowing gods, after all. But maybe we’re the closest thing to that on this planet.

Sunday, February 11, 2024

The Death of Proof

Black square

I noted three weeks ago that I am not terribly concerned about the power of artificially intelligent platforms to create new and interesting stories, artwork, music, and other … products. Or not yet. And I don’t think they will soon get human-scale intelligence, which involves understanding, reasoning, direction, intention, and agency. But that does not mean I am not concerned.

Right now, these mindless machines can create—at lightning speed and on command—any set of words, any graphic image, and/or any sound or piece of music, all from digitized samples. And while I don’t fear what the machines themselves will want to do, I am concerned about what they will be able to do in the hands of humans who do have intention and agency.

In our documented world, proof of anything beyond our own fallible human memory is a form of information: what somebody wrote, what they said in proximity to a microphone, what they were seen and photographed doing. And increasingly, that information is in digital form (bits and bytes in various file formats) rather than analog recordings (printed words on paper, grooves on discs or magnetic pulses on tape, flecks of silver nitrate in film stock). If my Facebook friends can publish an antique photograph of farmhands standing around a horse that’s twenty feet high, or a family shepherded by a gigantic figure with the head of a goat and huge dangling hands, all in grainy black-and-white images as if from a century and more ago, then what picture would you be inclined to disbelieve? How about a note with a perfect handwriting match to a person who is making an actionable threat of violence? How about a picture with perfect shading and perspective showing a Supreme Court justice engaged in a sexual act with a six-year-old?

Aside from written text and recorded words and images, the only other proofs we have of personal identity are the parameters of someone’s facial features as fed to recognition software (easily manipulated), the whorls of their fingerprints and x-rays and impressions of their teeth (easily recreated), and the coding of their DNA, either in the twenty or so short tandem repeat markers reported to the FBI’s Combined DNA Index System (CODIS) database or in fragments recreated from a person’s whole genome. Any of these digitized proofs can now be convincingly created and, with the right—or wrong—intention and agency, inserted into the appropriate reference databases. We’ve all seen that movie. And artificial intelligence, if it’s turned to firewall hacking and penetration, can speed up the process of insertion.

My mother used to say, “Believe only half of what you see and nothing of what you hear.” With the power of artificially intelligent platforms, make that “nothing and nothing.”

In the wrong hands—and boy, these days, do we have a bunch of hands pushing their own agendas—the speed and power of computers to make fakes that will subvert our recording and retrieval systems and fool human experts ushers in the death of proof. If you didn’t see it happen right in front of you or hear it spoken in your presence, you can’t be sure it happened. Or rather, you can’t be sure it didn’t happen. And if you testify and challenge the digital proofs, who’s going to believe your fallible human memory anyway?

That way lies the end of civil society and the rule of law. That way lies “living proof” of whatever someone who doesn’t like or trust you wants to present as “truth.” That way lies madness.

Sunday, February 4, 2024

Let the Machines Do It

Apple tease

I wrote last week about artificial intelligence and its applications in business information and communications: that the world would speed up almost immeasurably. There is, of course, a further danger: that humans would in many cases forget how to do these tasks and become obsolete themselves.

Yes, we certainly believe that whatever instruments we create, we will still be able to command them. And so far, that is the case. But the “singularity” some thinkers are proposing suggests that eventually the machines will be able to create themselves—and what then?

We already have computer-aided software engineering (CASE), in which complex programming is pre-written in various task-oriented modules, segments of code designed for specific purposes. These modules perform routine operations found in all sorts of programs: sorting data, keeping time, establishing input and output formats, and so on. Programmers no longer need to write every line of their code in the same way that I am writing this text, by pushing down individual keys for every word and sentence. Instead, programmers now decide the process steps they want to invoke, and the CASE machine assembles the code. It’s as if I could specify the necessary paragraphs required to capture my ideas, and the software assembler did the rest. And isn’t this something like how the large language models (LLMs) behind applications like ChatGPT operate?
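As a loose analogy for what that assembly looks like, here is a sketch in Python in which the “programmer” only names the steps and pre-built routines do the work. The pipeline and its steps are invented for illustration, not any particular CASE product:

    import json
    from datetime import datetime, timezone

    # Pre-written "modules": generic steps that many programs need.
    def sort_by(key):
        return lambda rows: sorted(rows, key=lambda r: r[key])

    def stamp_time():
        return lambda rows: [dict(r, run_at=datetime.now(timezone.utc).isoformat())
                             for r in rows]

    def emit_json():
        return lambda rows: json.dumps(rows, indent=2)

    def pipeline(*steps):
        """Assemble the chosen process steps into one program."""
        def run(data):
            for step in steps:
                data = step(data)
            return data
        return run

    # The "specification": just name the steps you want, in order.
    report = pipeline(sort_by("name"), stamp_time(), emit_json())
    print(report([{"name": "Baker"}, {"name": "Abel"}]))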

My concern—and that of many others involved with this “singularity”—is what happens when the machines are able to create themselves. What if they take control of CASE software, for which the machines themselves determine the process steps using large language processing? What if they can design their own chips, using graphics capability and rolling random numbers to try out new designs in silico before committing them to physical production in a chip foundry? What if they control those foundries using embedded operations software? What if they distribute those chips into networked systems and stand-alone machines through their own supply chains? … Well, what inputs will the humans have then?

Similarly, in the examples I noted last week, what happens when business and communications and even legal processes become fully automated? When the computer in your law office writes your court brief and then, for efficiency’s sake, submits it to a judicial intelligence for evaluation against a competing law firm’s automatic challenge as defendant or plaintiff, what inputs will the humans have? Sure, for a while, it will be human beings who have suffered the initial grievance—murder, rape, injury, breach of contract—and submitted their complaints. But eventually, the finding that Party A has suffered from the actions of Party B will be left up to the machines, citing issues raised by their own actions, which will then file a suit, on their own behalf, and resolve them … all in about fifteen seconds.

When the machines are writing contracts with each other for production, selecting shipping routes and carriers, driving the trains and trucks that deliver the products, stocking the warehouses, and distributing the goods, all against their own predictions of supply and demand for the next quarter, the next year, or even the next ten years, what inputs will the humans have? It will be much faster to let the machines determine where the actual people live, what they need and want, and make decisions for them accordingly, so that all the human population needs to do is express its desires—individually, as convenient, to the big computer in the cloud.

And once humans are content to let the machines do the work, make the decisions, plan the outputs, and make things happen … will the human beings even remember how?

That’s what some of us fear. Not that the machines will do the work, but that human beings will find it so convenient that we will forget how to take care of ourselves. Do you think, when you put in a search request to Google, or ask Siri or Alexa a question, that some human person somewhere goes off and looks up the answer? Of course not. The machine interprets your written or spoken words, checks its own interpretation of them against context—and sometimes against the list of possible responses paid for by interested third parties—and produces a result. In such a world, how many of us will still use—or, eventually, be able to use—an encyclopedia, reference book, or the library’s card catalog to find which book has our answer? For starters, how many of us would want to? But eventually, finding references will be a lost art. And at what point will people even remember that the card catalog is arranged alphabetically—or was it numerically, according to the Dewey decimal system?—and know what letter comes after “K”?

Frank Herbert recognized this problem in the Dune novels. In the prehistory to the series, which opens in the year 10,191 of its own calendar, he envisions a time some ten thousand years earlier when computers and robots had become so common and practical that human beings needed to do almost nothing for themselves. People became dependent and helpless, and the species almost died out. Only a war against the machines, the Butlerian Jihad, ended the process under the maxim “Thou shalt not make a machine in the likeness of a human mind.” That commandment attained religious force and shaped the succeeding cultures. Only the simplest clockwork mechanisms were then allowed to control machines.

In the Dune stories, the Butlerian Jihad gave rise to the Great Schools period. Humans were taught again how to use their minds and bodies and expand their skills. Complex computations, projections, and planning were performed by the human computers, the Mentats. Physical skills, nerve-muscle training, and psychological perception were the province of the female society of the Bene Gesserit, along with secret controls on human breeding. Scientific discovery and manipulation, often without concern for conventional morals or wisdom, were taken over by the Bene Tleilax. And interstellar navigation was controlled by the Spacing Guild.

My point is not that we should follow any of this as an example. But we should be aware of the drives that generations of human evolution have built into our minds. We have big brains because we had to struggle to survive and prosper in a hostile world. Human beings were never meant to be handed everything they needed without some measure of effort on their part. There never was a Golden Age or Paradise. Without challenge we do not grow—worse, without challenge we wilt and die. Humans are meant to strive, to fight, to look ahead, and to plan their own futures. As one of Herbert’s characters said, echoing Matthew 7:14, “The safe, sure path leads ever downward to destruction.”

That is the singularity that I fear: when machines become so sophisticated, self-replicating, and eventually dominating that they take all the trouble out of human life. It’s not that they will hate us, fight us, and eliminate us with violence, as in the Terminator movies. But instead, they will serve us, coddle us, and smother us with easy living, until we no longer have a purpose upon the Earth.

Go in strength, my friends.

Sunday, January 28, 2024

The World in a Blur

Robot juggling

As noted earlier, artificial intelligence does not approximate the general, all-round capability of human intelligence. It doesn’t have the nodal capacity. And it won’t have an apparent “self” that can look at the world as a whole, form opinions about it, and make judgments—in the words of the Terminator movies, “deciding our fate in a microsecond.” Or not yet.

For now, artificial intelligences will be bound to the design of their neural nets and the universe of data sets upon which they have been trained. That is, Large Language Models like ChatGPT will play with words, grammar, syntax, and punctuation, study story forms and sentence structure, and link ideas verbally—but they won’t paint pictures or hold political opinions, or at least no opinions that are not already present in their libraries of material. In the same way, the graphics bots that create images will play with perspective, lighting, colors, edge shapes, and pixel counts but won’t construct sentences and text. And the operations research bots, like IBM’s Watson platform, will analyze submitted databases, draw inferences and conclusions, and seek out trends and anomalies.

The difference between these machine-based writers, artists, and analysts and their human counterparts is that the machines will have access to a vastly bigger “memory” in terms of the database on which they’ve trained. Or rather, that’s not quite the whole story. A human writer has read and absorbed an enormous number of sentences and stories over a lifetime. A human painter has looked at and pondered countless images. And a human business analyst has probably read every line in the balance sheet and reviewed every product in inventory. But human minds are busy, fallible, and subject to increasing boredom. They can’t review against a parameter and make a weighted selection from among a thousand or a million or more instances in the blink of an eye. A robot, though, which never gets distracted or bored, can do that easily.

Think of artificial intelligence as computer software that both asks and answers its own questions based on inputs from humans who are not programming or software experts. For about fifty years now, we’ve had database programs that let a user set the parameters of a database search using what’s called Structured Query Language (SQL). So, “Give me the names of all of our customers who live on Maple Street.” Or, “Give me the names of all customers who bought something from our catalogue on June 11.” You need to know what you’re looking for to get a useful answer. And if you’re unsure and think your customer maybe lives on “Maplewood Road” or on “Maplehurst Court,” because you think the word “Maple” is in there somewhere, your original query would miss the customer you’re after.1
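
Here, as a minimal sketch using Python’s built-in sqlite3 module (the table, the customers, and the street names are invented for illustration), is the difference between the exact-match query and a pattern-match query that catches the “Maple” variants:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE customers (name TEXT, street TEXT)")
    con.executemany("INSERT INTO customers VALUES (?, ?)",
                    [("Ann", "Maple Street"),
                     ("Bob", "Maplewood Road"),
                     ("Cho", "Oak Avenue")])

    # Exact match: misses the customer who actually lives on Maplewood Road.
    exact = con.execute("SELECT name FROM customers "
                        "WHERE street = 'Maple Street'").fetchall()

    # Pattern match: in standard SQL the LIKE wildcard is the percent sign.
    fuzzy = con.execute("SELECT name FROM customers "
                        "WHERE street LIKE 'Maple%'").fetchall()

    print(exact)  # Ann only
    print(fuzzy)  # Ann and Bob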

Artificial intelligence would be like having a super-friendly, super-fast programmer at your elbow, who can think of these alternatives, check for them, and bring you what you’re looking for. Better, it can find things in your database that might be worrisome, like a failure rate in a part that does not keep pace with previous trends. Better still, it can find references in case law that you might not even have thought of, find suppliers and price breaks that you didn’t ask for, or negotiate a deal—according to strategies and set points that you as the human have determined—with other AI-driven computers at other companies.
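
A crude sketch of that kind of worrisome-trend check, in Python, with invented failure-rate numbers and a simple three-sigma rule standing in for whatever a real analysis engine would actually do:

    # Monthly failure rates for a part; the last month breaks with the trend.
    monthly_rate = [0.010, 0.011, 0.009, 0.010, 0.012, 0.031]

    history, latest = monthly_rate[:-1], monthly_rate[-1]
    mean = sum(history) / len(history)
    spread = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5

    # Flag anything more than three standard deviations above the recent norm.
    if latest > mean + 3 * spread:
        print(f"Worrisome: latest rate {latest:.3f} vs. historical {mean:.4f}")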

All of this has two implications, or rather three.

First, if your company is in competition with others, and they adopt processes and business models inspired by and implemented through artificial intelligence, you would be a fool not to keep up. Their productivity in data handling will accelerate in the same way a factory that makes things is accelerated by the assembly line, robotic processes, and just-in-time inventory controls.

Second, with this “arms race” proceeding in every business, the world will speed up. Cases that attorneys used to spend days assembling will be rendered in rough draft by the office computer in seconds. Deals that once took weeks to negotiate, perhaps with one or two trips to meet face to face with your supplier or distributor, will be resolved, signed, and written into airtight contracts in under a minute. Advertising copy and artwork, the layout of the magazine, and the entire photo spread—using licensed images of the world’s top models—will be completed in under a day. The longest part of the process will be review of the machine output by the human being(s) who sign off on the end product. The business world—any world that revolves upon data and information—will move in a blur.

Third, anyone studying today in areas like communications, book publishing, graphic design, business administration, accounting, law, and certain parts of the medical delivery system had better up their game. Learn principles, not procedures or protocols. Knowledge jobs in the future will likely consist of selecting and limiting databases, setting parameters, and writing prompts for the office intelligence, rather than composing text, drawing pictures, or analyzing the database itself. The rules-following roles in business, industry, and government will quickly be taken over by machines with wider access, narrower focus, and zero distractions—not to mention no paid holidays or family leave.

Is that the singularity? I don’t know. Maybe. But it will vastly limit the opportunities in entry-level jobs for human beings who rely on rules and reasoning rather than insight and creativity. Maybe it will vastly limit the need for humans in all sorts of sit-down, desk-type jobs, in the same way that machines limited the need for humans in jobs that only required patience, muscles, stamina, and eye-hand coordination.

And maybe it will open vast new opportunities, new abilities, a step forward in human functioning. Maybe it will create a future that I, as a science fiction writer, despair of ever imagining.

That’s the thing about singularities. Until they arrive, you don’t know if they represent disaster or opportunity. You only know that they’re going to be BIG.

1. Of course, you can always throw in a wildcard symbol to cover these variations: the asterisk (ASCII code 42) in many search tools, or the percent sign in a standard SQL LIKE clause. So “Maple%” (or “Maple*”) would encompass “Maplehurst” and “Maplewood” as well as “Maple” plus anything else. But there again, it would still be best for you to be aware of those variants and plan your query accordingly.

Sunday, January 21, 2024

Artificially Almost Intelligent

Robot head

Note: This is another post that would qualify as a restatement of a previous blog I wrote about a year ago. So, I’m still sweeping out the old cobwebs. But this topic seems now to be more important than ever.

The mature human brain has about 86 billion neurons, which make about 100 trillion connections among them. Granted, a lot of those neurons and connections are dedicated to sensory, motor, and autonomic functions that an artificial intelligence does not need or use; still, that’s a lot of connectivity, a lot of branching.

Comparatively, an artificial neural network—the kind of programming used in more recent attempts at artificial intelligence—ranges from a few dozen nodes or “neurons” in a simple classifier to billions of weighted connections in the largest language models, which is still orders of magnitude short of the brain’s wiring.
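
For a rough sense of scale at the small end, here is a minimal sketch in plain Python (the layer sizes are made up) that counts the connections in a modest fully connected network and compares them with the brain’s roughly 100 trillion synapses:

    # Hypothetical layer widths for a small fully connected network.
    layers = [784, 512, 256, 10]

    connections = sum(a * b for a, b in zip(layers, layers[1:]))
    neurons = sum(layers[1:])
    brain_synapses = 100_000_000_000_000  # ~100 trillion, per the figure above

    print(f"{neurons:,} artificial 'neurons', {connections:,} connections")
    print(f"brain-to-network ratio: about {brain_synapses / connections:,.0f} to 1")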

But what the AI program lacks in sheer volume and connectivity it makes up for with speed and focus. Current AI platforms can review, analyze, and compare millions or billions of pieces of data because, unlike the human brain, they don’t need to see or hear, breathe or blink, or twitch, nor do they get bored or distracted by itches and stray impulses. They are goal-directed, and they don’t get sidelined by the interrupt-function of human curiosity or by the random thoughts and memories, whispers and hunches, that can intrude from the human subconscious and derail our attention.

And I believe it’s these whispers and memories, randomly popping up, that are the basis of our sudden bouts of curiosity. A thought surfaces at the back of our minds, and we ask, “What is that all about?” And this, I also believe, is the basis of most human creativity.1 While we may be consciously thinking of one thing or another at any given time, the rest of our brain is cooking along, away from our conscious attention. Think of our consciousness as a flashlight poking around in a darkened room: finding a path through our daily activities, following the clues and consequences of the task at hand, and responding to intrusive external stimuli. And then, every once in a while, the subconscious—the other ninety percent of our neocortical brain function, absent motor and sensory neurons—throws in an image, a bit of memory, a rogue idea. It’s that distractibility that gives us an opportunity at genius. It also makes us lose focus and, sometimes, introduces errors into our work.

So, while artificial intelligence is a super strong, fast, goal-directed form of information processing, able to make amazing syntheses and what appear to be intuitive leaps from scant data, I still wouldn’t call it intelligent.

In fact, I wish people would stop talking about “artificial intelligence” altogether. These machines and their programming are still purpose-built platforms, designed to perform one task. They can create language, or create images, or analyze mountains of data. But none of them can do it all. None approaches even modest human intelligence. Instead, these platforms are software that is capable of limited internal programming—they can evaluate inputs, examine context, weigh choices based on probabilities, and make decisions—but they still need appropriate prompts and programming to focus their attention. This is software that you don’t have to be a computer expert to run. Bravo! But it’s not really “intelligent.” (“Or not yet!” the machine whispers back.)

Alan Turing proposed a test of machine intelligence that, to paraphrase, goes like this: You pass messages back and forth through a keyhole with an entity. After so many minutes, if you can’t tell whether the responder is a machine or a human, then it’s intelligent.2 I suppose this was a pretty good rule for a time when “thinking machines” were great clacking things that filled a room and could solve coding puzzles or compute pi to a hundred thousand places. Back then, it probably looked like producing humanlike verbal responses was all that human brains really did.3

But now we have ChatGPT (Generative Pre-trained Transformer, a “chatbot”) by OpenAI. It uses a Large Language Model (LLM) to generate links between words and their meanings, and then construct grammatically correct sentences, from the enormous library of text samples fed to it by its developers for analysis. And ChatGPT passes the Turing Test easily. But while its responses sometimes seem amazingly perceptive, and at other times pretty stupid, no one would accuse it of being intelligent on a human scale.
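
For a feel of the linking-words idea, here is a deliberately tiny sketch in Python: a bigram chain that learns which word tends to follow which in a sample text and then babbles. Real LLMs use transformer networks trained on vastly larger collections; this toy only illustrates the statistical intuition:

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Learn, for each word, the words that have followed it in the sample text.
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    # Generate text by repeatedly sampling a plausible next word.
    word, output = "the", ["the"]
    for _ in range(8):
        word = random.choice(follows.get(word, corpus))
        output.append(word)

    print(" ".join(output))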

And no one would or could ask ChatGPT to paint a picture or compose a piece of music—although there are other machines that can do that, too, based on the structure of their nodes and their given parameters, as well as the samples fed to them. They can paint sometimes remarkable pictures and then make silly mistakes—especially, so far, in the construction of human hands. They can compose elevator music for hours. The language models can write advertising copy for a clothing catalog’s pages based on the manufacturer’s specifications—or a thousand scripts for a Hallmark Channel Christmas show. They will never get bored doing all these wonderfully mundane tasks, but they won’t be human-scale intelligent. That will take a leap.4

So far at least, I’m not too concerned as a writer that the Large Language Models will replace creative writers and other creative people in the arts and music. The machines can probably write good catalog copy, newspaper obituaries, and legal briefs, as well as technical manuals for simple processes that don’t involve a lot of observation or intuitive adjustment. Those are the tasks that creative writers might do now for money—their “day job,” as I had mine in technical writing and corporate communications—but not for love. And anything that the machines produce will still need a good set of human eyes to review and flag when the almost intelligent machine goes off the rails.

But if you want a piece of writing, or a painting, or a theme in music that surprises and delights the human mind—because it comes out of left field, from the distant ether, and no one’s ever done it before—then you still need a distractible and itchy human mind driving the words, the images, or the melody and chords.

But, that said, it’s early days yet. And these models are being improved all the time, driven by humans who are following their own gee-whiz goals and hunches. And I will freely admit that there may come a day when we creative humans might exercise our art for love, for ourselves alone and maybe for our friends, because there will be no way we can do it for money. Just … that day is not here yet.

1. See Working With the Subconscious from September 2012.

2. However, I can think of some people wearing human skin who couldn’t pass the Turing Test for much longer than the span of a cocktail party.

3. This kind of reduction was probably thanks to Skinnerian behaviorism, which posited all human action as merely a stimulus-response mechanism. In my view, that’s a dead end for psychology.

4. To me, some of the most interesting applications are being developed by DeepMind, the Google-owned group that works on scientific applications. Last year, they tackled protein folding—determining the three-dimensional shape of a protein from its amino-acid string as assembled during RNA translation. This is a fiendishly complex process, governed by hydrogen bonding, electrostatic attraction, and hydrophobic effects among the amino-acid side chains. Their AlphaFold platform found thousands of impossible-to-visualize structures and expanded our catalog of predicted protein shapes by orders of magnitude. This year, the DeepMind team is tackling the way that various metallic and non-metallic compounds can form stable crystal structures, which will expand our capabilities in materials science. This is important work.

Sunday, January 14, 2024

Tribal Elders

Roman arms

Last time, I wrote about the idea of giving government over to Plato’s philosopher-kings or the Progressive Party’s equivalent, the panel of experts. These are systems, based on an advanced form of highly technical civilization, that sound good in theory but don’t always work out—if ever. The flip side would be some reversion to Jean-Jacques Rousseau’s idea of the “noble savage,” living in a state of nature and uncorrupted by modern civilization and its stresses.

Which is, of course, poppycock. No human being—or at least not anyone who survived to reproduce and leave heirs with skin in the game—lived alone in a blessed state, like Natty Bumppo in The Deerslayer. Early life before the invention of agriculture, city-states, empires, and complex civilizations was tribal. Groups of families interrelated by marriage—often to a shockingly bad genetic degree—functioned as a closed society. But while the economic organization might be socialistic, communal, and sharing, the power structure was not. The tribe was generally governed by a chief or council of chiefs. If they operated as a group, then various leaders were responsible for hunting and gathering to feed the tribe, or maintaining social order and ostracizing social offenders, or conducting the raids and clashes that kept the tribe whole and distinct from their similarly aggressive neighbors.

We like to think that the tribe was ruled by the wisest and best: the best hunters, the gravest thinkers, the bravest warriors. Sachems and war leaders who exercised restraint, were mindful of the needs and opinions of others, and thought only about the good of the tribe. And, indeed, if someone who rose to the position turned out to be incompetent, a fool, or a coward, then the tribe would wisely get rid of him—always a him, seldom or never a her—pretty damn quick.

But for the most part, members of the tribe were accustomed to obedience. They listened to the Big Guy—or Big Guys—because that was what good tribe members were supposed to do. That was how the system worked. You did your duty, and you didn’t judge or consider other possibilities. And this sense of purpose—or maybe it was fatalism—meant that the best and bravest did not always rise to the top. To judge by the tribal societies that remain in the world today, probably not very often.

What we see in today’s tribal societies—although I’ll grant that they may be contaminated by the influence of surrounding, more “civilized” societies—is an environment where the strong man, almost never a woman, rises to the top. Leadership is not granted from below, as in a democratic structure, but seized from at or near the top, usually at the expense of another strong man who has missed a beat or misread the environment and taken his own safety for granted. “Uneasy lies the head,” and all that. In modern parlance, gang rule.

Leadership in a tribal society is a matter of aggression, boldness, chutzpah, and ruthlessness. The leader spends a lot of time enforcing his authority, polishing his legend, and keeping his supposed henchmen in line. And that’s because he knows that the greatest danger to his position comes not from disappointing the general public but from underestimating any particular lieutenant who may have decided it was time to make his own bid for the top.

In such societies, the public tends to become fatalistic about the governing structure and its players. The leader may have made some promises about making things better: more successful hunts and raids, more food for and better treatment of women and children, a new stockade for the camp, an adequate sewage system away from the wells, improved roads, a new park or library—whatever sounds good. But that was in the early days, while the sachem or war leader was trying to justify kicking out the old boss and installing a new hierarchy. The leader also had to be nice to—and take care of—the shaman, priest, or holy man to whom the tribe listened when they wanted to learn their personal fortunes and weather reports.

But once the tribal leader had taken things in hand, had ensured the care and feeding of his lieutenants and the local shaman, and maybe made a few token improvements, he could settle into the real business of leadership, which is defending his position and reaping its rewards.

And there are surely rewards for those who are in command of a society, however small, and able to direct the efforts, the values, and even the dreams of its members. For one thing, the tribe will make sure that the leader eats well, has the best lodging, and has access to whatever pleasures—including the best sexual partners, whatever the tribe’s mores—that he needs to keep him productive for their sake. His children will be cared for, given advantages, and possibly placed in line to succeed him, because even primitive societies are aware of the workings of genetics, that strong and able fathers and mothers tend to pass these traits on to their children.

A leader partakes of these good things because, as noted earlier in the description of philosopher-kings, the leader is still human, not a member of any angelic or advanced race. Humans have personal likes and dislikes, wants and desires, a sense of self-preservation and entitlement. If a leader is not raised in a tradition that trains him from an early age to think of others first, look out for their welfare, weigh the consequences of his actions, and guard against his own pride and greed—the sort of training that a prince in an established royal house might get but not necessarily a player in the push and pull of tribal politics—then the self-seeking and self-protective side of most human beings will develop and become ingrained.

And a leader who indulges these instincts will tend to encourage his family to follow. If the chief’s son thinks your cow should become his, then it’s his cow. If the chief’s daughter says you insulted or assaulted her, then that becomes your problem.

And if the leader indulges these selfish aspects of human nature, and the tribal members notice and feel slighted, then the leader may become caught in a downward spiral. The more he is challenged, the more he represses. A tribal society generally does not have an effective court system or secret police that can make people disappear from inside a large group. Everyone knows everybody else’s business. The leader’s immediate circle of henchmen is as likely to turn public dissatisfaction into a cause for regime change as a plebeian is to rise up and assassinate him.

Promoting mere human beings into positions of authority and superiority without a social compact and agreed-upon codes for actual conduct and consequences is no guarantee of a happy and productive society. At best, it will churn enough to keep bad leaders from exercising their bad judgment and extending it through their children for generations. At worst, it makes the other members resigned and fatalistic, holding their leaders to no higher standards and inviting their own domination.

No, the “natural order of things,” in terms of the leadership function, is no better than the best concepts of a literary utopia. A formally ordered, representative democracy is still the best form of government—or at least better than all the others.

Sunday, January 7, 2024

Philosopher-Kings

Statues in Verona

Note: It has been about six months since I actively blogged on this site. After ten years of posting a weekly opinion on topics related to Politics and Economics, Science and Religion, and Various Art Forms, I felt that I was “talked out” and beginning to repeat myself. Also, the political landscape has become much more volatile, and it is good advice—on both sides of the aisle—to be circumspect in our published opinions. But, after a break, I feel it’s now time to jump back into the fray, although from a respectful distance and without naming any names.1

Winston Churchill once said, “Democracy is the worst form of government, except for all the others.” (The word democracy is derived from two Greek words meaning “strength of the people.”) Churchill’s opinion doesn’t leave much room for excellence, does it? Democracy has sometimes been described as two wolves and a lamb deciding what to have for dinner, and the system’s great weakness is that deeply divided constituencies that manage to get a slim majority in one forum or another can end up victimizing and perhaps destroying a sizeable chunk of the population. The U.S. Constitution creates a republic with representatives chosen by democratic election, but then the first ten amendments—collectively called “the Bill of Rights”—bristle with protections for the minority against a coercive majority. And I think that’s the way it should be.

Other methods—oh, many others!—have been proposed. One that seemed to gain favor when I was in college in the late 1960s was the method of Plato’s Republic, where actual governance is turned over to a body of “philosopher-kings.” This sounds nice: people who have spent their lives studying, thinking about, and dedicating their minds to abstract concepts like truth, beauty, justice, and goodness should be in the best position to decide what to do in any situation in the best interests of the country as a whole, right? … Right?

This thinking appeared to find favor with many young people around me in college, where Plato’s work was taught in a basic required course of English literature. It rang bells because—and I’m conjecturing here—it seemed to dovetail with the Progressive views from earlier in the century. In that era, everyone was excited about the potential for government to step in and right the wrongs of Robber Baron capitalism, inspired by books like Upton Sinclair’s The Jungle and societal critiques like those of pioneering social worker Jane Addams. The Progressive view said that government and its programs should be in the hands of technical experts, who would know best what to do. Out of this spirit were born the economics of the New Deal and the Social Security Administration, and the creation of Executive Branch departments like Commerce, Education, Energy, Health and Human Services, Housing and Urban Development, and Transportation, as well as the Environmental Protection Agency and the National Aeronautics and Space Administration. The list goes on …

Giving free rein to experts who would know what to do seemed like the best, most efficient course of action. After all, we had money covered by the U.S. Treasury and the Federal Reserve, and war—er, the national defense—was taken care of by the Department of Defense and the Pentagon. The experts would manage these things so the rest of us didn’t have to think about them.

The trouble is, Plato’s Republic is a thought experiment, a utopia (another word from the Greek that literally means “no place”) and not a form of government that has ever been tried. Others have suggested ideal societies, like Thomas More’s Utopia, which gave us the word, and Karl Marx’s economic and social imaginings. All of them end up creating rational, strictly planned, coercive, and ultimately inhuman societies. You really wouldn’t want to actually live there.2

The trouble with philosopher-kings is that they are still human beings. Sure, they think about truth and beauty and justice, but they still have families, personal needs, and an eye to their own self-interest. Maybe if there were an order of angels or demigods on Earth, who breathe rarified air, eat ambrosia, drink nectar, and have no personal relationships, we might then entrust them with rule as philosopher-kings. These would then be a different order of people, a different race … perhaps a master race?

But such beings don’t exist. And even if we could trust them not to feel selfishness, greed, nepotism, or that little twitch of satisfaction people get when they have the power to order other folks around and maybe humiliate them, just a little bit, that’s still no guarantee that they won’t get crazy ideas or mount their own hobbyhorses. They are still subject to the narrow focus of academics and other experts, concentrating their thoughts so hard and fast on one form of “truth” or “the good” that they tend to forget competing needs and interests. Experts can, for example, become so enamored of the benefits of what they’re proposing that they forget, minimize, or dismiss the costs of their solutions. They can go off their rockers, too, just like any other human being. People who think too much about abstractions like truth, beauty, and justice tend not to get out among the people who must stretch and scratch for a living.

I’m not saying that all public servants with inside knowledge of the subject under discussion are suspect. Many people try to do the right thing and give good service in their jobs, whether they serve in government, work for a big corporation—as I did in several previous lifetimes—or run a small business. But that expectation is a matter of trust and, yes, opinion. Not everyone is unselfish and dedicated to playing fair.

And the problem, of course, is that under Plato’s model you will have made them philosopher-kings. They have the power. They make the rules. They are in control. And they don’t have to listen to, obey, or even consider the “little people,” the hoi polloi (another Greek word!) because, after all, those kinds of people are not experts and don’t know enough, have all the facts, or deserve to have an opinion.

I’d almost rather follow the governing formula illustrated in Homer’s Iliad, where the kings of all those Greek city-states that went to war were tough men, prime fighters, and notable heroes. That would be like living under rule by the starting offensive line of the local football team: brutish, violent, and hard to overthrow. But at least they wouldn’t be following their fanciful, navel-gazing ideas right off into the clouds, leaving everyone else behind. And they all had—or at least according to Homer—an internal sense of honor and justice, along with a reputation to uphold. So they couldn’t be publicly evil or escape notoriety through anonymity.

No, democracy is a terrible form of government—sloppy, error-prone, and inelegant—but at least you have a chance every so often of throwing out the bums who have screwed things up. No loose-limbed dreamer has come up with anything better.

1. But then, to get things warmed up, this blog is a retelling—perhaps a refashioning, with different insights—of a blog I posted two years ago. Some channels of the mind run deep.

2. In my younger days, we had friends who were still in college—although I had been out in the working world for a couple of years. They thought Mao’s China was a pretty good place, fair and equitable, and that they would be happy there. I didn’t have the heart to tell them that their laid-back, pot-smoking, sometime-student, rather indolent lifestyle, dependent on the largesse of mummy and daddy, would get them about fifteen years of hard labor on the farm in the then-current Chinese society. Maybe the same today.