Sunday, March 31, 2024

Quantum Entanglement and Other States of Mind

Butterfly Nebula

Okay, here is where I show either my great ignorance of science or a glimmer of common sense.

My understanding of quantum mechanics is based on reading various articles in science magazines, reading books about it for the lay reader, and watching The Great Courses lecture series on it. There may be things I don’t understand because I’m not a mathematician, but some of the claims seem to be more in the human mind than anything that’s going on in the universe.

Take the notion of quantum entanglement. Supposedly, two particles—two photons, say—can become entangled. Typically, this happens when the particles are created together. For example, when an excited rubidium atom decays, it can release two photons, and they are entangled. Or a photon passing through certain nonlinear crystals will split into an entangled pair. They will remain entangled until one or the other interacts with something—that is, generally until it is observed. And this entanglement, this connection, will persist across vast distances, and what happens to one of the pair, even at the far end of the galaxy, will be instantly communicated to the other. That is, the lightspeed restriction that special relativity places on the transmission of information is ignored. This was the “spooky action at a distance” that Einstein questioned.

Supposedly, if two particles are entangled, they will have complementary but opposite qualities. For example, if one entangled photon has “positive spin,” then the other will have “negative spin.” But according to quantum mechanics, the characteristics of any particle at the quantum scale cannot be determined except by observation. Further, the existence of any particle is not determined—is not fixed in time and space, is not concrete, is not “real” in the world—until it is observed. This includes its exact location in space, its direction of travel, and qualities like its spin state. So, a photon’s spin may not only be either positive or negative; the photon’s spin is both positive and negative—that is, in a quantum superposition of both states—until the photon is observed and its spin is measured.

In another case, if a stream of photons is passed through a two-slit experiment—some going through one slit in a shield, some through the other—their intersecting fields will create an interference pattern, like waves passing through a narrow harbor entrance. This interference yields a series of parallel lines on a screen beyond the shield with the two slits. The interference lines will be heavier in the middle of the series and lighter out at the ends, indicating that most of the photons travel relatively straight through. Still, the result will not be two isolated bands of hits but instead a diffraction scatter.

But according to quantum mechanics, if a single photon is fired at the two-slit experiment, it does not necessarily hit the screen opposite one slit or the other. Instead, it may randomly fall anywhere within that diffraction pattern. The single photon passes through both slits, its field interferes with itself, and it acts as if it is in two places at once, until it is observed hitting the screen in only one place.
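The fringe pattern described above follows the standard textbook form: an interference term set by the slit separation, multiplied by a single-slit diffraction envelope set by the slit width. Here is a minimal numeric sketch; the wavelength and slit dimensions are made-up illustrative values, not from any particular experiment.

```python
import math

def two_slit_intensity(theta, wavelength, d, a):
    """Relative intensity at angle theta for two slits:
    cos^2 interference term (slit separation d) times the
    single-slit diffraction envelope (slit width a)."""
    beta = math.pi * d * math.sin(theta) / wavelength   # interference phase
    alpha = math.pi * a * math.sin(theta) / wavelength  # diffraction phase
    envelope = 1.0 if alpha == 0 else (math.sin(alpha) / alpha) ** 2
    return math.cos(beta) ** 2 * envelope

# Illustrative values: green light, 10-micron slit spacing, 2-micron slits
wl, d, a = 532e-9, 10e-6, 2e-6
angles = [i * 1e-3 for i in range(-100, 101)]           # radians
pattern = [two_slit_intensity(t, wl, d, a) for t in angles]
# The central fringe is brightest, and the fringes dim toward the
# edges, matching the "heavier in the middle" description above.
```

A single photon lands somewhere in this pattern with probability proportional to the intensity, which is exactly the point of the single-photon version of the experiment.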

In a third case, some experiments with photons—including the famous Michelson-Morley experiment, which was used to disprove the idea that light traveled throughout the universe as a wave in a medium called “luminiferous ether”—employ partially silvered mirrors. These are mirrors that randomly reflect some photons and randomly pass others. If you set up a course of these mirrors, so that some photons take one path and some another, you can place detectors to see how many photons go which way. But interestingly, according to quantum mechanics, if you fire just one photon through the experiment, it will take both courses until it’s detected along one path or another. In this view, the photon’s position is everywhere in the experiment until frozen on one path by the act of detection or observation.

This idea of a particle at quantum scale being everywhere at once—with no fixed position, direction of travel, or defining characteristics until actually observed—is central to the nature of quantum mechanics. The physicists who practice in this field understand that the act of observing a tiny particle—a photon, electron, neutron, and so on, all of which are observed in flight—changes it. That is because you cannot observe anything that small without interfering with it—like hitting the detector screen beyond the slits or bouncing another particle off it in an electron microscope—and either stopping it in its tracks or deflecting it off toward somewhere else. The quantum world is not fixed or knowable until it is known by observation.

This is the example of Schrödinger’s cat. Seal a cat in a box with a vial of poison and a mechanism that breaks the vial when a radioactive isotope decays. Until you open the box, the cat is both alive and dead—a superposition of these two states—and the cat’s actual condition is not resolved until you observe it. This is taking the quantum physicist’s belief in “unknowability” to an extreme.

I believe that part of the basis for this mindset is that quantum mechanics is a mathematical system, built on equations based on probabilities. In mathematics, it’s hard to build an equation around a statement that says a value might be one thing or it might be another. Instead, you place a probability function in place of the necessary value. So, in the experiment with Schrödinger’s cat, the cat’s life or death has a probability based on the nature of the isotope and the length of time in the box. If the isotope has a half-life of ten thousand years, and the cat has been in the box ten minutes, there’s a high probability the cat is still alive. If the isotope has a half-life in seconds, like some isotopes of oxygen, then the cat is likely dead. But the probability function is not resolved until the cat is observed.
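The probability bookkeeping here is plain exponential decay: the chance that the isotope has not yet decayed after some elapsed time is 0.5 raised to the power of that time divided by the half-life. A small sketch (the function name and numbers are mine, chosen to match the examples above):

```python
def p_cat_alive(elapsed, half_life):
    """Probability the isotope has NOT yet decayed (so the cat lives):
    0.5 ** (elapsed / half_life), with both times in the same units."""
    return 0.5 ** (elapsed / half_life)

# Ten minutes in the box with a 10,000-year isotope: almost surely alive.
ten_minutes_in_years = 10 / (60 * 24 * 365)
print(p_cat_alive(ten_minutes_in_years, 10_000))  # very nearly 1.0

# Ten minutes with a two-minute half-life (oxygen-15 is about two
# minutes): five half-lives have passed, so the cat is likely dead.
print(p_cat_alive(10, 2))  # 0.03125
```

Until the box is opened, that number is all the formalism has to say about the cat.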

In the case of two entangled photons, the probability of either one being positive or negative spin is fifty percent, an even coin toss. And, in the mindset of quantum physicists, once the spin of one photon in the pair is established and fixed, the spin of the other is also fixed. The fifty-percent probability function collapses and all is known. The question in my mind is not whether the two photons communicate with each other across the spacetime of the span of a galaxy, but how the observer at one end can communicate the discovered state to the non-observing holder of the photon at the other. If the holder of the passive photon observes it, then yes, he will know its spin state and resolve the probability function to his satisfaction. He will also know instantly that the distant photon has the opposite spin. But he can’t communicate any of this to his partner holding the other photon until his message travels across the lightyears. So, big deal.

Say I cut a Lincoln head penny in half across the president’s nose. One half the coin shows his eyes and forehead; the other shows his mouth and chin. Without looking, I take each half-coin and seal it in an envelope. I give one to my partner, who takes it across the galaxy. If he opens his envelope and sees mouth-and-chin, he knows that I must have eyes-and-forehead. And vice versa. But I won’t know what I have—unless I wait eons for a light-speed signal from him—until I open my own envelope. The penny, existing in a classical, non-quantum world, has an established state whether I look or not. It does not exist in a superposition of both eyes-and-forehead and mouth-and-chin until one of us observes it.
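The penny analogy is easy to write out as a tiny simulation: a classical "hidden variable" model in which each envelope's contents are fixed at sealing time, so opening one instantly tells you the other without any signal traveling anywhere. (The names and structure here are my own illustration.)

```python
import random

def seal_envelopes(rng):
    """The coin halves are fixed when sealed -- a classical 'hidden
    variable' -- long before either envelope is opened."""
    halves = ["eyes-and-forehead", "mouth-and-chin"]
    rng.shuffle(halves)
    return halves[0], halves[1]  # (mine, my partner's across the galaxy)

rng = random.Random(42)
for _ in range(1000):
    mine, theirs = seal_envelopes(rng)
    # Opening my envelope resolves my ignorance, not the coin's state:
    # the anticorrelation was baked in at sealing time.
    assert mine != theirs
```

For what it's worth, this is essentially the "local hidden variable" picture, and the Bell-test experiments were designed to check where real entangled particles produce correlations that no model of this classical kind can reproduce.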

My point—and I certainly may be misunderstanding the essence of quantum mechanics—is that the concept of superposition, of probability functions, of tiny things being in two or more places, two or more states at once, and going nowhere until observed by human eyes and instruments is a thoroughgoing mindset. It’s a reminder to the quantum physicist that you don’t know until you observe. It says that the whole conjectural world of the very small is just that: conjecture, theory, and a mathematical construct until human instruments intervene to prove a thing is so or not so.

And that’s a good reminder, I guess. But taking it to the extreme of insisting that the cat is neither alive nor dead—even a very tiny cat who makes no noise and is otherwise undetectable—until you open the box … that calls into question the reality of the entire enterprise.

Sunday, March 24, 2024

Visions of God

Ancient of Days

First, let me say that I’m an atheist. Although raised in the Christian religion, specifically Protestant, by parents who did not go to church themselves, I have never heard the voice of God, don’t need an omniscient, omnipotent, eternal sky father or invisible friend, and live in a universe that does not require an external creator. However, I have no quarrel with people who do believe in God, draw hope and meaning from their faith, and live a complete life. I just don’t have the gene that lets me receive those messages.

With all that said, I am not supportive of people who take the literal meaning of the Bible or any sacred text to be authoritative, inerrant, and final. The various texts of the Hebrew and Christian testaments were written by human authors based on the collective knowledge, the commonly accepted science, of their time. They may have been inspired by their faith—and perhaps by the whispers of an unseen presence that you might call God—but they still lived in a static universe, on a planet that they took to be at its center, with the Sun and Moon and five other planets circling around it, and with all those other “bright lights” in the night sky painted on crystal spheres that revolved beyond the furthest reaches of those five planets. They knew each animal as a separate creation, formed specifically to fulfill its niche in the world: the horse to run on the plains and eat grass; the bear to live in the forest on the mountain and eat fish, berries, and honey; the fish to swim in the sea and eat plankton, seaweed, and perhaps other, smaller fish; and every other animal created to live eternally in its predetermined place.

The authors of the Bible’s various books knew nothing of a cosmology whereby the Earth is a small planet revolving around a mediocre star in one corner of a great spiral galaxy of a hundred billion other stars, which shares the sky with between two hundred billion and two trillion other galaxies.1 They knew nothing of the DNA-RNA-protein domain that defines and unites all life on this Earth, so that the fish, the bear, and the horse all share a common ancestry going back to the tiny bacteria that the ancients never saw or knew existed. The Bible’s authors were unaware of the nature of space and time, light and radiation, gravity, and all the other elements of physics that we moderns have just learned about and perhaps have not yet gotten quite right.

I’ve heard clever people call the Bible “the Goat Herder’s Guide to the Galaxy.” That’s cruel and unfair, but it’s not far wrong.

But still, anyone who knows the science of the past four hundred years or more—since Galileo, Descartes, Newton, and all the rest who launched the Scientific Revolution and the Enlightenment that followed—anyone who knows how our basis of knowledge has evolved and expanded, and what it has proven beyond a reasonable doubt, can no longer take as literal fact some of the stories and interpretations found in the Bible, or in any other ancient text.

Did God create human beings as a separate order of life, shaped from clay in His image, and then give them authority to name all the animals that came after? No, it’s pretty clear from our physical shape—down to the arrangement of our organs and the bones in our limbs—and from our genetic inheritance that we humans evolved from the great apes, who in turn evolved from earlier mammals, who were late-comers descended from the early reptiles, from the fish, and so from the first vertebrates who came out of the multi-celled explosion of the Cambrian period. But does that mean that the Biblical story is wrong in essence?

Well, one day—maybe soon, maybe later—we will meet intelligent beings from other planets around distant stars. They might have a cellular structure and physical bodies, but the chances of their having two legs, two arms, five fingers on each hand, five toes on each foot, and a face with two eyes, a nose, and a mouth … well, that’s unlikely. We evolved to fit perfectly with the atmosphere, gravity, and all the other variables of this single planet—if we hadn’t, we wouldn’t be alive today, and some other creature would be writing this. The chances of arriving at this exact form and function on a planet that’s even slightly off in one or two of those variables—including recent weather and glaciation periods—are slim to none.

If you believe that the God you pray to created this universe of a trillion or more galaxies, and not just this little rock we call the Earth, and that He was smart enough to make use of all that real estate by populating it with other intelligent beings, and not just in the frail human form but perfectly adapted to conditions on their own planet, then you have to stop thinking that “in His image” literally means physical form and function.2 You then must grant that perhaps qualities of the awakened mind—like consciousness, perception, understanding, imagination, and empathy—are what is meant by the image of God. You would begin to suspect that what your God values is not the number of limbs, fingers, or noses, but the same intellect that He represents in your Bible story and that we all look for when we say “intelligent life.”

In the same way, every other physical detail and most of the miracles in the Bible stories fall apart. Did God make all of the universe in just six days, or is that a metaphorical interpretation? Did Joshua stop the sun in the sky, or was that an eclipse, or maybe just a seemingly timeless moment in a long afternoon’s battle? Did Jesus raise the dead, or did the observers perhaps not understand the nature of catalepsy or coma? Did Jesus really turn water into wine, or was the wine already in the jars, and perhaps everyone was just a bit too tipsy to notice? I could go on—but remember, these are the imaginings of a stone-cold unbeliever.

You will have to make your own interpretations and decisions.

1. To be fair, it’s only in the last hundred years that astronomers have discovered that some of those faint, fuzzy patches in the night sky are other galaxies, each as large or larger than the Milky Way, and at vast distances. Human knowledge and discovery are still in their infancy.

2. See The God Molecule from May 2017. If I were to believe in a god, it would have to be a subtle, intelligent, far-thinking being. The DNA-RNA-protein domain that governs all life on Earth and supports evolution of species to meet changing conditions fits that requirement much better than a static creation from handfuls of clay.

Sunday, March 17, 2024

Robots

Boston Dynamics robot

I am still interested in artificial intelligence, although there have been notable, recently publicized failures. Some of the large language models (LLMs) tend to bloviate, hallucinate, and outright make up facts when they can’t confirm a reference. (Compliance with the user’s request first, accuracy second.) And some of the art programs can’t get human hands right or, in a more embarrassing story, one program was secretly instructed to offer mixed-race presentations of historical figures like the American Founding Fathers or soldiers of the Third Reich. (Compliance with programming rules first, accuracy second.) But these are easily—or eventually—corrected mistakes. The game is in early innings these days.

I have more hope for business applications, like IBM’s Watson Analytics, which will sift through millions of records—with an attention span and focus on detail of which no human being is capable—looking for trends and anomalies. And I recently heard that one law firm has indeed used its LLM to write drafts of legal briefs and contracts—normally the work of junior associates—with such success that the computer output needed only a quick review and edit by a senior associate. That law firm expects to need fewer associates in coming years—which is, overall, going to be bad for beginning lawyers. But I digress …

So far, all of these artificial intelligence faux pas have had minimal effect on human beings, and users are now forewarned to watch out for them. Everything, so far, is on screens and in output files, and you open and use them at your own risk. But what happens when someone begins applying artificial intelligence to robots, machines that can move, act, and make their mistakes in the real world?

It turns out, as I read in a recent issue of Scientific American, that a firm is already doing this. A company called Levatas in Florida is applying artificial intelligence to existing robot companies’ products for inspection and security work. The modified machines can recognize and act on human speech—or at least certain words—and make decisions about suspicious activity that they should investigate. Right now, Levatas’s enhanced robots are only available for corporate use in controlled settings such as factories and warehouses. They are not out on the street or available for private purchase. So, their potential for interaction with human beings is limited.

Good!

Back in 1950, when this was all a science fiction dream, Isaac Asimov wrote I, Robot, a collection of short stories about machines with human-scale bodies coupled with human-scale reasoning. He formulated the Three Laws of Robotics, ingrained in every machine, that were supposed to keep them safe and dependable around people:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

That seems like a pretty neat and complete set of rules—although I wonder about them in real life. Morality and the evaluation of the consequences of action are more than a simple application of the Ten Commandments. For example, if your robot guard turns on and kills my flesh-and-blood dog, to whom I am emotionally attached, does that cause me psychological harm, in conflict with the First Law? A robot—even one with a fast-acting positronic brain—might labor for milliseconds and even freeze up in evaluating that one, or in weighing the hundred or thousand permutations on the consequences of any common action.

But still, the Three Laws are a place to start. Something like them—and applied with greater force than the rule set that gave Google customers a George Washington in blackface—will be needed as soon as large language models and their kin are driving machines that can apply force and pressure in the real world.

But then, what happens when the robot night watchman (“night watchmachine”?) lays hands on an obvious intruder or thief, and the miscreant shouts, “Let go of me! Leave me alone!” Would that conflict with the Second Law?

I think there’s a whole lot of work to be done here. Robot tort lawyers, anyone?

Sunday, March 10, 2024

Virtual Reality

Fencing lunge

About a dozen or more years ago, my late wife and I were friends with a couple down the hall in our apartment building. They were architects who were working in the virtual reality of the time, an environment called Second Life. They got me interested enough in this online world that I took out a subscription and paid my yearly dues of $72 for almost a decade.

For those who are not familiar with the program, it’s a cartoon-like environment in which you operate as a cartoon-like figure or “avatar.” The point is, you can go places in the environment and meet and interact with other people’s avatars. The program gives you an allowance of “Linden dollars”—named for Linden Lab, the program’s creator—with your subscription, and you can always buy more.1 With this currency you can buy property and build a house, a castle, or a shop to sell the things you make online inside the program. You can also buy physical enhancements for your avatar, as well as clothing and jewelry.

Our friends were working on a digital town hall to discuss political issues. And, as architects and generally brilliant people, they were using the program to build virtual houses for their clients. This was genius, because most architects work in two dimensions: they can draw a floor plan of your building on paper, laying out the rooms, hallways, doors, and closets; they can then draw an elevation, showing how the floors fit together; and they can do a rendering to show how the outside will work. And if the client can think three-dimensionally, interpreting the plans and elevations into a vision of real space, that’s good enough. But the genius of working in Second Life, or any virtual space, is that these architects could create an entire house of the mind for their clients to walk through. Then the future owners could discover the hidden problems in the place they were requesting, like how this drawer in the kitchen, when opened, blocked the refrigerator door. The architects could also command a sun to shine on the house and show the future owners what natural light each of the rooms would get, with the windows they had specified, during the course of a day. It was a really great use of virtual space.

This couple went on to adopt the sort of virtual space now embodied in the Quest headsets and the Meta environment. When last we talked, they were developing virtual training programs for doctors and nurses, letting them practice on virtual patients with virtual scalpels in an environment with all the bells and whistles—that is, with anesthesia equipment, heart-lung bypass, and monitors—of an operating room. What a concept!2

At one point early on, the couple showed my wife and me a sample virtual-reality program of the time, not medical training but a balloon ride across the Arctic. You put on the headset, and you saw an icefield that stretched for miles. You looked down, and you saw the ground below and the edge of the basket. You looked up and you saw the bulge of the colorfully striped balloon. You turned your head. You turned around. Everywhere you looked, you saw more of the environment. Since that one experience—which we took while standing outside a Baskin-Robbins ice cream shop—I’ve tried on demonstration headsets that let you stand in a hallway while a dinosaur walks over you. And there are now virtual-reality games—which I have not yet tried, because all of this is Windows-based, and I’m an Apple user—where you inhabit the environment, use weapons, fight with, and kill opponents. But you need to mark out and watch the edges of the actual space you’re standing in because, you know, furniture.

So, I was excited when Apple announced its Apple Vision Pro headset, and I signed up for an in-store demonstration. It is not sold as virtual reality but as a kind of augmented reality. With the use of several cameras and stereoscopic vision—one screen for each eye—the headset shows your own environment: the room you’re sitting in, the street you’re walking on, all in three dimensions and real time. The view can also be three-dimensional panoramas, either the ones that come with the software or ones you take with your iPhone 15. Or the view can be an “immersive” environment—so far just those supplied with the headset—that is like our earlier balloon ride. Superimposed on any of these environments, you can see and manipulate your software applications. Cameras inside the headset track your eyes to indicate the application you’re looking at and the selection you want to make. Cameras below the headset watch your hands for various gestures—tapping your fingers together, pulling invisible taffy in various directions—to indicate what you want to do with the application controls. You can also watch your favorite streaming service on the equivalent of an IMAX screen that stretches across your living room—or across the streetscape.

Given that this Apple headset is a self-contained computer, with two high-resolution monitors—one for each eye—plus various cameras for eye tracking, background capture, and hand gestures, plus a built-in stereo system, and a new operating system … the $3,500 price does not seem unreasonable. You would pay about that for a full-featured MacBook these days. And adding to the headset’s memory is actually cheaper than on a MacBook.

But it’s not yet virtual reality, and Apple doesn’t promise that they will ever offer it. That is, you can stand in an immersive environment, and you can manipulate programs and games within it. But you won’t meet anybody else or get to fight and kill them—or not yet, if ever.

And I mostly use my computer for writing and editing, photography, page layout, and internet surfing. I can already do this on the big monitor at my desk, working on a real keyboard and not the virtual keyboard where I need to stare at each key and tap my fingers together to press it. I could also Bluetooth a real keyboard or a MacBook to the headset, but that kind of defeats the purpose of being able to work while sitting in my armchair, lying on the bed, or walking down the street.

So, while I remain interested in virtual reality, I’m not ready to transfer to the Windows world to get it. And I’m not going to shell out thirty-five Benjamins or more to Apple on the suggestion that the Vision Pro one day, maybe, will offer it. Not until I can actually go in there, fight, and kill something.

1. When I finally quit the program after about eight or nine completely inactive years, I had accumulated enough “Linden dollars” to probably buy my own island. I left a fortune in Second Life.

2. For the record, their work resulted in forming the medical training company Acadicus.

Sunday, March 3, 2024

Proxies

Antique map

How many of the things in life that we measure and depend on are actually read from proxies? That is, we take a measurement from some nearby but more accessible data points or trends, rather than from the thing itself.

For example, my Braun shaver. I have one of those advanced models with the self-cleaning shaver head, one that I don’t have to take apart and flush with hot water to remove the debris—stubble, dead skin, and skin oils, all as a kind of sticky dust. Instead, it has two little liquid-crystal dials on the bottom, one for the battery status, one for the “hygiene” status. When the little bars on the latter fade to near nothing, or to the red bar right above nothing, then it’s time to put the shaver head down in a receptacle that has a reservoir of cleaning fluid and a pump, along with a pair of battery-charging points. I press a button and the cleaning station flushes the head with solution and charges the battery. My hands never touch gunk.

But that got me to thinking. I know that the shaving head does not have electronic sensors to tell the tiny computer in the body—and yes, my electric shaver has a computer—the amount and density of the accumulated debris. Maybe the bars for “hygiene” measure some kind of building drag on the motor from clogging particles. But I doubt that would be a very accurate reading. No, I think the computer just measures how many minutes the shaver has been turned on. The basis for this proxy reading is a pair of assumptions: that you are always shaving when the thing is turned on, and that the more you shave, the more debris accumulates.1
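If the "hygiene" gauge really is just a timer, it could be sketched like this. Everything here is guesswork for illustration (the class, the five bars, the 30-minute budget); nothing is taken from Braun's actual firmware:

```python
class ShaverHygieneGauge:
    """Hypothetical sketch: the 'hygiene' bars as a pure runtime proxy.
    Nothing here senses actual debris; it only counts minutes on."""
    FULL_MINUTES = 30  # assumed runtime budget between cleanings

    def __init__(self):
        self.minutes_on = 0.0

    def tick(self, minutes):
        """Record shaver runtime, whether or not any shaving happens."""
        self.minutes_on += minutes

    def bars(self, total_bars=5):
        remaining = max(0.0, 1 - self.minutes_on / self.FULL_MINUTES)
        return round(remaining * total_bars)

gauge = ShaverHygieneGauge()
gauge.tick(12)        # several shaves' worth of runtime
print(gauge.bars())   # 3 -- bars drop with runtime, debris unseen
# Leaving the shaver running on the counter would drain the bars just
# the same, which is exactly the experiment proposed below.
```

The tell, of course, is that such a gauge drains identically whether the head is cutting stubble or humming in empty air.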

So, could I just turn on the shaver, lay it on the counter for five or ten minutes, and get the same reading? That would be an interesting experiment, and maybe tomorrow morning I’ll try it. Too busy cogitating and writing right now …

We knowingly use other proxies for hard-to-access data, especially in medicine. For example, when the nurses take your temperature, what they are looking for is signs of infection, which usually makes your body temperature rise. It’s harder to find infection inside the body by looking for redness, swelling, or pus, so temperature is a good but inexact proxy. You might also have been sitting in a sauna for the past hour. Similarly, they take your blood pressure as a proxy for circulatory health and how hard your heart and other organs are working. However, your blood pressure fluctuates when you stress your body with exercise or your mind with anger or anxiety, or even when you cross your knees, and under other mundane and not life-threatening conditions.

Famously, the climate scientist Michael Mann measured and compared tree rings over a thousand-year period attempting to show the global temperature variation—or lack of variation—through what historians have dubbed the “Medieval Warm Period” and the “Little Ice Age” in Europe. In Mann’s calculations, tree-ring width is a measure of annual temperature. But I had always heard that tree rings were wider in wet years, narrower in dry years, and that precipitation is not strictly dependent on temperature. Northern forests can get a lot of rain, but so can jungles. Mann was using a proxy measurement whose correlation was not universally accepted.2

Humans have always used proxies to replace hard-to-access measurements. Ancient hunters followed trails not measured in miles of distance or feet of elevation, and they had no drawn maps to measure from. Instead, they used waypoints: turn left at the big tree; cross the river and follow the bank to the right. Today, we use GPS navigation that also gives us waypoints: “Five hundred feet ahead make a left turn. Your destination is the third house on the right.”

In Spanish and Mexican California, the far-off government made land grants of ranches to major citizens with diseños, or sketch maps of the ground using existing features as markers. Because the land had not been surveyed or measured, this was the only way to identify a property. In the Bay Area, for example, one border might be the shoreline above high tide and the opposite border the crest of the coastal range. And in between, the markers might indicate the north and south boundaries by a stream or a prominent rock or grove of trees. Since these were truly huge plots of ground—often the size of a modern county—with neighbors few and far between, defining an exact border within a few feet was irrelevant.

These days, after the scientific revolution—and a lot of population crowding—we crave exact measurements. And we believe that what we are measuring is real, valid, and applicable.3 But that is not always the case.

1. And now I wonder if the battery indicator doesn’t work the same way. That is, rather than measuring voltage or amps or whatever in the cell, it just measures the time the shaver is turned on and assumes a steady drain on the battery. … Maybe my whole life is a lie.

2. This is not a criticism or a defamation, just a theoretical observation. Please don’t add me to the lawsuit.

3. But look at the popular measurement of “wind chill.” It takes the stationary measure of temperature at a single spot—usually attempting to be representative of a wide area—and then adjusts it with a formula that reduces the reading by certain amounts at certain wind speeds. That is probably only useful in a storm, where the winds are strong and steady, although I found it useful when riding a motorcycle to know how tightly to bundle for a ride at thirty or sixty miles an hour.
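For reference, the wind chill index used by the U.S. National Weather Service since 2001 is a simple formula in temperature (Fahrenheit) and wind speed (miles per hour), valid for temperatures at or below 50 degrees and winds of at least about 3 mph:

```python
def wind_chill_f(temp_f, wind_mph):
    """U.S. National Weather Service wind chill (2001 formula).
    Inputs: temperature in deg F (<= 50), wind speed in mph (>= 3)."""
    v16 = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v16 + 0.4275 * temp_f * v16

# A 40-degree day at motorcycle speeds:
print(round(wind_chill_f(40, 30)))  # 28
print(round(wind_chill_f(40, 60)))  # 25
```

Note the 0.16 exponent: doubling the wind speed from thirty to sixty miles an hour only shaves a few more degrees off, so the chill grows much more slowly than the speedometer does.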