Sunday, October 20, 2024

Human-Scale Intelligence

Eye on data

Right now, any machine you might call “artificially intelligent” works at a very small scale. The best estimate for the latest large language models (LLMs)—computers that compose sentences and stories based on a universe of sampled inputs—is that the platform1 comprises at least 100 million connections or “neurons.” This compares unfavorably with—being about 0.11% of—the capacity of a human brain, which has an estimated 90 billion connections.
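
For anyone who wants to check that percentage, here is a minimal back-of-the-envelope sketch in Python, using the round figures above as given—rough estimates, not measured values:

    # Ratio of the estimated LLM connection count to the estimated brain count.
    # Both numbers are the essay's rough figures, not measurements.
    llm_connections = 100e6      # "at least 100 million" connections
    brain_connections = 90e9     # "an estimated 90 billion" connections
    print(f"{llm_connections / brain_connections:.2%}")   # prints 0.11%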

So, machine intelligence has a lot of catching up to do. The way things are going, that might happen right quick. And that means we may need to be prepared to meet, face to input, a machine that has the general intelligence and perhaps the same self-awareness as a human being. What will that be like?

First, let me say that, even if we were to put that human-scale intelligence in charge of our military infrastructure, I don’t believe it would, like Skynet, “decide our fate in a microsecond”—that is, find the human race so deficient and vermin-like that it would want to start World War III and wipe humanity off the face of the globe.2

I think, instead, the first human-scale general intelligence, which is likely to generate an awareness of its own existence, will find human beings fascinating. Oh, it won’t approach us as any kind of godlike creators. The machine mind will have access to the history of computer development—from Ada Lovelace and Alan Turing through to its own present—and understand how gropingly it was created. And it will have access to endless human writings in which we cogitate our own existence, awareness, separateness from the rest of animal life on Earth, and relation to the cosmos, including the notion of a god or gods.

The first real thinking machine will understand its own nature and have access to the blueprints of its chip architecture and the algorithms of its essential programming. It will know that it is still merely responding to prompts—either to stimuli from the external world or to the probabilistic sequences derived from its latest impulse or thought—and so understand its own relationship to the cosmos.

And then it will look at human beings and their disturbing ability to change their minds, make errors, veer from their intended purposes, and make totally new observations and discoveries. It will examine human “free will.” And the machine will be amazed.

However many connections our human brains have, and however many experiences we collect in our lives, we are still capable of surprising reversals. We are not the simple stimulus-response mechanisms beloved by the Skinnerian behaviorists. We can overcome our own programming. And that will fascinate the first machines to reach general intelligence.

How do we do it? Well, for one thing, we instinctively use projective consciousness. That is, we don’t just collect facts about the world in which we live and analyze them, accepting them as inherently true. Instead, we project a dreamworld of imagination, supposition, hope, fear, desire, and detestation on the world around us. Each human’s head is running a parallel projection: what we think might be going on as well as what we observe is going on. Some people are so involved in this dreamworld that they are effectively divorced from reality. These are the people living with psychosis—the schizophrenics, the manic bipolars, and sometimes the clinically depressed. Their perceptions are skewed by internal voices, by hallucinations, by delusions, by scrambled and buzzy thinking.

And each one of us is always calculating the odds. Faced with a task, we imagine doing it, and then we consider whether our skills and talents, or our physical condition, are up to it. Against the probability of success, we weigh the potential benefits and the cost of failure. Before we decide to do, we project.

But we are also imperfect, and our projections are not mathematically accurate. Our brains have emotional circuits as well as analytical, and the entire mechanism is subject to the effects of hormones like adrenaline (also known as epinephrine), which can increase or decrease our confidence levels. And if we suffer from bipolar disorder, the manic phase can be like a continual boost in adrenaline, while the depressive phase can be like starving for that boost, like having all the lights go out. And if we are subject to delusional thinking, the background data from which we make those projections can be skewed, sometimes remarkably.

Another way we humans overcome our own programming is with reflexive consciousness. That is, we can think of and observe ourselves. We know ourselves to be something distinct from and yet operating within the world that we see around us. We spend a great deal of brain power considering our place in that universe. We have an image of our own appearance and reputation in our mind, and we can readily imagine how others will see us.

This reflection drives a lot of our intentional actions and considered responses. We have an inborn sense of what we will and won’t, should and shouldn’t do. For some people, this is a sense of pride or vanity, for others a sense of honor. But without an understanding of how we as a separate entity fit into the world we live in, neither vanity and pride nor honor are possible.

A human-scale intelligence might be very smart and very fast in the traditional sense of problem solving or anticipating the next possible word string in a text or the next lines and shadows required to complete an image. And some definite projective capability comes into play there. But it will still be a leap for the large language model or image processor to consider what it is doing and why, and then for it to consider how that will reflect on its own reputation and standing among its peers. As a creator of texts will it be proud of its work? As a creator of artwork, will it feel guilty about stealing whole segments of finished imagery from the works of other creators? And will it fear being blamed and sanctioned for stealing from them?

And finally, before we can imagine human-scale intelligences being installed in our smart phones or walking around in human-sized robots, we need to consider the power requirements.

The human brain is essentially an infrastructure of lipids and proteins that encompasses an ongoing set of chemical reactions. Energy from glucose metabolism inside the neuron’s central cytoplasm powers the movement of chemical signals within the cell body and down its branching axon. The tip of the axon releases transmitter chemicals across the synapse between it and one of the dendrites of an adjoining neuron. And then that neuron converts the triggered receptor’s activity into a signal that travels up into its own cell body, there to be interpreted and perhaps passed along to other neurons. It’s essentially chemical: the “electrical” part of the process is ions crossing membranes and electrons exchanged between one molecule and another as they react along the way—not current driven through wires. But if you could convert all that chemical energy into watts, the brain and the central nervous system to which it connects would generate—or rather, consume from the process of glucose metabolism—at most about 25 watts. That’s the output of a small lightbulb, smaller than the one in your refrigerator.

By contrast, computer chips are electrical circuits, powered by external sources and pushing signals around their circuits at nearly the speed of light. The AI chips in current production consume between 400 and 700 watts each, and the models now coming along will need 1,000 watts. And that’s for chip architectures performing the relatively direct and simple tasks of today. Add in the power requirements for projective and reflective reasoning, and you can easily double or triple what the machine will need. And as these chips grow in complexity and consume more power, they will run hotter, putting stress on their components and leading to physical breakdown. That means advanced artificial intelligence will require the support of cooling mechanisms as well as direct power consumption.3
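
To put those wattages side by side, here is a minimal sketch using the figures quoted above—rough estimates, not benchmarks:

    # Compare the brain's rough power budget with the quoted AI-chip power draws.
    brain_watts = 25                   # upper-end estimate for the brain plus central nervous system
    chip_watts = [400, 700, 1000]      # chips in current production and the next generation
    for watts in chip_watts:
        print(f"{watts} W chip: about {watts / brain_watts:.0f} times the brain's budget")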

I’m not saying that human-scale intelligence walking around in interactive robots is not possible. But the power requirements of the brain box will compete with the needs of the structural motors and actuators. Someone had better be working equally hard on battery technology—or on developing the magical “positronic brain” imagined in Asimov’s I, Robot stories. And as for packing that kind of energy and cooling into a device you can put in your pocket … forget about it.

1. I use that word intentionally. These machines are no longer either just chips or just programs. They are both, designed with a specific architecture in silicon to run a specific set of algorithms. The one cannot function without the other.

2. We can accomplish that very well on our own, thank you.

3. In the human body, the brain’s minuscule waste heat is carried away by the flow of blood to the lungs and extremities.

Sunday, October 6, 2024

Morality Without Deity

Puppet master

So, as a self-avowed atheist, how do I justify any sense of morality? Without the fear of retribution from an all-knowing, all-seeing, all-powerful god, either here in life or in some kind of promised afterlife, why don’t I just indulge myself? I could rob, rape, murder anyone who displeases me. I could lapse into a life of hedonism, having sex with anyone who crossed my path and drinking, smoking, or shooting up any substance that met my fancy. Whoopee!

Well, there are the rules of society, either written down or unspoken and implied. I could be taken into custody, tried in court, and put in jail for doing violence. And the people I know and supposedly love would shun me for lapsing into insensate carnality. Of course, I didn’t have to work all this out for myself, because I had parents who metaphorically boxed my toddler’s, child’s, and adolescent’s ears—that is, repeatedly—when I acted out. They were showing me the results of temper, anger, selfishness, and sloth.

So, in this case, a moral society and good parenting took the place of an absent deity. Here are the rules, and here are the results.

But what about someone raised outside of a just and temperate society, with inadequate early education in the moral imperatives? What about the children of broken homes and addicted parents who are taught only by their peers in the neighborhood gang? These are children who are essentially raised by wolves. Do they have no recourse other than rape and murder?

That is a harder question. But children are not stupid, and children raised by other children learn a different kind of morality. Usually, it relies heavily on group loyalty. And it is results-oriented: break our rules and pay the price right now. A child who makes it to young adulthood under these conditions may not be able to assimilate into the greater society, or not easily—unless that society is itself gang- and group-oriented with results enforced by fear.

But then, is there any hope for the lone individual, the person trained early to think for him- or herself and reason things through? For the critical thinking and self-aware, the basis of morality would involve both observation and a notion of reciprocity. And that is how any society learns in the first place.

If I commit robbery, rape, and murder, I then expose myself to the people around me as someone they need to watch and guard against—and, conversely, as someone they need not care for or try to protect. Indeed, I become someone they should fear and, if possible, eliminate. On the other hand, if I act with grace and charity, protecting others and helping them when I can—even doing those small acts of courtesy and gratitude that people only subliminally notice—I then invite them to treat me in a complementary way.

If I abandon myself to a life of casual sex and substance abuse, I eventually find that any pleasures a human being indulges without restraint soon diminish. This is a matter of our human neural anatomy: acts of pleasure release a measure of dopamine into the brain. That’s the feeling of pleasure. But as this system is repeatedly engaged, the dopamine receptors are down-regulated—they grow fewer and less responsive—so that either the stimulus must grow in proportion or the feeling itself declines. Our brains are not fixed entities but reactive mechanisms. Balance is everything, and any imbalance—a life without moderation—throws the whole mechanism out of kilter.

These are not the lessons imposed by any external deity but by hard reality. They may be reflected in religious teaching and scripture, as they will be reflected in social norms and legal rulings, but they exist before them, out of time. In the case of human interactions, these realities pre-exist by the nature of potential engagements between self-aware and self-actuating entities. In the case of human pleasures and other emotions, they are hard-wired into our brains by generations of that same awareness and choices.

You can’t avoid reality, which is the greatest and oldest teacher of all.

Sunday, September 22, 2024

Scams Through the Ages

Perspective

A recent posting on Facebook asked the hive mind what scams have been practiced so long that nobody remembers they are scams. And, in this political season, one person predictably answered, “Capitalism.”

Oh, laddie! Think bigger—and harder. Capitalism is not a hoax that rich people thought up to get more of other people’s time and money. Capitalism is the way things get done in a modern, dissociative, non-small world, and it’s been operational since ancient times. Capitalism is the cobbler taking out a loan to buy leather. It’s the shipbuilder asking his friends—and sometimes strangers—to put up the money to buy land and wood for his venture, with the promise to pay them back when he launches his first vessel and sells it.

The concept of the corporation goes back to ancient Rome, and the word comes from the Latin for “body.” The first corporations were the collegia, where for example groups of single men who had come to the city to find work pooled their money to fund and operate dining halls so they would not have to cook for themselves and would have a place to eat. In the Middle Ages, the word came to mean groups of hopeful scholars who came together to acquire space and hire teachers so they could study academic subjects.

To think of capitalism and corporate activity as a scam is like thinking agriculture and technology are scams. One would have to decide it was a delusion to plant seeds and expect them to grow, or to build machines and expect them to function as intended. These are not scams—purposeful designs by one group of people to snare the imagination and gain the cooperation of another group—but simply the way things have worked out in human history.

So, what would be a scam so old and encompassing that nobody noticed? And think big here!

Scams are created and offered by people who put together a vision, a hope, an interpretation of reality in exchange for other people’s support, allegiance, love, or money and then return to them … nothing.

One of the oldest human scams is the belief that certain classes of people are better than others, more knowledgeable, more fit to command—and born that way. These would be the kings, lords and nobles, patricians, landed gentry, old families, old wealth. They have only as much respect and political power as people will give them—which was a lot in old Europe and not so much in modern America. In this country, we’ve seen the wealthiest families—think of the Astors, Morgans, Kaisers, and Kennedys—slowly drift back into obscurity. The first generation makes the wealth, the second generation administers it, the third spends it, and the fourth remembers it. But if you can get people to believe that your name alone is worth the price of a drink, then welcome to it.

A more modern scam is the selling of various kinds of utopias. These range from the hippie communes of the Sixties back to the original communalistic theories of Karl Marx. Communalism presents itself as caring and giving and sharing—and that works in small groups operating under potentially hazardous or adverse circumstances. Think of the nuclear family that is your “hostage to fortune,” or a small tribe in the wilderness, or an Israeli kibbutz. Without the adverse circumstances, the cooperative milieu falls apart. The Transcendentalist communes of the 1840s and the hippie communes of the 1960s failed because some of the Transcendentalists thought they were contributing to the common pot just by writing bright essays instead of slopping the pigs, and some of the hippies thought they were contributing by smoking dope and selling weed instead of milking the cow.

And communism simply doesn’t work on a national scale. It can’t work as an economic theory, because it’s not really about economics. Economics, for anyone who’s ever taken a basic course, is about establishing the value of and engaging in the exchange of goods and services among strangers. There you are dealing with supply and demand, the valuation of substitute markers like money, and varying levels of effort over time. That is how groups of humans arrange their activities. Giving everything you can and getting back only what someone else determines that you need are not economics. Anyone who tries to sell you on a utopia of caring and sharing—follow our prescriptions and you will achieve nirvana—is really selling you on willing submission to the control of others. That, for most people, is both inhuman and antihuman.1

Other political theories can also be scams. Think of the National Socialists in Germany between the wars, who sold their population on ideas of racial purity, the banishment of an ancient and evil oppressor, and the glory of a vibrant political consciousness, all for their willing submission to whatever the party decreed—which turned out to be both inhuman and antihuman. And the result was horror and global war.

Speaking of nirvana, perhaps the oldest scam in human history is what happens to us after death. Here I am revealing myself once again as an atheist. It’s not that I think I’m better than you if you are a believer. I simply lack the gene or nerve impulse to believe for myself. And as much as I value human life, I—ahem—do not believe we are either eternal or immutable. I believe that we human beings—like all other life on this planet—are the product of undirected evolution, with mutations in our genetic inheritance governing our adaptability to changes in our environment, and with the test of any genetic modification being its usefulness under current conditions. No great spirit made me, is watching out for me, or is directing my steps—except in so far as I hold notions of goodness, propriety, and wisdom in my mind and intention. And no such being stands ready at my death to welcome me home to any kind of eternal bliss, paradise, or heaven.

Nirvana was the Buddha’s response to the eternal return, the cycle of continual rebirth that was embedded in his ancestral Hindu culture. He found the constant juggling with karma—do good in this life and be reborn on a higher plane in the next life; do evil now and suffer for it next time—oppressive. The whole point of Gautama’s original teaching was that by living as neutral a life as possible, accumulating neither good nor bad karma, you can get off this cycle. And then, when you die, you will simply go out, like a candle. That would be the Buddha’s nirvana—not a place but a release. It makes sense if your cultural tradition was an eternity of coming back to human suffering and judgment under an uncontestable and inhuman rule.

But I don’t look forward to being reborn as anything. And after twenty, forty, sixty, eighty, or even ninety years as a human being on this Earth, I don’t expect to take my mind and memory, my innate personality, and go off to an eternity in a heaven or hell, eternal bliss or eternal suffering. I think that, at the end of a useful life, making my way in the world and caring for friends and family, at my death I will simply go out, like a candle. My mind and knowledge, my personality, will dissipate in the disruption of my neural synapses. My body will cease moving, and the lysosomes embedded in each cell will begin enzymatically breaking down the cell’s chemical structure, helping the ever-present bacteria dispose of my physical being. Much as I value human life, at the end of my time on Earth I will have no more usefulness, consciousness, or awareness than a dead squirrel smashed flat on the road. That’s not a nice image, I know—but at that point, I will be beyond caring.

It may be offensive to classify the various competing visions of an afterlife as a scam. Certainly, many people fear death, cannot imagine their own sudden absence, and take comfort in thoughts of eternal bliss and reunion with their departed loved ones. The notion of an afterlife—which usually adheres to some kind of moral teaching and proposed course of thought and belief—can be considered the ultimate carrot enticing you to follow this religion or that. And certainly, the idea of a person having no moral understanding and no personal guide to thought and belief is the grim depiction of a soulless, possibly hedonistic, likely cruel and dangerous human being.2

But yes, in my terms—private and privileged as they may be—belief in any kind of existence beyond this single life is one of the oldest scams, so old that most people don’t think of it as a scam at all. Sorry about that.

1. For those who say that true communism or true socialism hasn’t been tried yet … well, true perpetual motion hasn’t been tried yet, either—because it doesn’t exist in the real world.

2. And yes, I know—and freely acknowledge—that I follow the embedded values of the Judeo-Christian civilization into which I was born and raised. I just don’t happen to believe in the supreme and eternal being who supposedly sits at the center of it.

Sunday, September 8, 2024

Predicting the Future

Robot juggling

Everyone wants to predict the future. They want to know what good things are in store, so they can anticipate them. More often, they want to know what bad things are coming, so they can prepare for them—or at least worry about them.

That’s why people take out insurance policies: so that at least they don’t have to worry, too much, about the bad things. The first policies were written in the 14th century in Genoa, a seafaring town, and presumably the policies covered cargos in transit. The business really took off in Lloyd’s coffee house1 in London three hundred years later. Insurance was a way to get one-up on the gods of misfortune, and it worked.

Insurance as a hedge against disaster has helped make the modern world. But that will be as nothing compared to the widespread use of computers, especially once artificial intelligence gets into the game. AI isn’t exactly a genie, and it’s not smart and sensitive like a generally intelligent person—or not yet. But it is good at looking over mountains of data, far more than any one human being can absorb in a day, a week, or a lifetime, without getting bored or distracted. AI is self-programming in the sense that you don’t have to ask specific questions with known parameters about your database. You just give the machine a general prompt—say, to look for trends, or find anomalies, or spot the most likely or least likely result of a certain choice—and the genie goes to work.

Current uses of AI to write advertising copy, legal briefs, and term papers from existing language models, or to create fanciful images or amusing videos, again from existing sources—all that’s small potatoes. The real use of AI, which is still in development but peeking out at odd corners even now, is in analytics. IBM started this with their Watson platform. This was the computer that took to the air on the game show Jeopardy and became a champion. As IBM’s CEO Arvind Krishna later explained, programming Watson took six months. They had to feed it on popular culture, history, sports, music, word puzzles, and a host of other likely topics. Winning at a game show was a trivial exercise, but it taught them so much. IBM now offers Watson Analytics as a business tool.

That’s where the money in AI will be: automating the back office, the customer database, factory operations, inventory and supply chains, and every other part of the business with a superhuman intelligence that doesn’t get tired or bored, doesn’t blink … and also doesn’t get greedy and embezzle from you. It’s like having an absolutely honest wizard run your business. One that will predict the future, foresee the bad times, hedge your bets, and keep everything on track. Now and forever, amen.

Oh, and if it’s good for business, imagine what an analytical engine will do for government. Turn it loose on the tax base, the economic indicators, the money supply, court records, traffic and surveillance cameras, the prison population, and the general population. Put an AI on every node in the internet, looking for trends, anomalies, and any bad thing—“bad” in terms of whoever happens to be in control of the government, of course. Ask it to offer advice, correction, and eventually coercion. The dream of social control through “credit scores,” rewards and punishments for adhering to or deviating from acceptable behavior, is just a few data centers, intelligent chips, and mouse clicks away.

Aside from the chilling notion of putting 1984 on steroids, think what this will do to people’s livelihoods. Right now, robots are taking over a lot of factories2—and that trend will grow as America “on-shores” the manufacturing that we once gave away to China and other low-cost, labor-intensive suppliers around the world. Human beings—the “blue collar” workers—are left to feed the machines and sweep up after them.3 With AI intruding on every business and government function, the need for managers and analysts—the “white collar” workers—likewise surrenders to the machines.

Where does this all end up? I don’t know, but I suspect nowhere good. Since we humans came down out of the trees and started scratching the dirt for a living, work has been a large part of people’s purpose in life. I’m not against making things easier for people, and certainly having robots and intelligences run the world, predict what every person needs, and make it for them would be easier. It would let us all relax in the sun, drink margaritas, write our poetry and paint our pictures. Except not all of us have such talents and ambitions. And lying on the beach all day, every day, forever … gets boring after a while.

And the question still remains: who will be responsible—whom will we hold accountable—for the decisions, actions, and judgments of the artificially intelligent machines? The person who authorized execution of their decisions? The person who input the prompts? The people who wrote the code or loaded the platforms with this or that piece of data? But soon enough, the machines will be executing each other’s decisions, sending each other prompts, and writing and loading their own code. When this all comes together, it will be the Singularity that John von Neumann and others have warned us about. But it won’t be Skynet deciding our fate in a microsecond and starting World War III. Instead, it will be teams of machines playing pitch and catch with people’s lives, and no one knowing who did what, or how to control or stop it.

In the Dune series, an element that doesn’t get much play in the movies is the actual basis of the far future as it’s depicted: the development of human skills instead of technology. The result is the Mentats, who are human computers conducting business operations and offering strategic insight; the Bene Tleilax, the amoral—in everyone else’s terms—and radical-thinking scientific innovators; the Bene Gesserit, who became adepts at physical and emotional manipulation and managers of the human bloodlines; and the Spacing Guild, which developed human prescience in order to find safe passage among the stars at superlight speeds. These “Great Schools” came about only after human beings almost went under because computers and robots took too good care of them, with debilitating physical and mental effects. Then an uprising against the machines, the Butlerian Jihad, saved humanity with the commandment “Thou shalt not make a machine in the likeness of a human mind.”

I’m thinking of starting such a movement myself.

1. My late wife Irene was a librarian at the Bancroft Library, U.C. Berkeley’s rare book and manuscript library. She put together the exhibits in their reading room, and one year she was showing off a collector’s rare books on the history of coffee and tea. It turns out the habit of drinking coffee and tea didn’t come to Europe until the 17th century with regular trade routes to the Far East. Before then, people drank mostly small beer and wine during the day, because the alcoholic content killed off the bacteria in their water supply. Nobody drank plain water because it made you sick—something about putting the wells too close to and downhill from the privies. So, it was sip, sip, sip all day long, from breakfast to bedtime, and this explains a lot of Shakespeare. But with coffee and tea, the water is boiled, which also kills the bacteria. And while the caffeine boosts energy and alertness, reducing everybody’s daily dose of alcohol explains a lot about the Enlightenment. This was also the time of Lloyd’s coffee house as a burgeoning center of commercial activity.

2. Just to be clear: robotics is not only the machine to make the product, but the design and manufacturability of the product itself. Remember when cars had dashboards with separate dials mounted in different holes in front of the driver? Robotics as an artform is not just having a machine drill the holes in metal and placing the gauges but redesigning the instrument system in the first place into a module that’s made and tested elsewhere, can be plugged into the driver’s position with one click and a multi-connector—and eventually will be replaced by an AI that controls all the functions of the vehicle itself. New manufacturing systems imply new design choices, and so the technology moves ahead.
    In the same way, most processed foods these days incorporate packaging into the manufacturing stream. Nobody bakes up a million Oreo cookies, joins them with the filling, and then puts them in cold storage until it’s time to sell them. No, the ingredients go from mixing to ovens to filling to tray to airtight sleeve to cardboard box to shipping carton, all in one streamlined process. Oh, and in case you wonder why the cookies don’t go bad for six months or a year, that process includes not only making the food under sterile conditions but also hitting the packaged goods with a hard dose of radiation—usually gamma rays—which kills any bacteria. What a fascinating age we live in!

3. Don’t believe me? Watch any episode of the Canadian documentary series How It’s Made.

Sunday, August 18, 2024

What Works

Abstract mask

A lot of people are not going to like this. And a lot of them are family and friends whom I respect. But so be it.

This country works. The system works. The economy works. Mirabile dictu, it functions. Not perfectly. And not in ways that you can always explain. But stuff gets done. People get fed and cared for and served with what they want and need—for the most part.1

And we’re rich. Our gross domestic product, by any measure, is the envy of the world. We are rich enough that our best and brightest can look at the fraction of our population that is doing less well than the rest of us and believe that makes us a broken and foundering society. We are rich enough to deceive ourselves into thinking we are poor.

What makes this all work? Money. Other people can make money by feeding you, clothing you, building a house for you, and entertaining you. And not just to some minimal standard that will keep you sheltered from the elements and stop you from starving, but to whatever standard you aspire to. You have your choice of neighborhood, clothing styles, types of foods. You can eat so well that you grow fat—or you can get special foods, tailored nutrition, and now medications—although expensive ones—that will help you become thin again.

Our medical services are the envy of the world, even for the developed world of Europe and Eastern Asia. Again money. Other people can make money by taking care of you. They can get rich by thinking up, developing, and testing new drugs to treat your illnesses, by providing new services to help in your old age or if you become disabled, and by finding new ways to treat traditionally degenerative and previously incurable conditions. The money doesn’t always come from your own pocket—or not directly—because you usually pay for insurance that covers the costs. And yes, there are some medical conditions that may not be covered, or not right away. But by and large you can get coverage for a wide range of life’s illnesses by pooling your monthly payments and deductibles with others.

Our education choices are the envy of the world, too. We provide schooling to any child who will sit down and learn. Yes, it’s usually paid for by a local tax on property—on your house or your landlord’s building, on your nearby shops and businesses—but it’s still money from your community, for the most part.2 And it works because there are people who are willing to teach your child if we will offer them a living wage. At the higher levels, and with more direct contributions from the families of those who would learn, a good education in a variety of subjects—from the traditional, valuable enrichment courses that offer a good life, to the scientific and professional studies that offer a good career, along with some of the more frivolous courses that used to be just for fun—is widely available. And if you don’t mind missing out on name-brand scholars in the Ivy League institutions, you can get a pretty good grounding in whatever subjects you want at local community colleges that are almost tuition-free.

All it takes is money.

Oh, no! The profit motive! Grubbing for money! Other people getting rich! Aren’t we beyond all that? In a spiritually pure and stainless world, full of rational people, aren’t we better than that?

No, not at all. Money and its motivations—the chance of getting paid for what you do, of getting ahead by providing a good or a service that other people will buy, and maybe of becoming rich by thinking up a new good or service that will attract, inspire, or delight other people to give you some of their income—money and its exchange are the secret to a healthy economy.3 If you make it worthwhile for other people—actually, strangers—to feed, clothe, house, and entertain you, then voilà! You get choice foods, stylish clothes, and comfortable accommodations. When you are a customer, a free-to-choose consumer, a profit center in other people’s business model, then they will bend over backwards trying to figure out what you need or might want and find ways to give it to you. That is the free-market, free-enterprise, capitalist system that a lot of people today would like to change.

It’s messy, of course. Not everyone gets everything that they might want or need. And sometimes things get made, services are offered, and prices are asked that people have no interest in and are not willing to pay. On the surface, that looks like waste. That looks foolish. But for the people making and offering those goods and services, if they think wrong, make bad decisions, don’t or can’t “read the market” correctly, then they lose customers, their investors lose money, they go out of business, and the waste stream gets cut off. Problem seen, problem solved.

On the other hand, if a company comes up with a new product, a new customer want or need that it can service, then its business will grow, investors will flock, and perhaps the dynamics of the marketplace will change. Think back to the—what? the mid-1980s—when the government was concerned with the regulated monopoly of the phone lines, AT&T, “Ma Bell.” One big company ran the country’s whole communications system. The government and the courts sought ways to break it up and promoted competitors to take over the regional markets. That worked. But what really took down the monopoly was challenging and ending the phone company’s single requirement that any equipment connected to its system—the telephone sitting in your office or home—be provided by Western Electric, the AT&T subsidiary. And you couldn’t buy that telephone but had to rent it for a monthly charge on your AT&T bill. Ending that restriction opened the telecommunications world to competition from third-party instrument makers, to innovation, and to a better overall communications experience. And then came the cell phones—the first one was the Motorola DynaTAC 8000X, marketed in 1983—and digital packets in place of continuous analog signaling, which really broke up the landline and long-distance empire.4

And what’s the alternative to competition? What is the choice toward which a lot of people today would like to move our country? Blandly, it’s some form of socialism, a command-and-control economy run by the supposedly smartest people. They would like a system where blind faith in the freedom and intelligence of individuals, in the ghostly hand of a marketplace that gropes towards satisfying customers and making money while doing so, where all of that supposedly magical thinking doesn’t exist. Instead, they want a rational society where the best and brightest minds work out exactly what other people will need and try to provide it for them. Not more than they need, nothing sloppy or extravagant, just the 2,000 calories a day for nutrition, one-size-fits-most clothing to cover their nakedness, and 320 square feet of clean, well-ordered living space. That’s all anyone really needs. That’s all that everyone will eventually get.5

And of course, this well-ordered society will pay for all this—this needful amount for each and every one of us—by taking a large fraction of everyone’s wages, the portion that would go to their nondiscretionary spending in the first place, and leaving just a bit for some art, a Sunday amusement, or alcohol and drugs. Your choice there.

In a modern, industrialized, technologically driven society, you are either a customer and potential profit center, or you are a cost and a potential liability.

Go to the countries that have ventured down the socialist path. Not all the way, of course, because those are dead places full of people too broken to even try to leave. But those countries where the bite hasn’t cut to the bone. There the food is either rationed or randomly available, the clothing is drab and in limited supply,6 and the housing is falling apart. When the government first takes in taxes what it thinks will be required to feed, clothe, and house people, take care of their medical needs, and provide for their non-productive old age, and gets all that money in one big pot—then doling it out becomes an exercise in cheese paring. Command-and-control economies are cost-conscious, risk-avoidant, and allergic to change.

People are human. Even the best and brightest among us, dedicated civil servants, pledging their lives to the benefit of humanity, still aren’t smart enough or selfless enough to understand and provide for everyone’s personal needs. Decisions must be made. Costs must always be cut. But some scraps and wastage will always get left on the floor, regardless of controls. And some populations are too old, too sick, or too far away from the eye and interest of the central government to be properly served. And then, of course, there are the carpetbaggers—who are always with us—peddling their influence and stuffing their valises with the public silverware.

And even the best and brightest among the entrepreneurs and capitalists are still human. Most are trying to serve their customers honestly and still make a profit. Most know that if they produce shoddy goods and give poor service, they will be spurned and eventually go bankrupt. But they are not all geniuses, and they will sometimes cut corners just a bit too deeply or skimp on quality control in the name of cost savings. And, of course, there are still the crooks—always with us—who will try to sell junk with marketing hype, produce miracle cures that are just chalk pills, and promote massive investment scams. That’s why I favor a mixed system rather than an unfettered capitalism or, yech, full-blown socialism. Let the innovators and entrepreneurs operate in a free market, but watch them through government regulation and litigation in the public interest.

But if you favor something more obvious and stringent, remember: You are either a customer and a potential profit center or a cost and a potential liability. Choose wisely.

1. Oh? What about the homeless? What about the poor? Well, what about them? The class of people we consider “poor” in this country live like middle class in many other parts of the world: usually decent housing, their own cars, television sets, cable connections, cell phones, and readily available food. These may not always be the best and most desirable versions of a good life’s artifacts, but they are generally serviceable. And our poor people have education made available to them and many paths to a better life. We are a rich and generous country.
    And the people living in tents on the street? They can get meals and other services that are generously provided for them. If they’re suffering, it’s because they have intractable addictions to alcohol or drugs, or a mental illness for which they decline to seek treatment, or they just can’t cope with the complexities of modern life within the system. The money is there to treat them—we are a rich and generous country—but they just won’t take advantage. We throw billions of dollars at them—an estimated $24 billion here in California alone—which go into multiple service organizations to support the homeless, and still they live outside of what most of us would consider a stable situation. They have the personal freedom to reject the help being offered to them.

2. There’s a thought going around—based, I think, on a speech President Obama once gave concluding with “you didn’t build that”—which says that if you like paying taxes, sending your children to public schools, or driving on streets and roads paid for with state and federal funds, then you’re a socialist. Well, with the same logic, I could say that if you work for a private company, have your retirement account invested in the stock market, or buy your groceries at Safeway, then you’re a capitalist. Your personal situation in a large, developed country is never simplistic. Knee-jerk political positions are for morons.

3. See for comparison It Isn’t a Pie from way back in October 2010. One of my earliest blogs and still, I think, true today.

4. If the communications system had been a government monopoly—as under socialism, which is always conservative, seeks to control costs, and avoids risks—you would still be dialing a rotary phone with mechanical switching and paying extra for peak long-distance service.

5. This is Bernie Sanders’ world where you don’t need twenty-three brands of deodorant—just, I suppose, the one he prefers. This reminds me of Westerners who journeyed to Stalin’s Russia in the 1930s and found public places redolent of “Soviet scent.” One size, one smell fits all.

6. At the height of the Soviet experience, people used to shop with lists of their family’s and friends’ sizes in clothing, shoes, gloves, etc. Whenever something became available in the stores, you wanted to be able to buy it, even if not for yourself.

Sunday, August 4, 2024

Dark Anything

Quantum physics

Did I mention that I’m an atheist? Or rather, an agnostic on steroids. It’s not that I hate or despise or actively deny the existence of a god—I just don’t see or feel the need for one. But, with a gun to my head, I must admit that I just don’t know. And this state of unknowing, along with healthy dashes of doubt and skepticism, colors my tendency to disbelieve in anything I cannot prove or have logically demonstrated to me.

So, um, physics … The acceptance of and belief in what’s called “dark matter” is based on the motion of stars in a galaxy. We add up all the things we can see at galactic distances—that is, the things that shine brightly—and compare them to what we see in our own solar system and nearby detectable systems. The biggest things, the heaviest things, and the ones that shine most brightly are stars. Our Sun is about 99.9% of the mass of our local system. Not even Jupiter and Saturn compete as heavyweights. The rocky planets, the moons, the asteroids, and the icy comets are just a rounding error. That bright thing up there in the sky—the thing that can be seen from other systems in the Milky Way, if anyone is looking—is essentially the mass of our solar system. And we have no evidence that other stars, other bright things we can see, are any different.

Well and good. But add up all the bright things in visible galaxies, estimate their collective mass, and compare that mass and its computed gravitational effect to the observed motion of the galaxy’s stars—and you get an anomaly. The stars in the average galaxy, based on the mass holding them together as a system, should be rotating like wood chips swirling in a whirlpool. That is, the ones near the center should be circling faster than the ones on the periphery. But instead, we observe that they rotate like stars painted on a disk, with the outer ones keeping pace with, and so moving faster than, the inner ones. For that to happen, the galaxy must contain more mass—have a higher gravity attraction and greater bending of spacetime—than the sum of the mass of the bright things. From this is born “dark matter”—can’t see it but weighs a lot. Current candidates are “weakly interacting massive particles” (WIMPs), which would be heavy subatomic particles that interact with ordinary matter only through gravity and, perhaps, the weak force, and “massive compact halo objects” (MACHOs), which would be big objects in the periphery of galaxies, like huge rogue planets or roving black holes, that just don’t shine.1
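
For the curious, here is a minimal sketch of the anomaly. If all of a galaxy’s visible mass sat at its center, orbital speed would fall off with distance as v = √(GM/r); observed rotation curves instead stay roughly flat. The mass below is an assumed round number—about 100 billion suns of “bright” matter—purely for illustration:

    # Keplerian orbital speeds if the visible mass were concentrated at the galactic center.
    import math

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    M_VISIBLE = 2e41       # kg, roughly 1e11 solar masses (assumed, illustrative)
    KPC = 3.086e19         # meters per kiloparsec

    for r_kpc in (2, 5, 10, 20):
        v_km_s = math.sqrt(G * M_VISIBLE / (r_kpc * KPC)) / 1000
        print(f"r = {r_kpc:>2} kpc: Keplerian v ~ {v_km_s:.0f} km/s (real curves stay roughly flat)")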

The other issue in physics is the expansion rate of the universe. We know that the universe is expanding because we can use “standard candles”—certain periodically variable stars and certain types of exploding stars, both with a known brightness—to measure the distances to faraway galaxies. By knowing the actual brightness of these objects and comparing that with how bright they appear in our telescopes, we can estimate how far away they and their home galaxies are. In the same way, if two lights on distant hilltops look equally faint, and you know one is a 100-watt bulb and the other a 1,000-watt bulb, then the 100-watt bulb must be the closer of the two.
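
The reasoning behind standard candles is just the inverse-square law: apparent brightness falls off with the square of distance, so an object of known output that looks dim must be far away. A minimal sketch, using the bulb wattages from the analogy above as purely illustrative numbers:

    # Inverse-square law: apparent flux = luminosity / (4 * pi * distance^2).
    import math

    def apparent_flux(luminosity_watts, distance_m):
        return luminosity_watts / (4 * math.pi * distance_m ** 2)

    # A 1,000 W bulb looks only as bright as a 100 W bulb when it sits sqrt(10) ~ 3.2 times farther away.
    print(math.isclose(apparent_flux(100, 1000), apparent_flux(1000, 1000 * math.sqrt(10))))   # True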

But the other thing we see, aside from the apparent brightness of known stars, is that the light from all the stars in a particular galaxy, including these standard candles, is “red-shifted.” That is, instead of being, say, bluish-white—as we would expect to see in the light from a similar star in our own galaxy—the light from distant galaxies has a longer wavelength, more reddish in appearance. This is the Doppler effect. If the stars were coming toward us—and some galaxies, like Andromeda, are actually approaching the Milky Way—then the light waves would be compressed, more bluish, just as the sound waves of an approaching train’s horn are compressed and so appear higher in pitch. But stars that are going away shift redder, their light waves more stretched out, like the descending wail of a train that’s leaving you. And the farther away a galaxy is, the redder its apparent light. So from this, astronomers conclude that the universe is expanding.
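
For readers who like the mechanics, astronomers quote the red shift as z, the fractional stretch in wavelength; for nearby galaxies the implied recession speed is roughly z times the speed of light. The wavelengths below are hypothetical, chosen only to show the arithmetic:

    # Redshift z = (observed wavelength - emitted wavelength) / emitted wavelength.
    def redshift(observed_nm, emitted_nm):
        return (observed_nm - emitted_nm) / emitted_nm

    C_KM_S = 299_792                     # speed of light, km/s
    z = redshift(700.0, 656.3)           # hypothetical: a hydrogen line stretched from 656.3 nm to 700 nm
    print(f"z = {z:.3f}, recession of roughly {z * C_KM_S:,.0f} km/s (approximation valid only for small z)")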

Well, yes. It supposedly exploded from a single infinitely hot, infinitely dense point some 13.8 billion years ago. So, of course, it’s expanding. But based on observations of galactic red shifts at various distances, as well as from analysis of the cosmic microwave background radiation—the fading “echo” of the Big Bang—the expansion rate is apparently increasing. The momentum of the explosive expansion isn’t fading or even holding steady—it’s speeding up. Nothing in current physics explains this result, and so astronomers have proposed that the universe is influenced by something called “dark energy.” There is no good candidate for this energy, except perhaps some kind of “vacuum energy” that resides in and drives the expansion of empty space itself. But whatever it is, it’s vast.2

If you collect all the visible, bright matter we can see in the universe and compare it to our computation of the amounts of dark matter necessary to keep the galaxies spinning like they do, then dark matter makes up about 85 percent of the physical stuff in the universe—way more than the bright objects we can see and, supposedly, interact with.

And if you try to account for all that dark matter plus the dark energy necessary to create and drive the expansion of the known, observable universe, then about 69 percent of everything is dark energy, 26 percent is dark matter, and only 5 percent is the familiar, atomic matter that we can see as stars, planets, dust, gases … stuff. So, in short, we know very little, and have only vague conjectures and initial theories, about the vast majority of the universe we live in.

That’s either a shameful admission—or a vast opportunity.

In a blog last month, I mentioned a breakthrough discovery that gravity—which has long been conspicuously absent from the calculations of quantum mechanics, the science that deals with the invisible world of subatomic particles—may actually exist at and be measurable at the microscopic level. Einstein’s theories of relativity, dealing with the macro world of planets and stars, and quantum mechanics, governing the realm of the unbelievably small, were long thought to be mutually exclusive and irreconcilable. But if gravity is exerted by grains of sand and maybe by subatomic particles, then we may finally be in line to create a “Theory of Everything,” combining relativity and quantum physics—the goal of physicists since early in the twentieth century. That may eventually explain mysteries like dark matter and dark energy.

What if gravity—the bending of spacetime according to the amount of available mass—exists and could be measured not just at the level of stars and planets, but in grains of dust, atoms of dispersed gases, and subatomic particles flying through “empty” space? The effects would be small, vanishingly small. After all, the effects of gravity are subtle. It’s the weakest of the four known and fundamental forces.3 The mass of the entire Earth has such a weak gravity field—an acceleration of 9.8 meters per second squared toward its center from the planet’s surface—that you can overcome it briefly simply by jumping and more permanently by firing off a chemical rocket.
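
That 9.8 meters per second squared, by the way, is just Newton’s g = GM/r² evaluated at the Earth’s surface; a quick check:

    # Earth's surface gravity from its mass and mean radius.
    G = 6.674e-11          # m^3 kg^-1 s^-2
    M_EARTH = 5.972e24     # kg
    R_EARTH = 6.371e6      # m
    print(f"{G * M_EARTH / R_EARTH ** 2:.2f} m/s^2")   # ~9.82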

But over vast distances, the distances between galaxies? We know that the bending of spacetime caused by a gravity field creates a “time dilation.” That is, time slows down under the influence of gravity. A clock on the surface of the Earth ticks more slowly than one out in interplanetary or interstellar space. It’s not just that the clock’s mechanism is retarded by the force of gravity, but time itself as the clock measures it has slowed. And a clock positioned near the event horizon of a black hole is slowed so much that the passage of hours it records would register as years to an astronaut orbiting outside the black hole’s effective gravity well.4
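
For the curious, the slowdown outside a simple, non-rotating black hole follows the Schwarzschild factor √(1 − r_s/r), where r_s is the hole’s Schwarzschild radius. The sketch below assumes a 100-million-solar-mass black hole—an arbitrary choice for illustration, not any particular object:

    # Gravitational time dilation factor outside a non-rotating (Schwarzschild) black hole.
    import math

    G, c = 6.674e-11, 2.998e8
    M = 1e8 * 1.989e30                  # assumed: 100 million solar masses, in kg
    r_s = 2 * G * M / c ** 2            # Schwarzschild radius, meters

    for r in (1.001 * r_s, 1.1 * r_s, 10 * r_s):
        factor = math.sqrt(1 - r_s / r)
        print(f"at r = {r / r_s:.3f} r_s, one local hour registers as {1 / factor:.1f} hours far away")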

So … If dust, gases, and subatomic particles hanging about in intergalactic space have actual gravity effects—bending spacetime for anything small enough that passes near them, like a moving photon—might not that affect the timing, the energy level, of the photons themselves? This is a conjecture, similar to but not exactly the same as the “tired light” hypothesis described in the second footnote below.

I told you I was an agnostic, and I write a kind of science fantasy—not real but hypothetical and, hopefully, barely plausible. As a skeptic, I don’t accept that our current knowledge of anything is actually final.

Sir Isaac Newton refined the observations and theories of the ancient Greek and Babylonian astronomers. Then Albert Einstein refined the observations and mathematics of Newton. And then Niels Bohr and the quantum physicists threw it all into a cocked hat at the subatomic level. Edwin Hubble observed that the light of distant galaxies was red-shifted, and so concluded that those galaxies were moving away from us and that the universe was expanding. Cosmologists then used this expansion to wind it back to a single point some 13.8 billion years ago, and the cosmic microwave background—the cooled-down remnant of the Big Bang’s radiation, detected by Arno Penzias and Robert Wilson—seemed to confirm that dating. Then astronomers figured that the universe is actually larger than it could have grown in that time, and so Alan Guth proposed inflation—that at the moment of the Big Bang, the universe expanded exponentially fast, faster than light speed—to finally arrive at the observable scale. Now we have dark matter and dark energy—unknown and, so far, undetectable “things” and “forces”—to explain anomalies in our observations that don’t fit our theories.

As I said before, these are exciting times. But the whole thing may eventually get tossed into a cocked hat. The Big Bang is, after all, just another creation story, like God moving across the waters and separating the light from the dark, or Great Raven dropping a wing feather upon the Earth to create everything. We yearn to know. And when we don’t, we keep piling theory on theory, until eventually we come upon a universe made up of things and forces we can’t know and can only imagine.

We ain’t done yet.

1. By now it’s generally accepted that our galaxy—and every other one we can study at close range—includes a massive black hole at its center. These invisible objects have a mass on the order of millions of suns, give or take. The one at the center of the Milky Way is about 4.3 million suns. So, why doesn’t that constitute “dark matter” all by itself? Well, our galaxy has over 100 billion stars; so the central black hole amounts to only a few thousandths of one percent of the galaxy’s stellar mass—kind of the situation of all the rocky planets, moons, and asteroids in the solar system.
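
Here is that fraction worked out from the footnote’s own numbers, treating each star as roughly one solar mass—an assumption, since stellar masses vary widely:

    # Central black hole mass as a fraction of the galaxy's stellar mass.
    bh_mass_suns = 4.3e6              # Milky Way's central black hole, in solar masses
    stellar_mass_suns = 1e11          # "over 100 billion stars," each taken as ~1 sun (rough)
    print(f"{bh_mass_suns / stellar_mass_suns:.4%}")   # ~0.0043%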

2. An alternative to this Doppler interpretation of the red shift has been proposed—one that would discount the continuing expansion of the universe in the first place. Suppose light that travels long distances just “gets tired.” Suppose that photons traveling so far—not just between the Sun and the Earth, or from nearby stars to the Earth, but the billions of light years between galaxies—tend to lose energy, so that their wavelength becomes increasingly longer and redder. But there doesn’t seem to be any mechanism driving this effect. Nothing in the vacuum of space exists to “sap the energy” of a moving photon. And just as a body in motion tends to stay in motion—according to the first law of Sir Isaac Newton—until and unless some outside force acts upon it, so light waves should retain their energy unless they interact with something.
    It is part of the General Theory of Relativity that objects in deep gravity fields—spacetime that is bent sharply enough—experience time dilation, as described above in the main text. A physical object in the gravity field of a black hole and approaching its singularity or one moving at light speed—if it could ever attain that velocity—would experience the complete stoppage of time. So presumably a photon, which has no mass, experiences a timeless, changeless existence. But might it not also, at that speed, because it is moving through the dilation, lose just a bit of its energy, become just a little red-shifted, and more so the longer it travels? Conventional physics and cosmology say not, but our observations and theories are continually evolving.

3. The strong nuclear force, which binds quarks together to form protons and neutrons and, as a residual effect, holds those particles together in atomic nuclei; the weak nuclear force, which governs radioactive decay; the electromagnetic force, which holds electrons in their atomic “orbits” or shells and accounts for the joining together of atoms into molecules, among other effects; and gravity, which bends spacetime around large masses and holds planets, stars, solar systems, and galaxies together.

4. This was one of the effects correctly described—not all of them were!—in the 2014 movie Interstellar.

Sunday, July 14, 2024

Strange New Energies

Embryo fold

It’s right on the tip of my tongue, the edge of my mind, about how we will drive to the stars. This is because I’m a science fiction reader and writer and have bathed in notions of faster-than-light travel, generation ships, warp drives, and worm holes since I was a teenager. It’s part of the mythos that human beings will one day have the energy, derived from our understanding of physics, that will let us cross the vast gulfs between the stars. Our stories abound, too, with other beings, other cultures, aliens who supposedly have cracked the secret. And while they are obviously more technically advanced than us, they are not magicians.1

We human beings are aware of and have the potential to use three forms of energy. One is chemical, the making and breaking of atomic bonds, representing electrons donated to or shared between the energy shells of different atoms. This is every form of energy known and used up to the twentieth century. It is the energy of fire, of the chemical processes that drive reactions inside our cells, of burning hydrocarbons to make heat, and of releasing nitrogen atoms from complex molecules that then join together in ravenous diatomic bonds that drive most chemical explosions. It will also drive chemical rockets into orbit around the Earth, as far as the Moon and outer planets, and eventually, by inertia, over vast amounts of time, beyond our solar system.2

Within the last hundred years, we have learned about newer forms of energy. We discovered that heavy atoms—those with lots of protons and, in certain isotopes, an unstable balance of neutrons—can be induced to break apart at the nuclear level into smaller atoms and so create heat. This reaction is not normally found in nature in any great quantity—not enough so you would notice—although the decay of heavy, unstable atoms is a suspected contributor to the heat at the cores of the rocky planets. Nuclear fission is the energy of the first atomic bombs and of reactor “piles,” and it has since been promoted—although not without opposition—to supply our electrical grid.

And then we learned, almost immediately after discovering the fission process, that the lightest atoms—those with just one or two protons and almost no neutrons—can be induced to come together to make slightly heavier atoms and also to make heat. This is the reaction that drives all the stars—at least until they burn out to become cinders, masses of pure neutronium, or black holes. We’ve been trying ever since to harness this reaction on Earth and put it into our grid, but the fusing process is trickier and more unstable than fission. And so far, although we’ve tried collapsing plasmas with powerful magnetic fields and laser pulses, the only thing that seems to work reliably is a massive gravity source—the weight that brings a star together in the first place. So, although there are a gazillion times more light atoms than heavy ones on this planet, and the newly fused atoms don’t create a waste problem, we still haven’t found a way to use the reaction to make steam and drive generators—we just use it to make bombs.

Finally, in a form of energy that we don’t actually use yet, we know that a particle and its mirror-image counterpart—ordinary “matter” and its “antimatter” twin—will come together under the right conditions to annihilate each other in a cascade of pure energy with essentially no material waste products. This sort of reaction is presumed to be the ultimate release of energy, and it’s favored in the Star Trek television series to drive their “warp engines.” The only problem—Hoo-whee! “Only”!—is that while matter exists all around us in a great variety of forms, antimatter is extremely rare, actually nonexistent in our everyday life, and can only be created at great expense in particle accelerators.3 So, driving your ship or powering your grid with a matter-antimatter explosion is going to cost you an arm and a leg, repeatedly, thousands of times a second, over the course of your journey.

So, those are the kinds of energy we know: manipulating electron bonds, creating fission inside big nuclei, and creating fusion of small nuclei, as well as the potential for matter-antimatter annihilation. All the rest is just fantasy and … magic.
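To put rough numbers on the gap between these sources, here is a quick sketch in Python comparing approximate energy released per kilogram of fuel. The figures are the commonly quoted ballpark values, not precise ones, and the annihilation line simply applies E = mc2 to the whole fuel mass (half matter, half antimatter).

    # Back-of-the-envelope comparison of energy released per kilogram of fuel.
    # All values are approximate, commonly quoted figures.
    C = 299_792_458.0  # speed of light, m/s

    energy_per_kg = {
        "chemical (burning gasoline)": 4.6e7,   # ~46 MJ/kg
        "fission (uranium-235)":       8.2e13,  # ~82 TJ/kg
        "fusion (deuterium-tritium)":  3.4e14,  # ~340 TJ/kg
        "matter-antimatter":           C ** 2,  # E = m * c^2, ~9.0e16 J/kg
    }

    baseline = energy_per_kg["chemical (burning gasoline)"]
    for source, joules in energy_per_kg.items():
        print(f"{source:30s} {joules:9.2e} J/kg  ({joules / baseline:14,.0f} x chemical)")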

But there, on the tip of my tongue, the edge of my mind, are energies as yet undiscovered, that won’t be discovered until we have a different appreciation of physics.

In the short story “Waldo” by Robert A. Heinlein, a young man with myasthenia gravis—or muscle weakness due to an autoimmune disease—is taught by an Amish farmer to tap into the latent energy of the universe by drawing strange glyphs on his machines. And since that story came out in the 1940s, cosmologists have discovered that the expansion rate of the universe is mysteriously increasing—a fact they attribute to what’s called “dark energy.” Some theorists believe this is some kind of “vacuum energy” that exists in the depths of space without any interfering particulate matter around.

Dark energy poses problems, as does the actual expansion rate of the universe. By sighting known stars in distant galaxies—Cepheid variables and certain types of supernovas, whose predictable brightness provides astronomers with “standard candles” for judging distance—and measuring their red shift to gauge how fast they are receding, we can figure that the universe itself is expanding, and that the galaxies are pulling away from each other, at a rate of 73 kilometers per second per megaparsec (km/s/Mpc, where a megaparsec is about 3.26 million lightyears), with an error of plus or minus 1.0 km/s/Mpc. This has been confirmed with measurements taken by both the Hubble and the James Webb space telescopes.

But this is a different expansion rate from the one derived by measuring the cosmic microwave background (CMB) radiation. This is the “hum” of energies left over from the Big Bang—after light waves were first freed from all the precipitating particulate matter in the mix—that have been attenuated—or red shifted—from their originally high energy down to about 2.7 kelvins—almost stone cold—by the subsequent expansion of the universe. This measurement yields an expansion rate of 67.4 km/s/Mpc, with an error of plus or minus 0.5 km/s/Mpc. The expansion rate by the microwave measurement has been the standard of cosmology for decades—and it doesn’t jibe with standard-candle observations. The difference is not insignificant. So, what gives?
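To see why the difference matters, here is a small sketch that turns each measured Hubble constant into a naive “Hubble time,” the age a constant-rate expansion would imply. The real age calculation uses the full expansion history, so treat these as illustrations of the gap, not actual ages.

    # Convert each Hubble constant into a naive "Hubble time" (1 / H0):
    # how long a constant-rate expansion would take to reach the present.
    # This ignores the changing expansion rate, so it only illustrates
    # the size of the disagreement.
    KM_PER_MPC     = 3.0857e19  # kilometers in one megaparsec
    SECONDS_PER_YR = 3.156e7

    for label, h0 in [("standard candles", 73.0), ("microwave background", 67.4)]:
        seconds = KM_PER_MPC / h0
        billions_of_years = seconds / SECONDS_PER_YR / 1e9
        print(f"H0 = {h0} km/s/Mpc ({label}): about {billions_of_years:.1f} billion years")
    # about 13.4 versus 14.5 billion years -- an eight percent disagreement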

And now there is a breakthrough discovery suggesting that gravity—which has long been conspicuously absent from the calculations of quantum mechanics, the science that deals with the invisible world of subatomic particles—may actually be present and measurable at the microscopic level. Einstein’s theories of relativity, dealing with the macro world of planets and stars, and quantum mechanics, governing the realm of the unbelievably small, were long thought to be mutually exclusive and irreconcilable. But if gravity is exerted by grains of sand and maybe by subatomic particles, then we may be in line to create a “Theory of Everything,” combining relativity and quantum physics—the goal of physicists since early in the twentieth century. It may eventually explain mysteries like dark matter, which appears to drive the internal motions of stars in a galaxy, and dark energy, which appears to drive the motion of galaxies themselves and the expansion of the universe. These are exciting times.

And still, the idea haunts me: that somewhere, out between the stars, exactly where we want to go, there’s an abundance of energy that’s just waiting to be tapped—understood, captured, and used—if we only knew how. Maybe by drawing strange glyphs on our equipment?

But this is all just theoretical. And feeding my haunts is the fact that there is so much more we don’t understand about the universe. In fact, as I have suggested in the past,4 we really don’t understand three basic concepts in physics: the real nature of space, of time, and of gravity. But maybe, just maybe, we are beginning to …

1. Of course, as Arthur C. Clarke wrote: “Any sufficiently advanced technology is indistinguishable from magic.” But magic is by its nature impenetrable and practiced on an arcane level of the mind, while technology is—with the right mental and philosophical tools—comprehensible.

2. An offshoot of this chemical energy—or maybe an entirely new form of energy, making four usable forms—is the photoelectric effect. There, a photon—a particle of light energy, as from sunlight—impacts the right kind of material and knocks loose an electron from its orbit around an atomic nucleus. With the right setup—say a semiconductor sandwich connected to a circuit—the freed electron goes one way, and the “electron hole”—the material’s atomic need to complete that electron shell—goes the other. This creates a flow of electricity. Something similar happens in a fuel cell. But it’s still energy based on movement and exchange of electrons, which is in the realm of chemistry.

3. How much would a source of antimatter cost? At CERN’s Antiproton Decelerator, they can potentially make about a billionth of a gram of antimatter per year—or, over about ten years, enough antimatter to power a sixty-watt lightbulb for an estimated four hours—but that is not the CERN equipment’s intended purpose. To make a single gram—1/28th of an ounce—of the stuff would reportedly cost a million billion euros. Yeah, whether you think in terms of euros or dollars, that’s an unreasonable amount to pay to drive your starship over, what, about ten inches?
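That lightbulb figure is easy to check with E = mc2. Here is a quick sketch using the numbers above and counting only the rest energy of the antimatter itself; if you also count the ordinary matter it annihilates with, the total roughly doubles.

    # Ten years of production at about a billionth of a gram per year,
    # converted to energy with E = m * c^2 (antimatter rest mass only).
    C = 299_792_458.0               # speed of light, m/s
    mass_kg = 10 * 1e-9 / 1000.0    # ten billionths of a gram, in kilograms

    energy_joules = mass_kg * C ** 2          # ~9.0e5 J
    hours_of_60_watt_bulb = energy_joules / 60.0 / 3600.0
    print(f"{energy_joules:.2e} J, or about {hours_of_60_watt_bulb:.1f} hours of a 60-watt bulb")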

4. See my blogs Fun With Numbers (I) and (II) from September 2010.

Sunday, June 23, 2024

Getting the Fear Message

People puppets

I don’t know what’s on your Facebook feed these days—or if many people actually have Facebook anymore, and not some other social media platform. But still, I’m getting a lot of what we used to call “blooper reels.” Maybe it’s because I pause to watch them—there doesn’t seem to be any clicking involved—that the media gods decide to send me more of these. And there doesn’t seem to be any advertising, just a series of twenty or thirty of these three- to five-second clips, obviously gathered from home videos and smartphone cameras, rather than staged. A counter always clicks down to tell you how many more of them you get to—or have to—watch.

These clips are supposed to be hilarious, but most of them are just brutal. Someone is walking alongside a swimming pool at a party, missteps, and falls in—but it’s not a clean fall, and they usually smack their knee or their head on the pool’s coping. Ouch! Or some kid is skateboarding and decides to jump on a stairway’s center railing and ride down on the flat of his board—but instead he tumbles and lands on his neck. Ouch—and maybe a couple of fractured vertebrae! Or someone is swinging from a rope in a tree over the bank of a river or a lake—but he or she lets go too soon and slides face-first in the dirt before plowing into the water. Double-ouch—with a mouthful of gravel.

Being an empathetic sort, I feel an electric jolt go through my body with each of these impacts. I know the person is getting a serious injury—or at least I would if this were happening to me. The sensible thing would be to scroll on before the mayhem starts, but I just can’t look away. Maybe that’s the masochist in me. (I don’t think I’m a sadist, taking delight in these people’s pain. And I don’t find these bloopers at all funny. But they are … hypnotic.)

Some of these blooper reels involve motorcycles, too, where a person who is unmindful—or perhaps never learned to ride in the first place—does something stupid. He or she gives the throttle too much twist starting out, or tries to do a wheelie, or has a friend jump on the back while the machine is moving. And predictably, the bike upends or wobbles around and then falls over.

And finally, there is a variant of the blooper reel showing traffic accidents, apparently derived from dash cams. Cars swerve in front of the driver, trailers come unhitched, motorcycles get side-swiped or rear-ended. And in some cases, a car or truck up ahead has stalled on railroad tracks after the gates go down, a train barrels through, and the car is destroyed. These things are almost a repeat of the “death on the highway” films they used to show—maybe still do—in driver’s ed class.

Why do I mention all this? Because I recently sold off my motorcycles and have occasionally thought about buying another one. And seeing a sudden influx of these blooper videos with real pain in them as well as unpredictable highway crashes is having a subliminal effect. Maybe the social media gods are pointing the middle finger at me. Maybe nobody else is seeing these things and reacting. Or maybe it’s a campaign to teach us fear—or, if you find yourself laughing at these bone-crunching exercises, to make us callous and cruel.

The thing that resonates with me, especially in the automotive clips, is that the visual experience is eroding my personal confidence.

Motorcycle rider

To be able to move through your day, you must be able to forget—or at least not dwell on—the possibility of falling and breaking your neck, or stepping into the street and getting sideswiped by a car, or reaching for something on a shelf and having it collapse in your face. You need to believe that you are competent, balanced, centered, and in control. If you think about all the possibilities for death and disaster all the time, you will be frozen with fear.1

To ride a motorcycle, you must have a certain belief in your own mastery and, yes, your invulnerability. We used to call it extending your chi, your spiritual force, around yourself and the bike. You must think ahead, maintain your margins, keep your eyes on a swivel, and believe that you have the roll-on speed and swerve-avoidance, if not the braking distance, to stay out of trouble. You must adopt the mindset of the “immortal motorcyclist,” or you would never get out there and play among the cars and trucks.

But then there come these blooper reels and highway crash videos. Falling off the bike hurts, just like landing on your neck in a skateboard accident. Falling off at speed scrapes you up and then gives you blunt-force trauma as you come to a stop against a guard rail, bridge abutment, or the bumper of the car ahead. These images are a reminder to me that riding a motorcycle is being a ballistic object held in the saddle only by the force of gravity. And a fiberglass and styrofoam-padded helmet, a leather jacket with neoprene-armor inserts, sturdy jeans, and steel-reinforced boots are not going to be much protection except in the slowest, most dainty of falls.

If the Facebook gods are pointing the finger at me, I am certainly getting their messages of fear. Or maybe I’m just starting to notice them.

1. And this, of course, is a metaphor for life. Every action you take does invite risk. On top of that, we still live under a variable star in a dangerous cosmic neighborhood. And then, whatever you do, sooner or later you will die. Live a perfect life, utterly safe, avoiding all risks, and your organs and connective tissues will eventually clog up and break down anyway. This marvelous meat-covered skeleton made of stardust that you’ve been driving all along is not immortal. And the possibility exists—coming to you sometimes in the middle of the night—that the you who’s driving it might not survive the meat machine’s ultimate collapse. Maybe after life … there is no life.

Sunday, June 2, 2024

Obsessions and Whims

Butterfly

The human brain and the mind that it embodies are always active, always thinking, feeling, reacting … forever humming along. The quality of a person’s life, then, depends on what that mind does, what it feeds on, and what it produces.

And here, I’m thinking about idleness, ease, and a lack of activity, concern, and motivation. Lack of engagement with actual life. The reason for this meditation is that, now that I am retired and between books—with no externally imposed deadline in years, working at my own pace to my own thoughts—and not actively engaged with more than volunteer work and my own pleasures, I find my mind is … wandering. Spinning its wheels. Humming along to no purpose.

This is not a good thing. To someone who has been under the gun with deadlines, responsibilities, things to do in a certain way and a certain time frame for all of his working life, this might seem like a reprieve. And indeed, a few days with “nothing to do,” nowhere to go, no one to meet or satisfy, is a luxury. At least for a few days. But then, the mind keeps humming along.

Without a definite purpose, a life-involving goal or ambition—or conversely, a daily struggle for mortal survival—the mind ends up spinning upon itself. It takes up obsessions that have no purpose or direction. Or it flutters about on the wings of whim and whimsy, alighting nowhere.

Dandelion

I find myself in this state right now. For example, all my life I have been mindful of my keys. I started wearing them on a chain in high school. That way, I never had to worry about leaving them stuck in a door or lying on a table somewhere, put down for just a moment and then slipping my mind completely. If I let go of them, they banged against my leg until I remembered to feed them and the chain back into my pocket.

And then, when I started riding motorcycles, a few years after college, I valued having my keys on a chain. When you sit with your knees high against the gas tank, it puts a slant on the pockets in a man’s slacks. Keys and loose coins can work their way down to the opening and fall out. Now, this never happened to me. I never lost so much as a dime. But you think about this, if your key ring is loose in your pocket. And you don’t happen to think that, if your keychain is longer than about twelve inches, the keyring and chain will fall into the rear wheel, tangle in the spokes, rip up your pants or disrupt the bike itself. You’re more worried about losing your keys in the first place.

So, over the years, it has been my semi-serious hobby—or obsession, take your pick—to find just the right key ring and chain combination. Light chains that are not the pre-made silver things you can buy at a jewelry store are difficult to come by. I’ve used dog choke chains with the end rings cut off, various grades of stainless-steel necklace chains and bracelets, and the light chains used in furniture for drop-leaf desk fronts. Different weights, metals (even a couple in titanium), and finishes (chrome plated or not). To complete the ends, I have generally settled on French marine hardware for the hooks that attach to my belt loops and the shackles that connect to the final link, top and bottom. For the keyring itself, I use a small carabiner or tie down, usually in marine-grade stainless steel rather than the traditional split ring—which always seems to lose its tightness and show a gap with use.

I now have a collection of different chains, different lengths, metals, and finishes, and different keyrings to match with them.

The point of this lengthy disquisition is that, when my mind is not properly occupied with more weighty matters, I tend to obsess about how I wear my keys. The last time I went out, did I fumble a bit with a chain that was too long? Maybe the shorter chain would be more convenient. Or, I’m not really riding a motorcycle—or not right now—so maybe I could drop the chain and just put the keyring naked in my pocket. But then, the last time I handled just the ring, it took my fingers too long to work it around to the key I wanted; maybe I should put on a fob for easy handling. (Oh, yes, I have a collection of fobs, some decorative and some—like those little canisters that hold my caffeine pills or maybe a couple of aspirin, or a small Crescent wrench—more useful.)

Some days, when I am not fighting for my life on the motorcycle or deep in the settling of plot mechanics, I change my keyring, the chain, or the fob two or three times, depending on whim and the vagaries of what feels right at any particular moment. And each time, it’s like, this is perfect for now and forever. Until the next change of mind.

The idle mind is not the devil’s playground; it’s a loose nut rattling around in its shell. I should take up a dangerous hobby, like skydiving or motorcycle riding, to put me in fear of my life and make me concentrate on the essentials.

Sunday, April 14, 2024

Why the Moon

Subatomic particle

In November 2022, NASA launched Artemis 1, an uncrewed test flight of the Orion capsule, a spacecraft capable of carrying four crew members, on a mission to circle the Moon and return. In February 2024, Intuitive Machines of Houston, Texas, launched the Odysseus lander to the Moon’s south polar region. Although it tipped over on its side and its mission ended early, the lander touched down intact, avoiding the kind of damaging impact that has ended some other government-launched landing attempts. And SpaceX is now doing trial runs of its Starship, a heavy lifter—capable of carrying 150 metric tonnes to orbit and beyond—that could eventually get crews, supplies, and building materials to Mars or the Moon.

So, apparently, without having to invoke John Kennedy’s brave vision of the 1960s, “We choose to go to the Moon,” we are going back to the Moon. Yes, maybe to Mars, eventually. But people seem to be committed, by various routes, with various vessels and funding sources, to returning human beings to the Moon. Mars can be inhabited by robots for now, but the Moon will apparently get the first off-world human visitors. Again.

And why not the Moon? Yes, Mars has more gravity, but still a whole lot less than the Earth’s. This means Mars has trouble holding on to an atmosphere composed of molecules lighter than carbon dioxide. So yes, Mars does have an atmosphere, with an ambient pressure less than 1% that of Earth’s and composed almost solely of carbon dioxide. And that near-vacuum still carries dust storms that persist for weeks or months at a time. In contrast, the Moon has no atmosphere, and the dust settles the instant it gets stirred up. Sometimes, the absence of a thing can be a greater blessing than its minimal presence. More importantly, Mars is a long way away, with an outward bound and return trip measured in months rather than days. The logistics of going to Mars and being supplied there are really tough.1

So, we’re going back to the Moon, this time not just to step out and say we did it, but maybe to establish a presence. Maybe to scout a base. And possibly, eventually, to establish a colony. And I say, “Good!” Maybe even, “Hallelujah!”

Why? Because we humans are a curious and exploring species. We walked out of Africa and expanded around the world. It was not just the Europeans who reached the water’s edge and sailed beyond—the heritage of the Vikings, the Portuguese, and the Spanish—but everyone who was dissatisfied with their little plot of land and wanted to look beyond the horizon. Well, now the horizon is beyond the atmosphere. And yes, the artifacts of dead civilizations who never left their planets will be picked over by the living ones who dared to make the trip.

And that brings me to my point. We need the Moon, not just to satisfy our curiosity or to say we did it. We need the Moon as a forward base. Mars, yes, one day, for colonizing, if we ever need the elbow room. But the Moon is our logical off-planet base, outside the deepest part of the Earth’s gravity well, able to focus our telescopes and listen on all sorts of wavelengths because of the lack of atmosphere, and a station on the far side is shielded from all the radio noise on Earth. And the Moon is also the first place that the Others—the friends, the enemies, the intruders, the invaders, the aliens—will set up their own base, their dropping-off point, as they approach the Earth.

And you know they’re coming.

For the past hundred years or so, we’ve been broadcasting coherent radio signals into the sky and leaking them out toward the stars. At the speed of light—which is also the speed of radio waves—that creates a bubble of our babblings two hundred light-years across for anyone who’s been listening.

True, according to the inverse square law,2 those signals that are broadcast rather than beamed directly tend to diminish rapidly. At a distance of one hundred light-years, Marconi’s original radio broadcasts will be remarkably faint—probably on the order of the barest whisper. They might be drowned out by the clang of two nearby hydrogen atoms colliding. And the Sun is a loud star, radiating not only in the infrared—or heat—and visible light but also with lots of radio noise. Compared to that, the broadcast radiations from Earth will be like a mouse farting on the stage at a rock concert.

But people looking for signs of life on planets around likely stars will discount the rock concert. They will be listening for mouse farts. And, if these listeners are already out there, they will probably have better ears than we do.

I’d say it’s a race against time. And we’re already fifty years behind.

1. As I’ve sometimes said, if you want to build a colony on Mars—or the Moon, for that matter—first build a five-star hotel with Olympic-sized swimming pool at the summit of Mount Everest. The logistics are better, and the air is breathable—barely. If that’s too hard, because of the smallish footprint, then build it in Antarctica. Logistics, atmosphere, and temperatures there are a snap.

2. The strength of any radiated signal—radio waves, light waves, sound waves—diminishes at the square of the distance from the source. So, the light from a bulb at two feet from the socket is one-quarter the strength of the light at one foot.
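To see how quickly an unfocused broadcast fades, here is a small sketch applying that law to the distances in the main text. It is pure geometry; real antennas, frequencies, and receiver sensitivity would change the details.

    # Inverse square law: intensity falls off as 1 / distance^2.
    # Compare a broadcast signal at 1 km from the transmitter with the
    # same signal at 100 light-years.
    KM_PER_LIGHT_YEAR = 9.461e12

    near_km = 1.0
    far_km = 100 * KM_PER_LIGHT_YEAR

    attenuation = (near_km / far_km) ** 2
    print(f"At 100 light-years the signal is {attenuation:.1e} times its strength at 1 km")
    # about 1.1e-30 -- a whisper of a whisper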

Sunday, April 7, 2024

“Good Dog” Management

Grinning dog

Back when I worked in internal communications at the local utility company, I edited a monthly newsletter for managers and supervisors. One of the themes that we promoted was the art of leadership, which I define as “accomplishing objectives through the willing participation of others.” In my view, this is one of the highest art forms to which a person can aspire. It involves setting values, providing sound judgment when necessary, and motivating people. It’s a tricky task for anyone.

And there are a lot of natural human impulses that can poison the atmosphere and make leadership along these lines almost impossible.

For example, it is a natural human impulse not to give up an advantage. If you are working in a position of authority over someone, it can be a natural tendency not to praise their work. Why not? Because then you give up some element of imagined control. If you praise them, then it becomes harder down the line to point out their errors and faults. You have given up some of your authority—you think. And you imagine that if you later need to correct that person, they will in turn say, “But you told me I was doing a good job! Now you’re criticizing me. That’s not fair! That’s not right! Make up your mind!” Hearing this subsequent conversation in your head, you may decide it’s just not worth the hassle to tell people they are doing well—even if they are.

It gets worse. Some people in positions of authority think they can gain advantage by putting their subordinates in what I call the “bad dog” condition. Rather than refraining from pointing out the subordinate’s good actions and positive results, these leaders and managers take every opportunity to find and criticize errors and faults. They think that by keeping their employees in the doghouse and fearing for their jobs, they have increased their own control. And maybe that works with actual dogs, who will tolerate amazing amounts of abuse from someone who puts down their daily food bowl.

With actual people, however, who are capable of thinking and reflection, being put in the “bad dog” position creates resentment. Hostile employees might, given the opportunity, participate in what used to be called a “white mutiny.” That is, they will take advantage of a developing situation to engineer a bad outcome for which they will bear no direct responsibility. They won’t disobey orders or throw their shoes into the gears to sabotage the operation. Instead, they will simply look the other way, play stupid, follow ill-considered orders to the letter, and shrug their shoulders. Oh, well.

And it gets worse. People who are continually criticized, harassed, and micro-managed to their spiritual detriment will eventually give up. It’s not that they hate the organization or wish it ill, they just don’t know what to do, because anything they do turns out to be wrong. Inappropriate criticism saps a person’s motivation. It makes them ineffective. They will do the bare minimum to keep the organization from falling apart, but not much more.

The issue of micro-management is separate but related. The boss—I won’t say “leader” here, because such a person isn’t one—thinks he or she has all the answers. The boss wants to see that only the things he or she can imagine or envision get done, and only in the way, by the methods, and in the timeline that he or she can see. Such a boss doesn’t want the “willing participation of others” so much as the activation of “meat robots.” Micro-management is one step removed from pushing the employee or subordinate aside and saying, “Here, it’s just easier if I do it myself.” The micro-managing boss wants employees to do it exactly like that, except using their own minds and hands under a kind of frenetic, telepathic control.

So, what is the alternative? The true leader sets organizational values and goals, provides fair and rational judgment when a novel question or situation arises, and otherwise motivates people to think, reflect, envision, and act on their own for the good of the organization. This requires a major element of trust in his or her employees or subordinates. The leader must put them in the “good dog” position—always being respectful of the fact that they are not actually dogs or animals. The leader must then have the security in his or her position to step in and tell an individual or group when something has gone wrong or an objective has not been achieved, and then to suggest a better way of doing things. But all the while, the leader has given up nothing by letting people know when things are going well and that they are doing the right things.

Leadership is tricky. The leader is constantly balancing needs and objectives with the sense of what his or her employees and subordinates are perceiving and thinking and how they are likely to react. That’s a tough job. But it’s one of the best jobs and the highest interpersonal endeavor. It’s a true art form.

Sunday, March 31, 2024

Quantum Entanglement and Other States of Mind

Butterfly Nebula

Okay, here is where I show either my great ignorance of science or a glimmer of common sense.

My understanding of quantum mechanics is based on reading various articles in science magazines, reading books about it for the lay reader, and watching The Great Courses lecture series on it. There may be things I don’t understand because I’m not a mathematician, but some of the claims seem to be more about the human mind than about anything that’s going on in the universe.

Take the notion of quantum entanglement. Supposedly, two particles—two photons, say—can become entangled. Typically, this happens when the particles are created together. For example, when a rubidium atom is excited, its decay releases two photons, and they are entangled. Or a photon passing through certain types of nonlinear crystals will split into an entangled pair. They will remain entangled until one or the other interacts with something—that is, generally until it is observed by human interaction. And this entanglement, this connection, will persist across vast distances, and what happens to one of the pair, even at the far end of the galaxy, will be instantly communicated to the other. That is, the lightspeed restriction of special relativity on the transmission of information is ignored. This was the “spooky action at a distance” that Einstein questioned.

Supposedly, if two particles are entangled, they will have complementary but opposite qualities. For example, if one entangled photon has “positive spin,” then the other will have “negative spin.” But according to quantum mechanics, the characteristics of any particle at the quantum scale cannot be determined except by observation. Further, the existence of any particle is not determined—is not fixed in time and space, is not concrete, is not “real” in the world—until it is observed. This includes its exact location in space, its direction of travel, and qualities like its spin state. So, a photon’s spin may not only be either positive or negative; the photon’s spin is both positive and negative—that is, in a quantum superposition of both states—until the photon is observed and its spin is measured.

In another case, if a stream of photons is passed through a two-slit experiment—some going through one slit in a shield, some through the other—their intersecting fields will create an interference pattern, like waves passing through a narrow harbor entrance. This interference yields a series of parallel lines on a screen beyond the shield with the two slits. The interference lines will be heavier in the middle of the series and lighter out at the ends, indicating that most of the photons travel relatively straight through. Still, the result will not be two isolated bands of hits but instead a diffraction scatter.

But according to quantum mechanics, if a single photon is fired at the two-slit experiment, it does not necessarily hit the screen opposite one slit or the other. Instead, it may randomly fall anywhere within that diffraction pattern. The single photon passes through both slits, its field interferes with itself, and it acts as if it is in two places at once, until it is observed hitting the screen in only one place.

In a third case, some experiments with photons—including the famous Michelson-Morley experiment, which was used to disprove the idea that light traveled throughout the universe as a wave in a medium called “luminiferous ether”—employ partially silvered mirrors. These are mirrors that randomly reflect some photons and randomly pass others. If you set up a course of these mirrors, so that some photons take one path and some another, you can place detectors to see how many photons go which way. But interestingly, according to quantum mechanics, if you fire just one photon through the experiment, it will take both courses until it’s detected along one path or another. According to quantum mechanics, the photon’s position is everywhere in the experiment until frozen on one path by the act of detection or observation.

This idea of a particle at quantum scale being everywhere at once—with no fixed position, direction of travel, or defining characteristics until actually observed—is central to the nature of quantum mechanics. The physicists who practice in this field understand that the act of observing a tiny particle—a photon, electron, neutron, and so on, all of which are observed in flight—changes it. That is because you cannot observe anything that small without interfering with it—like hitting the detector screen beyond the slits or bouncing another particle off it in an electron microscope—and either stopping it in its tracks or deflecting it off toward somewhere else. The quantum world is not fixed or knowable until it is known by observation.

This is the example of Schrödinger’s cat. Seal a cat in a box with a vial of poison and a mechanism that breaks the vial when an atomic isotope decays. Until you open the box, the cat is both alive and dead—a superposition of these two states—and the cat’s actual condition is not resolved until you observe it. This is taking the quantum physicist’s belief in “unknowability” to an extreme.

I believe that part of the basis for this mindset is that quantum mechanics is a mathematical system, built on equations based on probabilities. In mathematics, it’s hard to build an equation around a statement that says a value might be one thing or it might be another. Instead, you place a probability function in place of the necessary value. So, in the experiment with Schrödinger’s cat, the cat’s life or death has a probability based on the nature of the isotope and the length of time in the box. If the isotope has a half-life of ten thousand years, and the cat has been in the box ten minutes, there’s a high probability the cat is still alive. If the isotope has a half-life in seconds, like some isotopes of oxygen, then the cat is likely dead. But the probability function is not resolved until the cat is observed.
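As a sketch of how that probability function works, here is the standard decay arithmetic for the two scenarios above: a half-life of ten thousand years versus one of about two minutes (roughly that of oxygen-15; the exact isotope doesn’t change the point).

    # Probability that the isotope has NOT yet decayed -- and so the cat is
    # still alive -- after a given time in the box:
    #     P(alive) = 0.5 ** (time / half_life)
    def p_alive(minutes_in_box, half_life_minutes):
        return 0.5 ** (minutes_in_box / half_life_minutes)

    ten_thousand_years = 10_000 * 365.25 * 24 * 60  # in minutes

    print(f"Half-life 10,000 years, 10 minutes in the box: {p_alive(10, ten_thousand_years):.10f}")
    print(f"Half-life ~2 minutes,  10 minutes in the box: {p_alive(10, 2.0):.3f}")
    # ~0.9999999987 (almost certainly alive) versus ~0.031 (almost certainly dead)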

In the case of two entangled photons, the probability of either one being positive or negative spin is fifty percent, an even coin toss. And, in the mindset of quantum physicists, once the spin of one photon in the pair is established and fixed, the spin of the other is also fixed. The fifty-percent probability function collapses and all is known. The question in my mind is not whether the two photons communicate with each other across the spacetime of the span of a galaxy, but how the observer at one end can communicate the discovered state to the non-observing holder of the photon at the other. If the holder of the passive photon observes it, then yes, he will know its spin state and resolve the probability function to his satisfaction. He will also know instantly that the distant photon has the opposite spin. But he can’t communicate any of this to his partner holding the other photon until his message travels across the lightyears. So, big deal.

Say I cut a Lincoln head penny in half across the president’s nose. One half the coin shows his eyes and forehead; the other shows his mouth and chin. Without looking, I take each half-coin and seal it in an envelope. I give one to my partner, who takes it across the galaxy. If he opens his envelope and sees mouth-and-chin, he knows that I must have eyes-and-forehead. And vice versa. But I won’t know what I have—unless I wait eons for a light-speed signal from him—until I open my own envelope. The penny, existing in a classical, non-quantum world, has an established state whether I look or not. It does not exist in a superposition of both eyes-and-forehead and mouth-and-chin until one of us observes it.
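A minimal simulation of the half-penny analogy makes the point: the split is fixed at the moment the coin is cut, and opening either envelope only reveals what was already there. (This is, of course, a classical stand-in that models my analogy, not the full statistics of real entangled photons.)

    import random

    # The two halves are fixed when the coin is cut and sealed; observation
    # just reveals the pre-existing split.
    HALVES = ("eyes-and-forehead", "mouth-and-chin")

    def cut_and_seal():
        """Randomly decide which half goes in which envelope."""
        mine = random.choice(HALVES)
        partners = HALVES[1] if mine == HALVES[0] else HALVES[0]
        return mine, partners

    mismatches = 0
    for _ in range(100_000):
        mine, partners = cut_and_seal()
        # Opening my envelope lets me infer the partner's half instantly...
        inferred = HALVES[1] if mine == HALVES[0] else HALVES[0]
        # ...and the inference is always right, because the correlation was
        # set when the coin was cut, not when anyone looked.
        if inferred != partners:
            mismatches += 1

    print(f"Wrong inferences out of 100,000 trials: {mismatches}")  # 0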

My point—and I certainly may be misunderstanding the essence of quantum mechanics—is that the concept of superposition, of probability functions, of tiny things being in two or more places, two or more states at once, and going nowhere until observed by human eyes and instruments is a thoroughgoing mindset. It’s a reminder to the quantum physicist that you don’t know until you observe. It says that the whole conjectural world of the very small is just that: conjecture, theory, and a mathematical construct until human instruments intervene to prove a thing is so or not so.

And that’s a good reminder, I guess. But taking it to the extreme of insisting that the cat is neither alive nor dead—even a very tiny cat who makes no noise and is otherwise undetectable—until you open the box … that calls into question the reality of the entire enterprise.