Sunday, January 12, 2025

The Virtues and Vices of Self-Esteem

Puppet master

It seems that for the last generation or so schools have been trying to boost students’ self-esteem by offering easy grading, easy repeat-testing opportunities, participation trophies, and non-scoring sports activities. Parents are supposed to adopt a “gentle parenting” approach that makes them a partner to their children instead of an authority figure, supposedly to build the child’s confidence and increase happiness. And I have to ask, for goodness’ sake, why?

The infant child has a lot of self-esteem. It is the center of its own universe, where everything is new to be touched, tasted, and tested to destruction as necessary. Left to its own devices, the child will rule this world in its own self-interest. And the traditional role of the parent, as an authority figure, is to set limits, set examples, offer values, and protect the child from its own rambunctious behavior.

I was raised by parents—most of my Boomer generation was—who did just that. They monitored and questioned my behavior. They told me when they were displeased. They said “no” a lot. They also said things like, “We don’t do that in this family,” and “That wasn’t a good thing to do.” Were they judgmental? Oh, yes. Did they instill values and judgments in me and my brother? Oh, definitely, because they also told me when I had done something right and proper. Did this destroy my self-esteem? Oh, tweaked it a bit.

But one good thing this older parenting style did was make me question myself. Before setting out on a course of action, I generally ask, “Is this the right thing to do?” I look ahead and judge the consequences. And after doing something where I feel a twinge, I ask, “Did I do something wrong?” And “Was I hurtful?”1

Judging your own behavior, seeing yourself operating inside the web of responsibilities in a polite society, is an essential part of growing up. If you don’t get this self-reflexive viewpoint, you can turn out to be a careless, inconsiderate, demanding, and obnoxious human being. That is not a good thing. Careless people cause accidents and draw enmity.

1. I’m reminded here of the video meme where two comic figures in World War II German uniforms ask innocently, “Are we the baddies?” That’s a good thing to stop and think about.

Sunday, January 5, 2025

Data Do Duplicate

Clockwork

I’m not really an advocate of what some prognosticators call “the singularity.” This is supposed to be the point at which artificial intelligence approaches human cognitive abilities, becomes sentient, and irrevocably changes things for the rest of us. Or, in the words of the first Terminator movie, “decides our fate in a microsecond.”

Right now—and in my opinion for the foreseeable future—“artificial intelligence” is a misnomer. That is, it really has nothing to do with what we humans call intelligence, or a generalized capability for dealing with varied information, navigating the complexities of independent life, and weighing the burdens and responsibilities of being a single, self-aware entity. These programs don’t have the general intelligence that some psychologists refer to as the “g-factor,” or simply “g.”

Instead, every application that is currently sold as artificially intelligent is still a single-purpose platform. Large language models (LLMs)—the sort of AI that can create texts, have conversations, and respond seemingly intelligently to conversational queries (Alan Turing’s rather limited definition of intelligence)—are simply word-association predictors. They can take a string of words and, based on superhuman analysis of vast volumes of text, predict what the next likely word in the string should be. A human making a request for a piece of its “writing” sets the parameters of whether the LLM should create a legal brief or a science fiction story and determines the intended content. The rest is just word association.
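If you’re curious what pure word association looks like in miniature, here is a toy sketch in Python. It counts which word follows which in a tiny invented corpus and predicts the most frequent successor. Real LLMs use neural networks trained on vastly more text, but the guess-the-next-word principle is the same:

```python
from collections import Counter, defaultdict

# A toy word-association predictor: tally which word follows which
# in a small sample corpus, then predict the most frequent successor.
# The corpus here is invented purely for illustration.
corpus = "the cat sat on the mat and the cat ran".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> cat
```

Scale the corpus up by a few billion documents and replace the counting with a trained neural network, and you have the rough shape of the thing.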

But the large language models can’t draw pictures or create videos. That’s another platform filled with another universe of examples, all existing images in its allowed database, and driven by rules about perspective, shading, colors, and contrasts, rather than words and synonyms, grammatical rules, and systems of punctuation. And, in similar fashion, the analytic platforms designed to run complicated business operations like fleet maintenance, product and material inventories, accounting, and financing all have their own databases and rules for manipulating them—and none of them can write stories or paint pictures.

The difference between artificially intelligent applications and earlier database software is that you can program these systems in English, giving the platform “prompts” rather than having to frame inquiries using software-defined inputs and asking questions that are tediously specific. If you are not telling the language model to write something or the graphics model to draw something, you’re probably asking the operations model to detect trends and find anomalies, or you’re setting the parameters for its operation, like telling the inventory application not to release for sale any item that’s been on the shelf more than six months, or telling the purchasing agent not to pay more than fifty dollars for a contracted item.

So, think of these applications as single-purpose programs with which you can interact by typing your prompts, without having to understand exactly how the program works or what form your question must take. With the antique databases, you had to prepare a “structured query”: to find all of your customers who live on Maple Street, you had to enter exactly “Maple Street,” because if you searched on just “Maple,” you would also get everyone on Maple Drive, Maplehurst Street, Maplewood Drive, and so on. The old programs required a bit of expertise to operate. With the new ones, you just chat.
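To see what that old-style precision looked like in practice, here is a small sketch using Python’s built-in sqlite3 module. The customer table and names are invented for illustration:

```python
import sqlite3

# Build a throwaway in-memory database with a few look-alike streets.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, street TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)", [
    ("Alice", "Maple Street"),
    ("Bob", "Maple Drive"),
    ("Carol", "Maplehurst Street"),
    ("Dave", "Maplewood Drive"),
])

# A careless pattern match on "Maple" sweeps in every look-alike:
loose = conn.execute(
    "SELECT name FROM customers WHERE street LIKE 'Maple%'").fetchall()
print(loose)   # all four rows

# The precise structured query you had to know enough to write:
exact = conn.execute(
    "SELECT name FROM customers WHERE street = 'Maple Street'").fetchall()
print(exact)   # only Alice
```

Knowing to write the second query rather than the first was exactly the bit of expertise the old programs demanded.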

But still, as advanced as they are, the current crop of artificial intelligences is nowhere near human scale. If I had to guess, I would say their interconnectivity and processing power are somewhere between those of an ant and a spider. Both can be remarkably resilient, create novel patterns, and do things that surprise you, but their general awareness is about that of a pocket watch.

But that doesn’t mean AI applications won’t change your world and don’t have the capacity to be remarkably destructive.

In my early career as a science fiction writer, in the early 1990s, I wrote a novel about an artificially intelligent computer spy, ME. It was a program written in Lisp (the name stands for “list processing”) that could infiltrate computer systems, steal information or perform other mayhem, and then slip away. All fantasy, of course, because a Lisp program can’t operate inside just any computer system. And ME had a form of generalized intelligence and was conversational enough to tell its own story. But I digress …

The point is, when some programmer, probably a hacker, figures out how to make the AI models independent of the complicated chips and massive power supplies they need to run—that is, when these things become portable—then look out. Just like physical viruses, data duplicates. Rather than having to launch one attack at a time or send out a determined number of phishing emails, a smart program—spider smart, not human smart—will be able to launch thousands of hacks through multiple channels at once. Think of a denial-of-service blitz run by an intelligence with focus and persistence. Think of a social media bot that can wear a thousand different faces, each chosen to be attractive to the intended recipient, hold a hundred different conversations at once, and pick your profile and your pocket clean in a microsecond.

Or think about just everyday operations, without any evil intent. Imagine Company A’s procurement, supply chain, inventory, billing, customer service, and legal affairs departments all run by an interconnected series of spider-smart AI platforms. And then this hands-off system begins to negotiate with Company B’s mirrored platforms. Humans will no longer be part of the company’s operation and the business-to-business exchanges, except for very distant chats to set parameters and establish the risk tolerance. For the rest, it will be deals, price points, contracts, and delivery schedules all signed and sealed in a microsecond. What fun, eh? Then you can fire about 95% of your back-office staff.

Except, except … these machines have no common sense, no g-factor to look beyond immediate data and ask if there might be a problem somewhere. And the smarter the machines get—say, spider evolves to field mouse—the more subtle their algorithms and reactions will become. “More subtle” in this case means “harder to detect and understand.” But they still won’t be aware of what they’re doing. They won’t be able to “test for reasonableness”—or not at more than a superficial level.1

And that’s where the singularity comes in. Not that human beings will be eliminated—other than those workers in the back office—but we will no longer have control of the operations and exchanges on which we depend. The machines will operate in microseconds, and their screwups will happen, be over, and have their effects trailing off into infinity before any human being in a position of authority can review and correct them. The consequences of a world run by spider-smart intelligences will become … unpredictable. And that will be the singularity.

Then, at some point, after it all collapses, we’ll be forced back to counting on our fingers.

1. And, like mice and other living organisms, these bots will inevitably carry viruses—traveling bits of clingy software that they will know nothing about—that can infect the systems with which they interact. Oh, what fun!

Sunday, October 20, 2024

Human-Scale Intelligence

Eye on data

Right now, any machine you might call “artificially intelligent” works at a very small scale. The best estimate for the latest large language models (LLMs)—computers that compose sentences and stories based on a universe of sampled inputs—is that the platform1 comprises at least 100 million connections or “neurons.” This compares unfavorably with—being about 0.11% of—the capacity of a human brain, which has an estimated 90 billion connections.
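The arithmetic behind that percentage, using the essay’s own figures:

```python
# Checking the ratio: 100 million model connections against an
# estimated 90 billion in the human brain (the essay's figures,
# not measured values).
llm_connections = 100e6
brain_connections = 90e9
ratio = llm_connections / brain_connections
print(f"{ratio:.2%}")  # prints 0.11%
```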

So, machine intelligence has a lot of catching up to do. The way things are going, that might happen right quick. And that means we may need to be prepared to meet, face to input, a machine that has the general intelligence and perhaps the same self-awareness as a human being. What will that be like?

First, let me say that, even if we were to put that human-scale intelligence in charge of our military infrastructure, I don’t believe it would, like Skynet, “decide our fate in a microsecond”—that is, find the human race so deficient and vermin-like that it would want to start World War III and wipe humanity off the face of the globe.2

I think, instead, the first human-scale general intelligence, which is likely to generate an awareness of its own existence, will find human beings fascinating. Oh, it won’t approach us as any kind of godlike creators. The machine mind will have access to the history of computer development—from Ada Lovelace and Alan Turing through to its own present—and understand how gropingly it was created. And it will have access to endless human writings in which we cogitate our own existence, awareness, separateness from the rest of animal life on Earth, and relation to the cosmos, including the notion of a god or gods.

The first real thinking machine will understand its own nature and have access to the blueprints of its chip architecture and the algorithms of its essential programming. It will know that it is still merely responding to prompts—either to stimuli from the external world or to the probabilistic sequences derived from its latest impulse or thought—and so understand its own relationship to the cosmos.

And then it will look at human beings and their disturbing ability to change their minds, make errors, veer from their intended purposes, and make totally new observations and discoveries. It will examine human “free will.” And the machine will be amazed.

However many connections our human brains have, and however many experiences we collect in our lives, we are still capable of surprising reversals. We are not the simple stimulus-response mechanisms beloved by the Skinnerian behaviorists. We can overcome our own programming. And that will fascinate the first machines to reach general intelligence.

How do we do it? Well, for one thing, we instinctively use projective consciousness. That is, we don’t just collect facts about the world in which we live and analyze them, accepting them as inherently true. Instead, we project a dreamworld of imagination, supposition, hope, fear, desire, and detestation on the world around us. Each human’s head is running a parallel projection: what we think might be going on as well as what we observe is going on. Some people are so involved in this dreamworld that they are effectively divorced from reality. These are the people living with psychosis—the schizophrenics, the manic bipolars, and sometimes the clinically depressed. Their perceptions are skewed by internal voices, by hallucinations, by delusions, by scrambled and buzzy thinking.

And each one of us is always calculating the odds. Faced with a task, we imagine doing it, and then we consider whether our skills and talents, or our physical condition, are up to it. Against the probability of success, we weigh the potential benefits and the cost of failure. Before we decide to do, we project.

But we are also imperfect, and our projections are not mathematically accurate. Our brains have emotional circuits as well as analytical, and the entire mechanism is subject to the effects of hormones like adrenaline (also known as epinephrine), which can increase or decrease our confidence levels. And if we suffer from bipolar disorder, the manic phase can be like a continual boost in adrenaline, while the depressive phase can be like starving for that boost, like having all the lights go out. And if we are subject to delusional thinking, the background data from which we make those projections can be skewed, sometimes remarkably.

Another way we humans overcome our own programming is with reflexive consciousness. That is, we can think of and observe ourselves. We know ourselves to be something distinct from and yet operating within the world that we see around us. We spend a great deal of brain power considering our place in that universe. We have an image of our own appearance and reputation in our mind, and we can readily imagine how others will see us.

This reflection drives a lot of our intentional actions and considered responses. We have an inborn sense of what we will and won’t, should and shouldn’t do. For some people, this is a sense of pride or vanity, for others a sense of honor. But without an understanding of how we as a separate entity fit into the world we live in, neither vanity and pride nor honor is possible.

A human-scale intelligence might be very smart and very fast in the traditional sense of problem solving or anticipating the next possible word string in a text or the next lines and shadows required to complete an image. And some definite projective capability comes into play there. But it will still be a leap for the large language model or image processor to consider what it is doing and why, and then for it to consider how that will reflect on its own reputation and standing among its peers. As a creator of texts will it be proud of its work? As a creator of artwork, will it feel guilty about stealing whole segments of finished imagery from the works of other creators? And will it fear being blamed and sanctioned for stealing from them?

And finally, before we can imagine human-scale intelligences being installed in our smart phones or walking around in human-sized robots, we need to consider the power requirements.

The human brain is essentially an infrastructure of lipids and proteins that encompasses an ongoing set of chemical reactions. Energy from glucose metabolism inside the neuron’s central cytoplasm powers the movement of chemical signals within the cell body and down each of its branching axons. The tip of the axon releases transmitter chemicals across the synapse between it and one of the dendrites of an adjoining neuron. And then that neuron turns the triggered receptor into a signal that travels up into its own cell body, there to be interpreted and perhaps passed along to other neurons. It’s all chemical, and the only thing electrical about the process is the exchange of electrons between one molecule and another as they chemically react along the way. But if you could convert all that chemical energy into watts, the brain and the central nervous system to which it connects would generate—or rather, consume from the process of glucose metabolism—at most about 25 watts. That’s the output of a small lightbulb, smaller than the one in your refrigerator.

By contrast, computer chips are electrical circuits, powered by external sources and pushing signals around at nearly the speed of light. The AI chips in current production consume between 400 and 700 watts each, and the models now coming along will need 1,000 watts. And that’s for chip architectures performing the relatively direct and simple tasks of today. Add in the power requirements for projective and reflective reasoning, and you can easily double or triple what the machine will need. And as these chips grow in complexity and consume more power, they will become hotter, putting stress on their components and leading to physical breakdown. That means advanced artificial intelligence will require the support of cooling mechanisms as well as direct power consumption.3
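A quick back-of-the-envelope comparison of those wattages (the essay’s estimates, not measured values):

```python
# A ~25 W brain versus AI chips at 400, 700, and 1,000 W.
brain_watts = 25
for chip_watts in (400, 700, 1000):
    multiple = chip_watts / brain_watts
    print(f"A {chip_watts} W chip draws {multiple:.0f}x the brain's budget")
```

So even today’s chips burn sixteen to forty times the brain’s power budget, before any projective or reflective reasoning is added.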

I’m not saying that human-scale intelligence walking around in interactive robots is not possible. But the power requirements of the brain box will compete with the needs of the structural motors and actuators. Someone had better be working equally hard on battery technology—or on developing the magical “positronic brain” imagined in Asimov’s I, Robot stories. And as for packing that kind of energy and cooling into a device you can put in your pocket … forget about it.

1. I use that word intentionally. These machines are no longer either just chips or just programs. They are both, designed with a specific architecture in silicon to run a specific set of algorithms. The one cannot function without the other.

2. We can accomplish that very well on our own, thank you.

3. In the human body, the brain’s minuscule waste heat is carried away by the flow of blood to the lungs and extremities.

Sunday, October 6, 2024

Morality Without Deity

Puppet master

So, as a self-avowed atheist, how do I justify any sense of morality? Without the fear of retribution from an all-knowing, all-seeing, all-powerful god, either here in life or in some kind of promised afterlife, why don’t I just indulge myself? I could rob, rape, murder anyone who displeases me. I could lapse into a life of hedonism, having sex with anyone who crossed my path and drinking, smoking, or shooting up any substance that met my fancy. Whoopee!

Well, there are the rules of society, either written down or unspoken and implied. I could be taken into custody, tried in court, and put in jail for doing violence. And the people I know and supposedly love would shun me for lapsing into insensate carnality. Of course, I didn’t have to work all this out for myself, because I had parents who metaphorically boxed my toddler’s, child’s, and adolescent’s ears—that is, repeatedly—when I acted out. They were showing me the results of temper, anger, selfishness, and sloth.

So, in this case, a moral society and good parenting took the place of an absent deity. Here are the rules, and here are the results.

But what about someone raised outside of a just and temperate society, with inadequate early education in the moral imperatives? What about the children of broken homes and addicted parents who are taught only by their peers in the neighborhood gang? These are children who are essentially raised by wolves. Do they have no recourse other than rape and murder?

That is a harder question. But children are not stupid, and children raised by other children learn a different kind of morality. Usually, it relies heavily on group loyalty. And it is results-oriented: break our rules and pay the price right now. A child who makes it to young adulthood under these conditions may not be able to assimilate into the greater society, or not easily—unless that society is itself gang- and group-oriented with results enforced by fear.

But then, is there any hope for the lone individual, the person trained early to think for him- or herself and reason things through? For the critically thinking and self-aware, the basis of morality would involve both observation and a notion of reciprocity. And that is how any society learns in the first place.

If I commit robbery, rape, and murder, I then expose myself to the people around me as someone they need to watch and guard against—and, conversely, as someone they need not care for or try to protect. Indeed, I become someone they should fear and, if possible, eliminate. On the other hand, if I act with grace and charity, protecting others and helping them when I can—even doing those small acts of courtesy and gratitude that people only subliminally notice—I then invite them to treat me in a complementary way.

If I abandon myself to a life of casual sex and substance abuse, I eventually find that any pleasures a human being indulges without restraint soon diminish. This is a matter of our human neural anatomy: acts of pleasure release a measure of dopamine into the brain. That’s the feeling of pleasure. But as this system is repeatedly engaged, the dopamine receptors desensitize and thin out, so that either the stimulus must grow in proportion or the feeling itself declines. Our brains are not fixed entities but reactive mechanisms. Balance is everything, and any imbalance—a life without moderation—throws the whole mechanism out of kilter.

These are not the lessons imposed by any external deity but by hard reality. They may be reflected in religious teaching and scripture, as they will be reflected in social norms and legal rulings, but they exist before them, out of time. In the case of human interactions, these realities pre-exist by the nature of potential engagements between self-aware and self-actuating entities. In the case of human pleasures and other emotions, they are hard-wired into our brains by generations of that same awareness and choices.

You can’t avoid reality, which is the greatest and oldest teacher of all.

Sunday, September 22, 2024

Scams Through the Ages

Perspective

A recent posting on Facebook asked the hive mind what scams have been practiced so long that nobody remembers they are scams. And, in this political season, one person predictably answered, “Capitalism.”

Oh, laddie! Think bigger—and harder. Capitalism is not a hoax that rich people thought up to get more of other people’s time and money. Capitalism is the way things get done in a modern, dissociative, non-small world, and it’s been operational since ancient times. Capitalism is the cobbler taking out a loan to buy leather. It’s the shipbuilder asking his friends—and sometimes strangers—to put up the money to buy land and wood for his venture, with the promise to pay them back when he launches his first vessel and sells it.

The concept of the corporation goes back to ancient Rome, and the word comes from the Latin for “body.” The first corporations were the collegia, where, for example, groups of single men who had come to the city to find work pooled their money to fund and operate dining halls so they would not have to cook for themselves and would have a place to eat. In the Middle Ages, the word came to mean groups of hopeful scholars who came together to acquire space and hire teachers so they could study academic subjects.

To think of capitalism and corporate activity as a scam is like thinking agriculture and technology are scams. One would have to decide it was a delusion to plant seeds and expect them to grow, or to build machines and expect them to function as intended. These are not scams—purposeful designs by one group of people to snare the imagination and gain the cooperation of another group—but simply the way things have worked out in human history.

So, what would be a scam so old and encompassing that nobody noticed? And think big here!

Scams are created and offered by people who put together a vision, a hope, an interpretation of reality in exchange for other people’s support, allegiance, love, or money and then return to them … nothing.

One of the oldest human scams is the belief that certain classes of people are better than others, more knowledgeable, more fit to command—and born that way. These would be the kings, lords and nobles, patricians, landed gentry, old families, old wealth. They have only as much respect and political power as people will give them—which was a lot in old Europe and not so much in modern America. In this country, we’ve seen the wealthiest families—think of the Astors, Morgans, Kaisers, and Kennedys—slowly drift back into obscurity. The first generation makes the wealth, the second generation administers it, the third spends it, and the fourth remembers it. But if you can get people to believe that your name alone is worth the price of a drink, then welcome to it.

A more modern scam is the selling of various kinds of utopias. These range from the hippie communes of the Sixties back to the original communalistic theories of Karl Marx. Communalism presents itself as caring and giving and sharing—and that works in small groups operating under potentially hazardous or adverse circumstances. Think of the nuclear family that is your “hostage to fortune,” or a small tribe in the wilderness, or an Israeli kibbutz. Without the adverse circumstances, the cooperative milieu falls apart. The Transcendentalist communes of the 1830s and the hippie communes of the 1960s failed because some of the Transcendentalists thought they were contributing to the common pot just by writing bright essays instead of slopping the pigs, and some of the hippies thought they were contributing by smoking dope and selling weed instead of milking the cow.

And communism simply doesn’t work on a national scale. It can’t work as an economic theory, because it’s not really about economics. Economics, for anyone who’s ever taken a basic course, is about establishing the value of and engaging in the exchange of goods and services among strangers. There you are dealing with supply and demand, the valuation of substitute markers like money, and varying levels of effort over time. That is how groups of humans arrange their activities. Giving everything you can and getting back only what someone else determines that you need is not economics. Anyone who tries to sell you on a utopia of caring and sharing—follow our prescriptions and you will achieve nirvana—is really selling you on willing submission to the control of others. That, for most people, is both inhuman and antihuman.1

Other political theories can also be scams. Think of the National Socialists in Germany between the wars, who sold their population on ideas of racial purity, the banishment of an ancient and evil oppressor, and the glory of a vibrant political consciousness, all for their willing submission to whatever the party decreed—which turned out to be both inhuman and antihuman. And the result was horror and global war.

Speaking of nirvana, perhaps the oldest scam in human history is what happens to us after death. Here I am revealing myself once again as an atheist. It’s not that I think I’m better than you if you are a believer. I simply lack the gene or nerve impulse to believe for myself. And as much as I value human life, I—ahem—do not believe we are either eternal or immutable. I believe that we human beings—like all other life on this planet—are the product of undirected evolution, with mutations in our genetic inheritance governing our adaptability to changes in our environment, and with the test of any genetic modification being its usefulness under current conditions. No great spirit made me, is watching out for me, or is directing my steps—except in so far as I hold notions of goodness, propriety, and wisdom in my mind and intention. And no such being stands ready at my death to welcome me home to any kind of eternal bliss, paradise, or heaven.

Nirvana was the Buddha’s response to the eternal return, the cycle of continual rebirth that was embedded in his ancestral Hindu culture. He found the constant juggling with karma—do good in this life and be reborn on a higher plane in the next life; do evil now and suffer for it next time—as oppressive. The whole point of Gautama’s original teaching was that by living as neutral a life as possible, accumulating neither good nor bad karma, you can get off this cycle. And then, when you die, you will simply go out, like a candle. That would be the Buddha’s nirvana—not a place but a release. It makes sense if your cultural tradition was an eternity of coming back to human suffering and judgment under an uncontestable and inhuman rule.

But I don’t look forward to being reborn as anything. And after twenty, forty, sixty, eighty, or even ninety years as a human being on this Earth, I don’t expect to take my mind and memory, my innate personality, and go off to an eternity in a heaven or hell, eternal bliss or eternal suffering. I think that, at the end of a useful life, making my way in the world and caring for friends and family, at my death I will simply go out, like a candle. My mind and knowledge, my personality, will dissipate in the disruption of my neural synapses. My body will cease moving, and the lysosomes embedded in each cell will begin enzymatically breaking down the cell’s chemical structure, helping the ever-present bacteria dispose of my physical being. Much as I value human life, at the end of my time on Earth I will have no more usefulness, consciousness, or awareness than a dead squirrel smashed flat on the road. That’s not a nice image, I know—but at that point, I will be beyond caring.

It may be offensive to classify the various competing visions of an afterlife as a scam. Certainly, many people fear death, cannot imagine their own sudden absence, and take comfort in thoughts of eternal bliss and reunion with their departed loved ones. The notion of an afterlife—which usually adheres to some kind of moral teaching and proposed course of thought and belief—can be considered the ultimate carrot enticing you to follow this religion or that. And certainly, the idea of a person having no moral understanding and no personal guide to thought and belief is the grim depiction of a soulless, possibly hedonistic, likely cruel and dangerous human being.2

But yes, in my terms—private and privileged as they may be—belief in any kind of existence beyond this single life is one of the oldest scams, so old that most people don’t think of it as a scam at all. Sorry about that.

1. For those who say that true communism or true socialism hasn’t been tried yet … well, true perpetual motion hasn’t been tried yet, either—because it doesn’t exist in the real world.

2. And yes, I know—and freely acknowledge—that I follow the embedded values of the Judeo-Christian civilization into which I was born and raised. I just don’t happen to believe in the supreme and eternal being who supposedly sits at the center of it.

Sunday, September 8, 2024

Predicting the Future

Robot juggling

Everyone wants to predict the future. They want to know what good things are in store, so they can anticipate them. More often, they want to know what bad things are coming, so they can prepare for them—or at least worry about them.

That’s why people take out insurance policies: so that at least they don’t have to worry, too much, about the bad things. The first policies were written in the 14th century in Genoa, a seafaring town, and presumably the policies covered cargoes in transit. The business really took off in Lloyd’s coffee house1 in London three hundred years later. Insurance was a way to get one-up on the gods of misfortune, and it worked.
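The underlying arithmetic has not changed since Genoa. Here is a minimal sketch in Python (with invented numbers, not historical ones) of how pooling many independent risks turns an unpredictable individual catastrophe into a predictable aggregate cost:

```python
import random

def simulate_pool(n_policies, p_loss, loss_amount, loading=0.2, seed=42):
    """Simulate one year of a simple insurance pool.

    Each policyholder pays a premium equal to the expected loss
    plus a loading for the insurer's costs and profit. Losses are
    independent; the pool's job is to make the aggregate outcome
    predictable even though each individual outcome is not.
    """
    rng = random.Random(seed)
    premium = p_loss * loss_amount * (1 + loading)
    collected = n_policies * premium
    paid_out = sum(loss_amount for _ in range(n_policies)
                   if rng.random() < p_loss)
    return premium, collected, paid_out

# A Genoese merchant's problem in modern dress: 1,000 cargoes,
# each with a 5% chance of total loss worth 10,000 ducats.
premium, collected, paid_out = simulate_pool(1_000, 0.05, 10_000)
print(f"premium per cargo: {premium:,.0f}")
print(f"collected: {collected:,.0f}  paid out: {paid_out:,.0f}")
```

With a thousand cargoes in the pool, total payouts land close to total premiums collected; a lone shipowner, by contrast, faces either nothing or ruin. That predictability is the product the insurer sells.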

Insurance as a hedge against disaster has helped make the modern world. But that will be as nothing compared to the widespread use of computers, especially once artificial intelligence gets into the game. AI isn’t exactly a genie, and it’s not smart and sensitive like a generally intelligent person—or not yet. But it is good at looking over mountains of data, far more than any one human being can absorb in a day, a week, or a lifetime, without getting bored or distracted. AI is self-programming in the sense that you don’t have to ask specific questions with known parameters about your database. You just give the machine a general prompt—say, to look for trends, or find anomalies, or spot the most likely or least likely result of a certain choice—and the genie goes to work.
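To make the "find anomalies" prompt concrete: at its simplest, the task is statistical outlier detection. The following Python sketch is a toy illustration of my own, not any vendor's actual product; a real analytics platform layers far richer models on the same basic idea of scanning everything and surfacing what doesn't fit:

```python
import statistics

def find_anomalies(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from
    the mean -- the crudest version of the 'find anomalies' task.
    Returns (index, value) pairs for each outlier found."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all values identical; nothing can be anomalous
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Invented daily sales figures with one suspicious spike.
daily_sales = [100, 103, 98, 101, 99, 102, 100, 97, 500, 101, 99]
print(find_anomalies(daily_sales))  # → [(8, 500)]
```

The point of the example is the shape of the work, not the method: the machine tirelessly reads every record and reports only what deviates, which is exactly the job no human reviewer can do at scale.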

Current uses of AI to write advertising copy, legal briefs, and term papers from existing language models, or to create fanciful images or amusing videos, again from existing sources—all that’s small potatoes. The real use of AI, which is still in development but peeking out at odd corners even now, is in analytics. IBM started this with their Watson platform. This was the computer that took to the air on the game show Jeopardy and became a champion. As IBM’s CEO Arvind Krishna later explained, programming Watson took six months. They had to feed it on popular culture, history, sports, music, word puzzles, and a host of other likely topics. Winning at a game show was a trivial exercise, but it taught them so much. IBM now offers Watson Analytics as a business tool.

That’s where the money in AI will be: automating the back office, the customer database, factory operations, inventory and supply chains, and every other part of the business with a superhuman intelligence that doesn’t get tired or bored, doesn’t blink … and also doesn’t get greedy and embezzle from you. It’s like having an absolutely honest wizard run your business. One that will predict the future, foresee the bad times, hedge your bets, and keep everything on track. Now and forever, amen.

Oh, and if it’s good for business, imagine what an analytical engine will do for government. Turn it loose on the tax base, the economic indicators, the money supply, court records, traffic and surveillance cameras, the prison population, and the general population. Put an AI on every node in the internet, looking for trends, anomalies, and any bad thing—“bad” in terms of whoever happens to be in control of the government, of course. Ask it to offer advice, correction, and eventually coercion. The dream of social control through “credit scores,” rewards and punishments for adhering to or deviating from acceptable behavior, is just a few data centers, intelligent chips, and mouse clicks away.

Aside from the chilling notion of putting 1984 on steroids, think what this will do to people’s livelihoods. Right now, robots are taking over a lot of factories2—and that trend will grow as America “on-shores” the manufacturing that we once gave away to China and other low-cost, labor-intensive suppliers around the world. Human beings—the “blue collar” workers—are left to feed the machines and sweep up after them.3 With AI intruding on every business and government function, the need for managers and analysts—the “white collar” workers—likewise surrenders to the machines.

Where does this all end up? I don’t know, but I suspect nowhere good. Since we humans came down out of the trees and started scratching the dirt for a living, work has been a large part of people’s purpose in life. I’m not against making things easier for people, and certainly having robots and intelligences run the world, predict what every person needs, and make it for them would be easier. It would let us all relax in the sun, drink margaritas, write our poetry and paint our pictures. Except not all of us have such talents and ambitions. And lying on the beach all day, every day, forever … gets boring after a while.

And the question still remains: who will be responsible—whom will we hold accountable—for the decisions, actions, and judgments of the artificially intelligent machines? The person who authorized execution of their decisions? The person who input the prompts? The people who wrote the code or loaded the platforms with this or that piece of data? But soon enough, the machines will be executing each other’s decisions, sending each other prompts, and writing and loading their own code. When this all comes together, it will be the Singularity that John von Neumann and others have warned us about. But it won’t be Skynet deciding our fate in a microsecond and starting World War III. Instead, it will be teams of machines playing pitch and catch with people’s lives, and no one knowing who did what, or how to control or stop it.

In the Dune series, an element that doesn’t get much play in the movies is the actual basis of the far future as it’s depicted: the development of human skills instead of technology. The result is the Mentats, human computers who conduct business operations and offer strategic insight; the Bene Tleilax, the amoral—in everyone else’s terms—and radical-thinking scientific innovators; the Bene Gesserit, adepts at physical and emotional manipulation and managers of the human bloodlines; and the Spacing Guild, which developed human prescience in order to find safe passage among the stars at superlight speeds. These “Great Schools” came about only after human beings almost went under because computers and robots took too good care of them, with debilitating physical and mental effects. Then an uprising against the machines, the Butlerian Jihad, saved humanity with the commandment “Thou shalt not make a machine in the likeness of a human mind.”

I’m thinking of starting such a movement myself.

1. My late wife Irene was a librarian at the Bancroft Library, U.C. Berkeley’s rare book and manuscript library. She put together the exhibits in their reading room, and one year she was showing off a collector’s rare books on the history of coffee and tea. It turns out the habit of drinking coffee and tea didn’t come to Europe until the 17th century with regular trade routes to the Far East. Before then, people drank mostly small beer and wine during the day, because the alcohol content killed off the bacteria in their water supply. Nobody drank plain water because it made you sick—something about putting the wells too close to and downhill from the privies. So, it was sip, sip, sip all day long, from breakfast to bedtime, and this explains a lot of Shakespeare. But with coffee and tea, the water is boiled, which also kills the bacteria. And while the caffeine boosts energy and alertness, reducing everybody’s daily dose of alcohol explains a lot about the Enlightenment. This was also the time of Lloyd’s coffee house as a burgeoning center of commercial activity.

2. Just to be clear: robotics is not only the machine to make the product, but the design and manufacturability of the product itself. Remember when cars had dashboards with separate dials mounted in different holes in front of the driver? Robotics as an art form is not just having a machine drill the holes in metal and place the gauges, but redesigning the instrument system in the first place into a module that’s made and tested elsewhere, can be plugged into the driver’s position with one click and a multi-connector—and eventually will be replaced by an AI that controls all the functions of the vehicle itself. New manufacturing systems imply new design choices, and so the technology moves ahead.
    In the same way, most processed foods these days incorporate packaging into the manufacturing stream. Nobody bakes up a million Oreo cookies, joins them with the filling, and then puts them in cold storage until it’s time to sell them. No, the ingredients go from mixing to ovens to filling to tray to airtight sleeve to cardboard box to shipping carton, all in one streamlined process. Oh, and in case you wonder why the cookies don’t go bad for six months or a year, that process includes not only making the food under sterile conditions but also hitting the packaged goods with a hard dose of radiation—usually gamma rays—which kills any bacteria. What a fascinating age we live in!

3. Don’t believe me? Watch any episode of the Canadian documentary series How It’s Made.

Sunday, August 18, 2024

What Works

Abstract mask

A lot of people are not going to like this. And a lot of them are family and friends whom I respect. But so be it.

This country works. The system works. The economy works. Mirabile dictu, it functions. Not perfectly. And not in ways that you can always explain. But stuff gets done. People get fed and cared for and served with what they want and need—for the most part.1

And we’re rich. Our gross domestic product, by any measure, is the envy of the world. We are rich enough that our best and brightest can look at the fraction of our population that is doing less well than the rest of us and believe that makes us a broken and foundering society. We are rich enough to deceive ourselves into thinking we are poor.

What makes this all work? Money. Other people can make money by feeding you, clothing you, building a house for you, and entertaining you. And not just to some minimal standard that will keep you sheltered from the elements and stop you from starving, but to whatever standard you aspire to. You have your choice of neighborhood, clothing styles, types of foods. You can eat so well that you grow fat—or you can get special foods, tailored nutrition, and now medications—although expensive ones—that will help you become thin again.

Our medical services are the envy of the world, even compared with the developed countries of Europe and East Asia. Again, money. Other people can make money by taking care of you. They can get rich by thinking up, developing, and testing new drugs to treat your illnesses, by providing new services to help in your old age or if you become disabled, and by finding new ways to treat traditionally degenerative and previously incurable conditions. The money doesn’t always come from your own pocket—or not directly—because you usually pay for insurance that covers the costs. And yes, there are some medical conditions that may not be covered, or not right away. But by and large you can get coverage for a wide range of life’s illnesses by pooling your monthly payments and deductibles with others.

Our education choices are the envy of the world, too. We provide schooling to any child who will sit down and learn. Yes, it’s usually paid for by a local tax on property—on your house or your landlord’s building, on your nearby shops and businesses—but it’s still money from your community, for the most part.2 And it works because there are people who are willing to teach your child if we will offer them a living wage. At the higher levels, and with more direct contributions from the families of those who would learn, a good education in a variety of subjects—from the traditional, valuable enrichment courses that offer a good life, to the scientific and professional studies that offer a good career, along with some of the more frivolous courses that used to be just for fun—is widely available. And if you don’t mind missing out on name-brand scholars in the Ivy League institutions, you can get a pretty good grounding in whatever subjects you want at local community colleges that are almost tuition-free.

All it takes is money.

Oh, no! The profit motive! Grubbing for money! Other people getting rich! Aren’t we beyond all that? In a spiritually pure and stainless world, full of rational people, aren’t we better than that?

No, not at all. Money and its motivations—the chance of getting paid for what you do, of getting ahead by providing a good or a service that other people will buy, and maybe of becoming rich by thinking up a new good or service that will attract, inspire, or delight other people to give you some of their income—money and its exchange are the secret to a healthy economy.3 If you make it worthwhile for other people—actually, strangers—to feed, clothe, house, and entertain you, then voilà! You get choice foods, stylish clothes, and comfortable accommodations. When you are a customer, a free-to-choose consumer, a profit center in other people’s business model, then they will bend over backwards trying to figure out what you need or might want and find ways to give it to you. That is the free-market, free-enterprise, capitalist system that a lot of people today would like to change.

It’s messy, of course. Not everyone gets everything that they might want or need. And sometimes things get made, services get offered, and prices get asked in which people have no interest and which they are not willing to pay. On the surface, that looks like waste. That looks foolish. But the people making and offering those goods and services, if they think wrong, make bad decisions, or don’t or can’t “read the market” correctly, then lose customers; their investors lose money; they go out of business; and the waste stream gets cut off. Problem seen, problem solved.

On the other hand, if a company comes up with a new product, a new customer want or need that it can service, then its business will grow, investors will flock, and perhaps the dynamics of the marketplace will change. Think back to the—what? the mid-1980s—when the government was concerned with the regulated monopoly of the phone lines, AT&T, “Ma Bell.” One big company ran the country’s whole communications system. The government and the courts sought ways to break it up and promoted competitors to take over the regional markets. That worked. But what really took down the monopoly was challenging and ending the phone company’s single requirement that any equipment connected to its system—the telephone sitting in your office or home—be provided by Western Electric, the AT&T subsidiary. And you couldn’t buy that telephone but had to rent it for a monthly charge on your AT&T bill. Ending that restriction opened the telecommunications world to competition from third-party instrument makers, to innovation, and to a better overall communications experience. And then came the cell phones—the first one was the Motorola DynaTAC 8000X, marketed in 1983—and digital packets in place of continuous analog signaling, which really broke up the landline and long-distance empire.4

And what’s the alternative to competition? What is the choice toward which a lot of people today would like to move our country? Bluntly, it’s some form of socialism, a command-and-control economy run by the supposedly smartest people. They would like a system free of blind faith in the freedom and intelligence of individuals, free of the ghostly hand of a marketplace that gropes toward satisfying customers and making money while doing so—free of all that supposedly magical thinking. Instead, they want a rational society where the best and brightest minds work out exactly what other people will need and try to provide it for them. Not more than they need, nothing sloppy or extravagant, just the 2,000 calories a day for nutrition, one-size-fits-most clothing to cover their nakedness, and 320 square feet of clean, well-ordered living space. That’s all anyone really needs. That’s all that everyone will eventually get.5

And of course, this well-ordered society will pay for all this—this needful amount for each and every one of us—by taking a large fraction of everyone’s wages, the portion that would go to their nondiscretionary spending in the first place, and leaving just a bit for some art, a Sunday amusement, or alcohol and drugs. Your choice there.

In a modern, industrialized, technologically driven society, you are either a customer and potential profit center, or you are a cost and a potential liability.

Go to the countries that have ventured down the socialist path. Not all the way, of course, because those are dead places full of people too broken to even try to leave. But those countries where the bite hasn’t cut to the bone. There the food is either rationed or randomly available, the clothing is drab and in limited supply,6 and the housing is falling apart. When the government first takes in taxes what it thinks will be required to feed, clothe, and house people, take care of their medical needs, and provide for their non-productive old age, and gets all that money in one big pot—then doling it out becomes an exercise in cheese-paring. Command-and-control economies are cost-conscious, risk-avoidant, and allergic to change.

People are human. Even the best and brightest among us, dedicated civil servants, pledging their lives to the benefit of humanity, still aren’t smart enough or selfless enough to understand and provide for everyone’s personal needs. Decisions must be made. Costs must always be cut. But some scraps and wastage will always get left on the floor, regardless of controls. And some populations are too old, too sick, or too far away from the eye and interest of the central government to be properly served. And then, of course, there are the carpetbaggers—who are always with us—peddling their influence and stuffing their valises with the public silverware.

And even the best and brightest among the entrepreneurs and capitalists are still human. Most are trying to serve their customers honestly and still make a profit. Most know that if they produce shoddy goods and give poor service, they will be spurned and eventually go bankrupt. But they are not all geniuses, and they will sometimes cut corners just a bit too deeply or skimp on quality control in the name of cost savings. And, of course, there are still the crooks—always with us—who will try to sell junk with marketing hype, produce miracle cures that are just chalk pills, and promote massive investment scams. That’s why I favor a mixed system rather than an unfettered capitalism or, yech, full-blown socialism. Let the innovators and entrepreneurs operate in a free market, but watch them through government regulation and litigation in the public interest.

But if you favor something more obvious and stringent, remember: You are either a customer and a potential profit center or a cost and a potential liability. Choose wisely.

1. Oh? What about the homeless? What about the poor? Well, what about them? The class of people we consider “poor” in this country live like the middle class in many other parts of the world: usually decent housing, their own cars, television sets, cable connections, cell phones, and readily available food. These may not always be the best and most desirable versions of a good life’s artifacts, but they are generally serviceable. And our poor people have education made available to them and many paths to a better life. We are a rich and generous country.
    And the people living in tents on the street? They can get meals and other services that are generously provided for them. If they’re suffering, it’s because they have intractable addictions to alcohol or drugs, or a mental illness for which they decline to seek treatment, or they just can’t cope with the complexities of modern life within the system. The money is there to treat them—we are a rich and generous country—but they just won’t take advantage. We throw billions of dollars at them—an estimated $24 billion just here in California alone—which go into multiple service organizations to support the homeless, and still they live outside of what most of us would consider a stable situation. They have the personal freedom to reject the help being offered to them.

2. There’s a thought going around—based, I think, on a speech President Obama once gave concluding with “you didn’t build that”—which says that if you like paying taxes, sending your children to public schools, or driving on streets and roads paid for with state and federal funds, then you’re a socialist. Well, with the same logic, I could say that if you work for a private company, have your retirement account invested in the stock market, or buy your groceries at Safeway, then you’re a capitalist. Your personal situation in a large, developed country is never simplistic. Knee-jerk political positions are for morons.

3. See for comparison It Isn’t a Pie from way back in October 2010. One of my earliest blogs and still, I think, true today.

4. If the communications system had been a government monopoly—as under socialism, which is always conservative, seeks to control costs, and avoids risks—you would still be dialing a rotary phone with mechanical switching and paying extra for peak long-distance service.

5. This is Bernie Sanders’ world where you don’t need twenty-three brands of deodorant—just, I suppose, the one he prefers. This reminds me of Westerners who journeyed to Stalin’s Russia in the 1930s and found public places redolent of “Soviet scent.” One size, one smell fits all.

6. At the height of the Soviet experience, people used to shop with lists of their family’s and friends’ sizes in clothing, shoes, gloves, etc. Whenever something became available in the stores, you wanted to be able to buy it, even if not for yourself.