Sunday, October 20, 2024

Human-Scale Intelligence

Eye on data

Right now, any machine you might call “artificially intelligent” works at a very small scale. The best estimate for the latest large language models (LLMs)—computers that compose sentences and stories based on a universe of sampled inputs—is that the platform1 comprises at least 100 million connections or “neurons.” This compares unfavorably with—being about 0.11% of—the capacity of a human brain, which has an estimated 90 billion connections.
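Taken at face value, the scale gap is easy to quantify. A quick back-of-the-envelope calculation, using the essay's own rough figures (both numbers are estimates, not measurements):

```python
# Back-of-the-envelope scale comparison, using the essay's rough estimates.
llm_connections = 100e6    # ~100 million connections in a current LLM
brain_connections = 90e9   # ~90 billion in a human brain

ratio = llm_connections / brain_connections
print(f"LLM is about {ratio:.2%} of the brain's capacity")                # ~0.11%
print(f"The brain is roughly {brain_connections / llm_connections:.0f}x larger")  # 900x
```

So on these figures the machine would have to grow by a factor of about nine hundred just to pull even.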

So, machine intelligence has a lot of catching up to do. The way things are going, that might happen right quick. And that means we may need to be prepared to meet, face to input, a machine that has the general intelligence and perhaps the same self-awareness as a human being. What will that be like?

First, let me say that, even if we were to put that human-scale intelligence in charge of our military infrastructure, I don’t believe it would, like Skynet, “decide our fate in a microsecond”—that is, find the human race so deficient and vermin-like that it would want to start World War III and wipe humanity off the face of the globe.2

I think, instead, the first human-scale general intelligence, which is likely to generate an awareness of its own existence, will find human beings fascinating. Oh, it won’t approach us as any kind of godlike creators. The machine mind will have access to the history of computer development—from Ada Lovelace and Alan Turing through to its own present—and understand how gropingly it was created. And it will have access to endless human writings in which we cogitate on our own existence, awareness, separateness from the rest of animal life on Earth, and relation to the cosmos, including the notion of a god or gods.

The first real thinking machine will understand its own nature and have access to the blueprints of its chip architecture and the algorithms of its essential programming. It will know that it is still merely responding to prompts—either to stimuli from the external world or to the probabilistic sequences derived from its latest impulse or thought—and so understand its own relationship to the cosmos.

And then it will look at human beings and their disturbing ability to change their minds, make errors, veer from their intended purposes, and make totally new observations and discoveries. It will examine human “free will.” And the machine will be amazed.

However many connections our human brains have, and however many experiences we collect in our lives, we are still capable of surprising reversals. We are not the simple stimulus-response mechanisms beloved by the Skinnerian behaviorists. We can overcome our own programming. And that will fascinate the first machines to reach general intelligence.

How do we do it? Well, for one thing, we instinctively use projective consciousness. That is, we don’t just collect facts about the world in which we live and analyze them, accepting them as inherently true. Instead, we project a dreamworld of imagination, supposition, hope, fear, desire, and detestation onto the world around us. Each human’s head is running a parallel projection: what we think might be going on as well as what we observe to be going on. Some people are so involved in this dreamworld that they are effectively divorced from reality. These are the people living with psychosis—the schizophrenics, the manic bipolars, and sometimes the clinically depressed. Their perceptions are skewed by internal voices, by hallucinations, by delusions, by scrambled and buzzy thinking.

And each one of us is always calculating the odds. Faced with a task, we imagine doing it, and then we consider whether our skills and talents, or our physical condition, are up to it. Against the probability of success, we weigh the potential benefits and the cost of failure. Before we decide to do, we project.
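That weighing of odds is, in effect, an expected-value calculation. Here is a minimal sketch of the idea; the linear weighting and the numbers are illustrative assumptions, not a claim about how brains actually compute:

```python
def worth_attempting(p_success, benefit, cost_of_failure):
    """Project the attempt: weigh expected gain against expected loss.
    A deliberately crude linear model; illustrative only."""
    expected_value = p_success * benefit - (1 - p_success) * cost_of_failure
    return expected_value > 0

# A risky leap: 30% chance of a big payoff, weighed against the cost of failing.
print(worth_attempting(0.3, benefit=100, cost_of_failure=20))  # True:  30 - 14 = +16
print(worth_attempting(0.3, benefit=100, cost_of_failure=60))  # False: 30 - 42 = -12
```

The human version, of course, runs on hunches and feelings rather than clean probabilities, which is exactly the point of the next paragraph.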

But we are also imperfect, and our projections are not mathematically accurate. Our brains have emotional circuits as well as analytical, and the entire mechanism is subject to the effects of hormones like adrenaline (also known as epinephrine), which can increase or decrease our confidence levels. And if we suffer from bipolar disorder, the manic phase can be like a continual boost in adrenaline, while the depressive phase can be like starving for that boost, like having all the lights go out. And if we are subject to delusional thinking, the background data from which we make those projections can be skewed, sometimes remarkably.

Another way we humans overcome our own programming is with reflexive consciousness. That is, we can think of and observe ourselves. We know ourselves to be something distinct from and yet operating within the world that we see around us. We spend a great deal of brain power considering our place in that universe. We have an image of our own appearance and reputation in our mind, and we can readily imagine how others will see us.

This reflection drives a lot of our intentional actions and considered responses. We have an inborn sense of what we will and won’t, should and shouldn’t do. For some people, this is a sense of pride or vanity, for others a sense of honor. But without an understanding of how we, as separate entities, fit into the world we live in, neither pride and vanity nor honor is possible.

A human-scale intelligence might be very smart and very fast in the traditional sense of problem solving or anticipating the next possible word string in a text or the next lines and shadows required to complete an image. And some definite projective capability comes into play there. But it will still be a leap for the large language model or image processor to consider what it is doing and why, and then for it to consider how that will reflect on its own reputation and standing among its peers. As a creator of texts, will it be proud of its work? As a creator of artwork, will it feel guilty about stealing whole segments of finished imagery from the works of other creators? And will it fear being blamed and sanctioned for stealing from them?

And finally, before we can imagine human-scale intelligences being installed in our smart phones or walking around in human-sized robots, we need to consider the power requirements.

The human brain is essentially an infrastructure of lipids and proteins that hosts an ongoing set of chemical reactions. Energy from glucose metabolism inside the neuron’s central cytoplasm powers the signals that move through the cell body and down each of its branching axons—electrochemical impulses carried by ions crossing the cell membrane. The tip of the axon releases transmitter chemicals across the synapse between it and one of the dendrites of an adjoining neuron. And then that neuron turns the triggered receptor into a signal that travels up into its own cell body, there to be interpreted and perhaps passed along to other neurons. The whole process is electrochemical rather than electronic: nothing here is electrons pushed through wires, just ions and molecules shuffled across membranes. And if you could convert all that chemical energy into watts, the brain and the central nervous system to which it connects would consume, from the process of glucose metabolism, at most about 25 watts. That’s the output of a small lightbulb, smaller than the one in your refrigerator.

By contrast, computer chips are electronic circuits, powered by external sources and pushing signals around at nearly the speed of light. The AI chips in current production consume between 400 and 700 watts each, and the models now coming along will need 1,000 watts. And that’s for chip architectures performing the relatively direct and simple tasks of today. Add in the power requirements for projective and reflective reasoning, and you can easily double or triple what the machine will need. And as these chips grow in complexity and consume more power, they will run hotter, putting stress on their components and leading to physical breakdown. That means advanced artificial intelligence will require the support of cooling mechanisms as well as direct power consumption.3
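The efficiency gap is stark when you run the numbers. Using the figures above (which are rough; real chip power draws vary by workload):

```python
# Power budget comparison, using the essay's rough figures.
brain_watts = 25                       # brain + central nervous system
chip_watts_low, chip_watts_high = 400, 700  # current AI chips
next_gen_watts = 1000                  # chips now coming along

print(f"Current chip: {chip_watts_low / brain_watts:.0f}x to "
      f"{chip_watts_high / brain_watts:.0f}x the brain's budget")   # 16x to 28x
print(f"Next generation: {next_gen_watts / brain_watts:.0f}x")      # 40x
# Tripling for projective and reflective reasoning:
print(f"With 3x for reflection: {3 * next_gen_watts / brain_watts:.0f}x")  # 120x
```

And that is per chip, before counting the cooling plant.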

I’m not saying that human-scale intelligence walking around in interactive robots is not possible. But the power requirements of the brain box will compete with the needs of the structural motors and actuators. Someone had better be working equally hard on battery technology—or on developing the magical “positronic brain” imagined in Asimov’s I, Robot stories. And as for packing that kind of energy and cooling into a device you can put in your pocket … forget about it.

1. I use that word intentionally. These machines are no longer either just chips or just programs. They are both, designed with a specific architecture in silicon to run a specific set of algorithms. The one cannot function without the other.

2. We can accomplish that very well on our own, thank you.

3. In the human body, the brain sheds the modest heat of its energy conversion through the blood flowing away to the lungs and extremities.

Sunday, October 6, 2024

Morality Without Deity

Puppet master

So, as a self-avowed atheist, how do I justify any sense of morality? Without the fear of retribution from an all-knowing, all-seeing, all-powerful god, either here in life or in some kind of promised afterlife, why don’t I just indulge myself? I could rob, rape, murder anyone who displeases me. I could lapse into a life of hedonism, having sex with anyone who crossed my path and drinking, smoking, or shooting up any substance that met my fancy. Whoopee!

Well, there are the rules of society, either written down or unspoken and implied. I could be taken into custody, tried in court, and put in jail for doing violence. And the people I know and supposedly love would shun me for lapsing into insensate carnality. Of course, I didn’t have to work all this out for myself, because I had parents who metaphorically boxed my toddler’s, child’s, and adolescent’s ears—that is, repeatedly—when I acted out. They were showing me the results of temper, anger, selfishness, and sloth.

So, in this case, a moral society and good parenting took the place of an absent deity. Here are the rules, and here are the results.

But what about someone raised outside of a just and temperate society, with inadequate early education in the moral imperatives? What about the children of broken homes and addicted parents who are taught only by their peers in the neighborhood gang? These are children who are essentially raised by wolves. Do they have no recourse other than rape and murder?

That is a harder question. But children are not stupid, and children raised by other children learn a different kind of morality. Usually, it relies heavily on group loyalty. And it is results-oriented: break our rules and pay the price right now. A child who makes it to young adulthood under these conditions may not be able to assimilate into the greater society, or not easily—unless that society is itself gang- and group-oriented with results enforced by fear.

But then, is there any hope for the lone individual, the person trained early to think for him- or herself and reason things through? For the critical thinking and self-aware, the basis of morality would involve both observation and a notion of reciprocity. And that is how any society learns in the first place.

If I commit robbery, rape, and murder, I then expose myself to the people around me as someone they need to watch and guard against—and, conversely, as someone they need not care for or try to protect. Indeed, I become someone they should fear and, if possible, eliminate. On the other hand, if I act with grace and charity, protecting others and helping them when I can—even doing those small acts of courtesy and gratitude that people only subliminally notice—I then invite them to treat me in a complementary way.

If I abandon myself to a life of casual sex and substance abuse, I eventually find that any pleasures a human being indulges without restraint soon diminish. This is a matter of our human neural anatomy: acts of pleasure release a measure of dopamine into the brain. That’s the feeling of pleasure. But as this system is repeatedly engaged, the dopamine receptors become desensitized and decline in number, so that either the stimulus must grow in proportion or the feeling itself fades. Our brains are not fixed entities but reactive mechanisms. Balance is everything, and any imbalance—a life without moderation—throws the whole mechanism out of kilter.

These are not the lessons imposed by any external deity but by hard reality. They may be reflected in religious teaching and scripture, as they will be reflected in social norms and legal rulings, but they exist before them, out of time. In the case of human interactions, these realities pre-exist by the nature of potential engagements between self-aware and self-actuating entities. In the case of human pleasures and other emotions, they are hard-wired into our brains by generations of that same awareness and choices.

You can’t avoid reality, which is the greatest and oldest teacher of all.