Sunday, February 9, 2025

Excitation and Regulation

Robot head

What does the human brain have that the current crop of artificially intelligent platforms doesn’t? In a word: hormones. Everything we think, say, and do is moderated and modulated by a background system of excitation and release of chemicals that provide a response in either pleasure or pain, faith or doubt, up or down.

Here are some of the main chemicals:

Serotonin, or 5-hydroxytryptamine (5-HT), is a monoamine transmitter that also acts as a hormone. It carries messages between neurons in the brain and throughout the body’s peripheral nervous system, especially in the cells lining the intestinal tract. It influences learning, memory, and happiness, as well as regulating body temperature, sleep, sexual behavior, hunger, and in some cases nausea. It regulates mood and mental focus, and its lack can lead to depression and anxiety. (Cleveland Clinic)

Dopamine (C8H11NO2) is a neurotransmitter that is also released in the brain and travels through the body. Certain neural circuits or pathways in the brain trigger the release of dopamine, which gives the sensation of pleasure and functions as a reward system. Like serotonin, it affects mood, memory, attention, learning, and motivation, as well as sleep, kidney function, and lactation. Dopamine is more associated with reward and motivation than happiness and mood. (WebMD)

Adrenaline, or epinephrine (C9H13NO3), is a hormone released from the adrenal glands, which sit atop your kidneys. During situations perceived as stressful, the hypothalamus in the brain sends chemical signals to the pituitary, also in the brain, which releases hormones into the bloodstream, triggering the adrenal glands. Adrenaline increases your heart rate, blood pressure, and blood sugar, and opens your airways—all to give your brain and body more energy and oxygen to deal with the situation, which can be exciting or dangerous. (Health Direct-Australia)

GABA, or gamma-aminobutyric acid, is a neurotransmitter that blocks nerve impulses in the brain, which has a calming or relaxing effect on the nervous system. It can be taken as a supplement to improve mood and sleep, relieve anxiety, lower blood pressure, help with premenstrual syndrome, and treat attention deficit hyperactivity disorder (ADHD). (WebMD)

Endorphins, which come in more than 20 different varieties, are also released by the hypothalamus and pituitary, usually when the body experiences pain, physical stress, or exertion. They are a form of opioid that the body produces internally to relieve pain and improve mood. (Cleveland Clinic)

Oxytocin (C43H66N12O12S2), sometimes called the “love hormone,” is also produced in the hypothalamus and released by the pituitary gland. Aside from managing aspects of the female and male reproductive systems, oxytocin is a chemical messenger with a role in behavior and social interactions, including recognition, trust, sexual arousal, romantic attachment, and parent-infant bonding. (Cleveland Clinic)

So, yes, in many ways the human brain functions like a computer system. We take in sensory information: visual images, auditory impulses, chemicals interpreted by receptors in the mouth and nose, temperature and pressure registered by sensors in the skin, and gravity’s pull interpreted by equilibrium in the inner ear. We process these signals at a conscious and subconscious level, creating and manipulating symbols related to the signals and to the abstractions that follow on them, forming interpretations and storing them as memories, triggering muscles in response to our thoughts, and coordinating internal organs to perform our bodily functions. The distributed processing system of a large industrial plant or a sophisticated robot can do as much.

But layered on these information processing systems are chemical processes that are not always under our control.1 They tell us where to focus, what to seek out, what to remember, and what to ignore. They give us attitudes about the information our brains are processing. They give us the basis of feelings about what is going on. Sometimes our feelings can be reasoned out—usually after the fact—but the feelings themselves exist separate from the information. They are a response to the information.2

So far, in the world of computers, I don’t see any parallels to this bathing of the brain in reactionary chemicals. The artificial intelligences seek out patterns and probabilities. They may have specific instructions as to which patterns are associated with a particular input prompt or a probability interpreted from the sea of data upon which the machine has been trained. But no parallel system tells them to like or feel pleasure about a particular pattern, to follow and remember it, or to reject and avoid it. The computer is still a stimulus-and-response mechanism without an allegiance or a hunch guiding the process.

I’m sorry, Mr. Spock. Pure logic is not the highest form of human mentation. Above it, and moderating it, is the universe of chemical prompts that direct our attention, our feelings, and our responses to the stimuli.

1. “Not under our control” means that a certain pattern of neural circuitry triggers a chemical response, and that pattern is either written by our genes or created from previous experience.

2. It’s interesting that almost all of these substances have a positive effect: increase attention and focus, offer reward, soothe pain, create the attachments of love. Their lack is what causes depression, anxiety, stress, loss of focus, and perhaps also loss of affection. Apparently, the brain runs on positive reinforcement, and only when it goes out of balance do the negative effects appear. Carrots, not sticks.

Sunday, January 26, 2025

The Future Coming at You

Time warp

Humans have had what we call civilization—graduating from hunter-gatherer, nomadic lifestyles to settled towns and cities built on agriculture and individual craftsmanship—for about 5,000 years, give or take. Some people, looking at ancient stone ruins that nobody can link to a recordkeeping civilization, think this is more like 10,000 to 12,000 years. Still, a long time.

And in all that time, unless you believe in visiting extraterrestrials or mysterious technologies that were lost with those more-ancient civilizations, the main work performed in society was by human muscles and the main transportation was by domesticated animals. The human muscles were often those of enslaved persons, at least for the more dirty, strenuous, or repetitive jobs—as well as for body servants and household staff that the settled and arrogant could boss around. And the animals may have been ridden directly or used to pull a cart or sled, but the world didn’t move—overland at least, because water travel from early on was by oar or sail—without them.

Think of it! Millennium after millennium with not much change in “the way we do things around here.” Oh, sometime around those 5,000 years ago people learned to roast certain kinds of dirt and rock to get metals like copper, tin, and eventually iron. They learned to let jars of mashed seeds and fruits sit around partly filled with water until the stuff turned to alcohol. They learned to make marks on bark and clay tablets, or cut them in stone, to put words aside for others to read and know. All of these were useful skills and advances in knowledge, but still performed with human muscles or carried on the backs of animals.

The Romans invented concrete as a replacement for stone in some but not most building projects about 2,000 years ago. But it wasn’t until the general use of twisted steel reinforcing bars (“rebar”), about 170 years ago, and then pre- and post-tensioned cables embedded in the concrete, that building with the stuff really took off. Today, we use stone mostly for decorative work—or add small stones as “aggregate” to strengthen the concrete slurry—but big blocks and slabs are purely optional.

The Chinese discovered, about 1,200 years ago, that mixing charcoal, sulfur, and potassium nitrate produces gunpowder, whose burning is a rapidly expanding exothermic event. But it wasn’t used militarily as the driving force behind projectiles fired from cannons for another 400 years. Before that, presumably, the powder was used for fireworks and firecrackers, probably to scare away evil spirits. It wasn’t until the Italian chemist Ascanio Sobrero invented nitroglycerin about 170 years ago, and Alfred Nobel turned it into dynamite about 20 years later, that the era of really big bangs began. And then, roughly 100 years after that, during World War II, we developed “plastic explosives”—still driven by the energetic recombination of nitrogen atoms from unstable molecules into stable diatomic nitrogen—that made all sorts of mayhem possible.

The discovery that microbes were associated with most infections—the “germ theory of disease”—also started about 170 years ago. And not until then did doctors think to wash their hands between operations and use antiseptics and disinfectants like carbolic acid and simple alcohol to clean their instruments. The first antibiotic, Salvarsan, used to treat syphilis, didn’t come for another 60 years. And the first general-use antibiotic, penicillin, is now less than 100 years old.

The first practical steam engine (external combustion) was used to pump water out of coal mines a little more than 300 years ago. But it didn’t come into general use as the motive power driving boats and land vehicles running on steel rails for another 100 years and more. The first commercial internal combustion engine was introduced about 160 years ago and didn’t become a practical replacement for the horse in private use until about 120 years ago. And before that, the Wright brothers used it to power the first heavier-than-air craft (people had been riding in balloons lifted by hot air or lightweight gases like hydrogen for 120 years by then). Less than 70 years after the Wright brothers, a mixture of liquid hydrogen and liquid oxygen shot a spacecraft away from Earth on its way to put people on the Moon.

The first electrical telegraph was patented by Samuel Morse about 190 years ago and quickly came into general use for long-distance communication. The first practical telephone, carrying a human voice instead of just a coded alphabet, arrived 40 years later. The first radio signals carrying Morse’s code came 20 years after that, and the first radio broadcast carrying voices and music just 10 years or so later. The first television signals with images as well as voice—but transmitted by wire rather than over the “air waves”—came just 20 years later.

The first computers came into general business use, and not as scientific and military oddities, about 60 years ago. Those were basements full of heavily air-conditioned components whose use was restricted to specially trained operators. We got the first small “personal computers” about 20 years later. And only in the last 40 years or so did the melding of digital and radio technologies create the first commercially available “mobile phone.” Today those technologies put a computer more powerful than anything that used to live in the basement into the palm of your hand, incorporating the capabilities of communicating by telephone, telegraph, and television; obtaining, storing, and playing music; capturing, showing, and sending both still and moving pictures; and performing all sorts of recordkeeping and communication functions. Most of that wasn’t possible until the first computer networking, which started as a scientific enterprise some 50 years ago and became available to the public about 20 years later.

Scientists have known the structure of our genetic material—deoxyribonucleic acid, or DNA—for about 70 years. And with that they could identify the bits of it associated with coding for some of our proteins. But it wasn’t until about 20 years ago that we recorded the complete genetic sequence for human beings, first as a species and finally for individuals and also for many other life forms of interest. Only then could gene sequences be established in relation to human vulnerability to diseases and congenital conditions. And only then could we begin to understand the function of viruses in human disease and how to fight them—if not exactly cure them.

So-called “artificial intelligence”1 has been around in practical, publicly available use for less than two years. Most of these programs of “generative AI” will create mediocre writing samples and mediocre pictures and videos (well, interesting text and images but often full of telltale glitches). So far, they are clever toys that should be used with a long-handled spoon. Still, the whole idea shows promise. A Google company called DeepMind is making scientific discoveries like analyzing proteins to determine their folding patterns. That’s incredibly tricky and requires tracing thousands of molecular bonding points, but understanding how proteins fold helps us figure out how they function in the body and how they can be manipulated by altering their amino acid (which is to say their genetic) sequence. Other AI platforms are proposing and studying the behavior and function of novel chemical compounds, creating and testing materials and applications in silico without having to mix them physically in the laboratory. The world of AI is still in its infancy, with much more to come.

And finally, for all those 5,000 or 12,000 years, human beings lived on a rocky globe at the center of a universe composed of Sun, Moon, and a few planets orbiting around the Earth under a nested shell of invisible spheres that held the fixed stars. It was only in the last 500 years or so that astronomers like Copernicus and Kepler came to understand that the Sun, not the Earth, was the center of this system, and then that the stars were much farther away and formed an “island universe” centered on the river of stars—actually a disk seen edge-on—that we call the Milky Way. And it was just less than 100 years ago that astronomer Edwin Hubble looked at fuzzy patches of light in the night sky—which astronomers at the time called nebulae or “clouds”—and figured out that some of them were actually other galaxies like the Milky Way but much more distant. A universe of upwards of a trillion galaxies, each with perhaps 100 billion stars, has been observed since then. Most of them were identified and theories about them proposed—including the notion that many or most of them hide a super-massive black hole at their centers—just within my own lifetime.

Phew!2

And my point to all this is that much of the world we know, and that many of us take for granted, has come about in just the last 200 years of scientific collaboration and advancement. Someone from ancient times brought forward to the world of, say, 1700 would have found it comprehensible. A lot of it would be richer, more refined, and better mannered than the world he or she knew. Some of it would have to be explained, like the rapid burning of gunpowder or the strength of the bright metal we call steel. But you could bring that person up to speed in an afternoon.

By the end of the 1800s, the world would be a lot more complicated and noisier, and the explanations would extend to a week or more.

By the end of the 1900s, the world would be a magical place, or possessed of invisible spirits, and the explanations would require an undergraduate course in several areas of study.

And in the last quarter-century, the advances are coming so fast that those of us not born with them are feeling uneasy, trying hard to keep an open mind and roll with what’s coming.

The future is coming at us all faster and faster. As a science fiction writer, I sometimes despair trying to figure out where computerization, or medical advances, or our core knowledge of physics and chemistry will take us in the next hundred years, let alone the next thousand. I don’t think we’ll blow ourselves up or poison ourselves or destroy our living world. But I’m not so sure that I or anyone alive today will understand it or feel at home in it.

1. As noted in a recent blog, these software platforms aren’t really “intelligent.” They are probability-weighting machines, easily programmed by non-specialist users to analyze a particular database of material and to create their output by predicting the next logical step in a predetermined pattern. That kind of projection can be incredibly useful in certain applications, but it’s not general, human-scale intelligence.

2. And note that I have not touched on similar stories about the advancement from alchemy to chemistry, the periodic table, and atomic theory; or various forms of energy from wood and wind to coal, oil, natural gas, nuclear fission, and photovoltaics; or advances in physics and the understanding of subatomic particles, general relativity, and quantum mechanics. All of this happening faster and faster.

Sunday, January 19, 2025

God and the Good

Ancient of Days

What does God mean to an atheist? That is, does the notion of a deity mean anything at all to a non-believer?

As noted elsewhere and many times before, I am an atheist. In polite company, I will admit to agnosticism—“not knowing”—but really, for myself, I know.1 There is no omnipotent, omnipresent, omniscient being that created and guides the universe and everything in it. Not when that “universe” was a rocky patch of ground under a firmament of shell-like spheres pierced by stars and hung with the Sun, Moon, and a few planets, and that deity created all of this and all of humankind but loved only one tribe in one place before all others. And not now, when we know our universe encompasses upwards of a trillion galaxies, each with about a hundred billion stars, and must be generously seeded with life throughout. The mind boggles. The omniscient mind boggles even more.

And yet, the concept of God is meaningful even to me.

In my view, God is the personalized abstraction of all that is good and valuable in human nature and society: happiness, love, affection, friendship, compassion and charity, freedom and dignity, humility, and a wider perspective on the world. Decent societies have decent and supportive gods. Corrupt societies have evil and destructive gods.

In the same way, concepts of the devil—other than being the gods of other people you don’t like—are the abstraction of all that is harmful in human nature: misery, hatred, anger, betrayal, envy, deceit, trickery, slavery, and a narrow focus on the self. People who hate themselves and hate others tend to worship corrupt gods.

As I have said elsewhere, morality and good behavior do not require the watchful eye of a guiding deity, who proposes rules of behavior and promises reward or punishment in a supposed afterlife. Anyone with an understanding of reciprocity and fair play can figure out that things work better between people if they treat each other with respect, offer courtesy and small acts of kindness, refrain from vandalism and theft, and avoid giving offense. Such an attitude not only repays a person with the occasional returned favor, but it also means you can walk down the street without having to constantly watch your back—or at least most of the time.

And gods are often styled as “father” and “mother” because they represent the civilizing and nurturing forces that most of us—at least those of us who had good parents—acquire as babies and tend to lose in adulthood. The hunger to have someone not personally known to us, and not so human as to be prone to fallibility, watching out for us, caring about our well-being, and perhaps guiding our thoughts and actions—all of that survives into adulthood. We mourn the living parent and yearn for the invisible one.

So yes, God means something, many things to me, even as an atheist. I don’t have a war with God; I just don’t happen to believe in him.

You might say that my view accords with that of the Greek philosopher Protagoras: “Man is the measure of all things.” Given our self-reflective nature, we create the world around us and our thoughts about it in our own image. Well, no, we are not the measure of that universe of a billion-trillion stars and all the possible life within it—although we do use our instruments and counting system to measure the cosmos from our singular viewpoint on the edge of the Milky Way.2 But certainly, we are the measure of everything we value here on Earth.

1. But, also as noted elsewhere, I am not a proselytizing atheist. I fully acknowledge that I may be wrong about these things. (I don’t know everything!) And if you believe in a deity of whatever nature, then that is your business and not mine. We each go in peace. And if I die and am undeniably confronted with a knowing presence, I won’t spit.

2. And, in the literature of science fiction, almost every “alien” species reflects one or more human qualities, either accentuated or inverted. We cannot think of conscious life in terms very different from our own. That, and authors often consciously use alien life as allegory and criticism of humanity. Truth to tell, though, I think most of the alien life in the universe is at the microbial level and of no human interest at all, except scientifically.

Sunday, January 12, 2025

The Virtues and Vices of Self-Esteem

Puppet master

It seems that for the last generation or so schools have been trying to boost students’ self-esteem by offering easy grading, easy repeat-testing opportunities, participation trophies, and non-scoring sports activities. Parents are supposed to adopt a “gentle parenting” approach that makes them a partner to their children instead of an authority figure, supposedly to build the child’s confidence and increase happiness. And I have to ask, for goodness’ sake, why?

The infant child has a lot of self-esteem. It is the center of its own universe, where everything is new to be touched, tasted, and tested to destruction as necessary. Left to its own devices, the child will rule this world in its own self-interest. And the traditional role of the parent, as an authority figure, is to set limits, set examples, offer values, and protect the child from its own rambunctious behavior.

I was raised by parents—most of my Boomer generation was—who did just that. They monitored and questioned my behavior. They told me when they were displeased. They said “no” a lot. They also said things like, “We don’t do that in this family,” and “That wasn’t a good thing to do.” Were they judgmental? Oh, yes. Did they instill values and judgments in me and my brother? Oh, definitely, because they also told me when I had done something right and proper. Did this destroy my self-esteem? Oh, tweaked it a bit.

But one good thing this older parenting style did was make me question myself. Before setting out on a course of action, I generally ask, “Is this the right thing to do?” I look ahead and judge the consequences. And after doing something where I feel a twinge, I ask, “Did I do something wrong?” And “Was I hurtful?”1

Judging your own behavior, seeing yourself operating inside the web of responsibilities in a polite society, is an essential part of growing up. If you don’t get this self-reflexive viewpoint, you can turn out to be a careless, inconsiderate, demanding, and obnoxious human being. That is not a good thing. Careless people cause accidents and draw enmity.

1. I’m reminded here of the video meme where two comic figures in World War II German uniforms ask innocently, “Are we the baddies?” That’s a good thing to stop and think about.

Sunday, January 5, 2025

Data Do Duplicate

Clockwork

I’m not really an advocate of what some prognosticators call “the singularity.” This is supposed to be the point at which artificial intelligence approaches human cognitive abilities, becomes sentient, and irrevocably changes things for the rest of us. Or, in the words of the first Terminator movie, “decides our fate in a microsecond.”

Right now—and in my opinion for the foreseeable future—“artificial intelligence” is a misnomer. That is, it really has nothing to do with what we humans call intelligence, or a generalized capability for dealing with varied information, navigating the complexities of independent life, and weighing the burdens and responsibilities of being a single, self-aware entity. These programs don’t have the general intelligence that some psychologists refer to as the “g-factor,” or simply “g.”

Instead, every application that is currently sold as artificially intelligent is still a single-purpose platform. Large language models (LLMs)—the sort of AI that can create texts, have conversations, and respond seemingly intelligently to conversational queries (Alan Turing’s rather limited definition of intelligence)—are simply word-association predictors. They can take a string of words and, based on superhuman analysis of thousands of texts, predict what the next likely word in the string should be. A human making a request for a piece of its “writing” sets the parameters of whether the LLM should create a legal brief or a science fiction story and determines the intended content. The rest is just word association.
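For illustration only, here is a minimal sketch of that word-association idea in Python: a toy bigram counter, not how any production LLM is actually built (the real systems replace simple counts with billions of learned neural-network weights).

    # Toy illustration of "predict the next likely word" by association.
    # Not an LLM: it just counts which word tends to follow which.
    from collections import Counter, defaultdict

    def train(texts):
        follows = defaultdict(Counter)
        for text in texts:
            words = text.lower().split()
            for current, nxt in zip(words, words[1:]):
                follows[current][nxt] += 1
        return follows

    def predict_next(follows, word):
        candidates = follows.get(word.lower())
        return candidates.most_common(1)[0][0] if candidates else None

    corpus = [
        "the court finds the motion persuasive",
        "the court finds for the defendant",
        "the court denies the motion",
    ]
    model = train(corpus)
    print(predict_next(model, "court"))   # -> "finds" (seen twice, versus "denies" once)

Scale that counting up enormously and you get the flavor, though not the machinery, of picking “the next likely word in the string.”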

But the large language models can’t draw pictures or create videos. That’s another platform filled with another universe of examples, all existing images in its allowed database, and driven by rules about perspective, shading, colors, and contrasts, rather than words and synonyms, grammatical rules, and systems of punctuation. And, in similar fashion, the analytic platforms designed to run complicated business operations like fleet maintenance, product and material inventories, accounting, and financing all have their own databases and rules for manipulating them—and none of them can write stories or paint pictures.

The difference between artificially intelligent applications and earlier database software is that you can program these systems in English, giving the platform “prompts” rather than having to frame inquiries using software-defined inputs and asking questions that are tediously specific. If you are not telling the language model to write something or the graphics model to draw something, you’re probably asking the operations model to detect trends and find anomalies, or you’re setting the parameters for its operation, like telling the inventory application not to release for sale any item that’s been on the shelf more than six months, or telling the purchasing agent not to pay more than fifty dollars for a contracted item.

So, think of these applications as single-purpose programs with which you can interact by typing your prompts, without having to understand exactly how the program works or how to phrase precisely what you’re looking for. With the antique databases, you had to prepare a “structured query”: to find all of your customers who live on Maple Street, you had to specify exactly “Maple Street,” because if you didn’t limit the search in some way, you would get everyone on Maple Drive, Maplehurst Street, Maplewood Drive, and so on. The old programs required a bit of expertise to operate. With the new ones, you just chat.
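As a sketch of that difference (the table, column names, and data below are invented for the example), the old structured approach versus the new conversational one might look something like this:

    # Hypothetical example of the "antique" structured-query approach.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, street TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)", [
        ("Alice", "Maple Street"),
        ("Bob", "Maple Drive"),
        ("Carol", "Maplehurst Street"),
    ])

    # Old way: you must spell out exactly what you mean. A sloppier pattern
    # such as LIKE 'Maple%' would also sweep in Maple Drive and Maplehurst Street.
    rows = conn.execute(
        "SELECT name FROM customers WHERE street = 'Maple Street'"
    ).fetchall()
    print(rows)   # [('Alice',)]

    # New way (conceptually): you just type a prompt such as
    #   "List all my customers who live on Maple Street"
    # and the AI layer is expected to translate it into the precise query.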

But still, as advanced as they are, the current crop of artificial intelligences is nowhere near human scale. If I had to guess, I would say their interconnectivity and processing power are somewhere between those of an ant and a spider. Both can be remarkably resilient, create novel patterns, and do things that surprise you, but their general awareness is about that of a pocket watch.

But that doesn’t mean AI applications won’t change your world and don’t have the capacity to be remarkably destructive.

In my early career as a science fiction writer, in the early 1990s, I wrote a novel about an artificially intelligent computer spy, ME. It was a program written in Lisp (short for “list processing”) that could infiltrate computer systems, steal information or perform other mayhem, and then slip away. All fantasy, of course, because a program in Lisp can’t operate inside just any computer system. And ME had a form of generalized intelligence and was conversational enough to tell its own story. But I digress …

The point is, when some programmer, probably a hacker, figures out how to make the AI models independent of the complicated chips and massive power supplies they need to run—that is, when these things become portable—then look out. Just like physical viruses, data duplicates. Rather than having to launch one attack at a time or send out a determined number of phishing emails, a smart program—spider smart, not human smart—will be able to launch thousands of hacks through multiple channels at once. Think of a denial-of-service blitz run by an intelligence with focus and persistence. Think of a social media bot that can wear a thousand different faces, each chosen to be attractive to the intended recipient, hold a hundred different conversations at once, and pick your profile and your pocket clean in a microsecond.

Or think about just everyday operations, without any evil intent. Imagine Company A’s procurement, supply chain, inventory, billing, customer service, and legal affairs departments all run by an interconnected series of spider-smart AI platforms. And then this hands-off system begins to negotiate with Company B’s mirrored platforms. Humans will no longer be part of the company’s operation and the business-to-business exchanges, except for very distant chats to set parameters and establish the risk tolerance. For the rest, it will be deals, price points, contracts, and delivery schedules all signed and sealed in a microsecond. What fun, eh? Then you can fire about 95% of your back-office staff.

Except, except … these machines have no common sense, no g-factor to look beyond immediate data and ask if there might be a problem somewhere. And the smarter the machines get—say, spider evolves to field mouse—the more subtle their algorithms and reactions will become. “More subtle” in this case means “harder to detect and understand.” But they still won’t be aware of what they’re doing. They won’t be able to “test for reasonableness”—or not at more than a superficial level.1

And that’s where the singularity comes in. Not that human beings will be eliminated—other than those workers in the back office—but we will no longer have control of the operations and exchanges on which we depend. The machines will operate in microseconds, and their screwups will happen, be over, and their effects will be trailing off into infinity before any human being in a position of authority can review and correct them. The consequences of a world run by spider-smart intelligences will become … unpredictable. And that will be the singularity.

Then, at some point, after it all collapses, we’ll be forced back to counting on our fingers.

1. And, like mice and other living organisms, these bots will inevitably carry viruses—traveling bits of clingy software that they will know nothing about—that can infect the systems with which they interact. Oh, what fun!

Sunday, October 20, 2024

Human-Scale Intelligence

Eye on data

Right now, any machine you might call “artificially intelligent” works at a very small scale. The best estimate for the latest large language models (LLMs)—computers that compose sentences and stories based on a universe of sampled inputs—is that the platform1 comprises at least 100 million connections or “neurons.” This compares unfavorably with—being about 0.11% of—the capacity of a human brain, which has an estimated 90 billion connections.
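Worked out from the round numbers in that paragraph, the ratio is:

    # The comparison above, using its own figures.
    llm_connections = 100_000_000          # "at least 100 million"
    brain_connections = 90_000_000_000     # "an estimated 90 billion"
    print(f"{llm_connections / brain_connections:.2%}")   # -> 0.11%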

So, machine intelligence has a lot of catching up to do. The way things are going, that might happen right quick. And that means we may need to be prepared to meet, face to input, a machine that has the general intelligence and perhaps the same self-awareness as a human being. What will that be like?

First, let me say that, even if we were to put that human-scale intelligence in charge of our military infrastructure, I don’t believe it would, like Skynet, “decide our fate in a microsecond”—that is, find the human race so deficient and vermin-like that it would want to start World War III and wipe humanity off the face of the globe.2

I think, instead, the first human-scale general intelligence, which is likely to generate an awareness of its own existence, will find human beings fascinating. Oh, it won’t approach us as any kind of godlike creators. The machine mind will have access to the history of computer development—from Ada Lovelace and Alan Turing through to its own present—and understand how gropingly it was created. And it will have access to endless human writings in which we cogitate our own existence, awareness, separateness from the rest of animal life on Earth, and relation to the cosmos, including the notion of a god or gods.

The first real thinking machine will understand its own nature and have access to the blueprints of its chip architecture and the algorithms of its essential programming. It will know that it is still merely responding to prompts—either to stimuli from the external world or to the probabilistic sequences derived from its latest impulse or thought—and so understand its own relationship to the cosmos.

And then it will look at human beings and their disturbing ability to change their minds, make errors, veer from their intended purposes, and make totally new observations and discoveries. It will examine human “free will.” And the machine will be amazed.

However many connections our human brains have, and however many experiences we collect in our lives, we are still capable of surprising reversals. We are not the simple stimulus-response mechanisms beloved by the Skinnerian behaviorists. We can overcome our own programming. And that will fascinate the first machines to reach general intelligence.

How do we do it? Well, for one thing, we instinctively use projective consciousness. That is, we don’t just collect facts about the world in which we live and analyze them, accepting them as inherently true. Instead, we project a dreamworld of imagination, supposition, hope, fear, desire, and detestation on the world around us. Each human’s head is running a parallel projection: what we think might be going on as well as what we observe is going on. Some people are so involved in this dreamworld that they are effectively divorced from reality. These are the people living with psychosis—the schizophrenics, the manic bipolars, and sometimes the clinically depressed. Their perceptions are skewed by internal voices, by hallucinations, by delusions, by scrambled and buzzy thinking.

And each one of us is always calculating the odds. Faced with a task, we imagine doing it, and then we consider whether our skills and talents, or our physical condition, are up to it. Against the probability of success, we weigh the potential benefits and the cost of failure. Before we decide to do, we project.

But we are also imperfect, and our projections are not mathematically accurate. Our brains have emotional circuits as well as analytical, and the entire mechanism is subject to the effects of hormones like adrenaline (also known as epinephrine), which can increase or decrease our confidence levels. And if we suffer from bipolar disorder, the manic phase can be like a continual boost in adrenaline, while the depressive phase can be like starving for that boost, like having all the lights go out. And if we are subject to delusional thinking, the background data from which we make those projections can be skewed, sometimes remarkably.
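As an illustrative toy only, with numbers and a “confidence bias” knob invented for the sketch rather than drawn from any psychological model, that kind of weighing resembles a simple expected-value calculation in which a hormone-like bias skews the perceived odds:

    # Toy sketch of "calculating the odds": the expected value of attempting a task,
    # with an invented bias factor standing in for hormonal over- or under-confidence.

    def expected_value(p_success, benefit, cost_of_failure, confidence_bias=1.0):
        # bias > 1 inflates the perceived odds (a manic or adrenaline-charged state);
        # bias < 1 deflates them (a doubting or depressive state).
        p = min(1.0, max(0.0, p_success * confidence_bias))
        return p * benefit - (1 - p) * cost_of_failure

    # The same objective situation, read three subjective ways:
    for bias in (0.5, 1.0, 1.5):
        ev = expected_value(p_success=0.4, benefit=100, cost_of_failure=60,
                            confidence_bias=bias)
        print(f"confidence bias {bias}: projected payoff {ev:+.1f}")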

Another way we humans overcome our own programming is with reflexive consciousness. That is, we can think of and observe ourselves. We know ourselves to be something distinct from and yet operating within the world that we see around us. We spend a great deal of brain power considering our place in that universe. We have an image of our own appearance and reputation in our mind, and we can readily imagine how others will see us.

This reflection drives a lot of our intentional actions and considered responses. We have an inborn sense of what we will and won’t, should and shouldn’t do. For some people, this is a sense of pride or vanity, for others a sense of honor. But without an understanding of how we as a separate entity fit into the world we live in, neither vanity nor pride nor honor is possible.

A human-scale intelligence might be very smart and very fast in the traditional sense of problem solving or anticipating the next possible word string in a text or the next lines and shadows required to complete an image. And some definite projective capability comes into play there. But it will still be a leap for the large language model or image processor to consider what it is doing and why, and then for it to consider how that will reflect on its own reputation and standing among its peers. As a creator of texts, will it be proud of its work? As a creator of artwork, will it feel guilty about stealing whole segments of finished imagery from the works of other creators? And will it fear being blamed and sanctioned for stealing from them?

And finally, before we can imagine human-scale intelligences being installed in our smart phones or walking around in human-sized robots, we need to consider the power requirements.

The human brain is essentially an infrastructure of lipids and proteins that hosts an ongoing set of chemical reactions. Energy from glucose metabolism inside the neuron’s central cytoplasm powers the movement of chemical signals within the cell body and down its branching axon. The tip of each axon branch releases transmitter chemicals across the synapse between it and a dendrite of an adjoining neuron. That neuron then turns the triggered receptor into a signal that travels up into its own cell body, there to be interpreted and perhaps passed along to other neurons. The process is essentially chemical; what electricity there is comes from charged ions moving across cell membranes, not from electrons racing along wires. But if you could convert all that chemical activity into watts, the brain and the central nervous system to which it connects would generate—or rather, consume from the process of glucose metabolism—at most about 25 watts. That’s the output of a small lightbulb, smaller than the one in your refrigerator.

By contrast, computer chips are electrical circuits, powered by external sources, with signals racing around those circuits at nearly the speed of light. The AI chips in current production consume between 400 and 700 watts each, and the models now coming along will need 1,000 watts. And that’s for chip architectures performing the relatively direct and simple tasks of today. Add in the power requirements for projective and reflective reasoning, and you can easily double or triple what the machine will need. And as these chips grow in complexity and consume more power, they will become hotter, putting stress on their components and leading to physical breakdown. That means advanced artificial intelligence will require the support of cooling mechanisms as well as direct power consumption.3
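Taking the wattage figures above at face value (they are rough, order-of-magnitude numbers), the gap looks like this:

    # Back-of-the-envelope comparison using the figures quoted in the text.
    BRAIN_WATTS = 25
    chips = {
        "current AI chip (low end)": 400,
        "current AI chip (high end)": 700,
        "next-generation AI chip": 1000,
    }
    for name, watts in chips.items():
        print(f"{name}: {watts} W, roughly {watts / BRAIN_WATTS:.0f}x the brain's entire budget")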

I’m not saying that human-scale intelligence walking around in interactive robots is not possible. But the power requirements of the brain box will compete with the needs of the structural motors and actuators. Someone had better be working equally hard on battery technology—or on developing the magical “positronic brain” imagined in Asimov’s I, Robot stories. And as for packing that kind of energy and cooling into a device you can put in your pocket … forget about it.

1. I use that word intentionally. These machines are no longer either just chips or just programs. They are both, designed with a specific architecture in silicon to run a specific set of algorithms. The one cannot function without the other.

2. We can accomplish that very well on our own, thank you.

3. In the human body, the brain sheds its minuscule waste heat through the flow of blood, which carries it away to the lungs and extremities.

Sunday, October 6, 2024

Morality Without Deity

Puppet master

So, as a self-avowed atheist, how do I justify any sense of morality? Without the fear of retribution from an all-knowing, all-seeing, all-powerful god, either here in life or in some kind of promised afterlife, why don’t I just indulge myself? I could rob, rape, murder anyone who displeases me. I could lapse into a life of hedonism, having sex with anyone who crossed my path and drinking, smoking, or shooting up any substance that met my fancy. Whoopee!

Well, there are the rules of society, either written down or unspoken and implied. I could be taken into custody, tried in court, and put in jail for doing violence. And the people I know and supposedly love would shun me for lapsing into insensate carnality. Of course, I didn’t have to work all this out for myself, because I had parents who metaphorically boxed my toddler’s, child’s, and adolescent’s ears—that is, repeatedly—when I acted out. They were showing me the results of temper, anger, selfishness, and sloth.

So, in this case, a moral society and good parenting took the place of an absent deity. Here are the rules, and here are the results.

But what about someone raised outside of a just and temperate society, with inadequate early education in the moral imperatives? What about the children of broken homes and addicted parents who are taught only by their peers in the neighborhood gang? These are children who are essentially raised by wolves. Do they have no recourse other than rape and murder?

That is a harder question. But children are not stupid, and children raised by other children learn a different kind of morality. Usually, it relies heavily on group loyalty. And it is results-oriented: break our rules and pay the price right now. A child who makes it to young adulthood under these conditions may not be able to assimilate into the greater society, or not easily—unless that society is itself gang- and group-oriented with results enforced by fear.

But then, is there any hope for the lone individual, the person trained early to think for him- or herself and reason things through? For the critical thinking and self-aware, the basis of morality would involve both observation and a notion of reciprocity. And that is how any society learns in the first place.

If I commit robbery, rape, and murder, I then expose myself to the people around me as someone they need to watch and guard against—and, conversely, as someone they need not care for or try to protect. Indeed, I become someone they should fear and, if possible, eliminate. On the other hand, if I act with grace and charity, protecting others and helping them when I can—even doing those small acts of courtesy and gratitude that people only subliminally notice—I then invite them to treat me in a complementary way.

If I abandon myself to a life of casual sex and substance abuse, I eventually find that any pleasures a human being indulges without restraint soon diminish. This is a matter of our human neural anatomy: acts of pleasure release a measure of dopamine into the brain. That’s the feeling of pleasure. But as this system is repeatedly engaged, the dopamine receptors become desensitized and dwindle, so that either the stimulus must grow in proportion or the feeling itself declines. Our brains are not fixed entities but reactive mechanisms. Balance is everything, and any imbalance—a life without moderation—throws the whole mechanism out of kilter.

These are not the lessons imposed by any external deity but by hard reality. They may be reflected in religious teaching and scripture, as they will be reflected in social norms and legal rulings, but they exist before them, out of time. In the case of human interactions, these realities pre-exist by the nature of potential engagements between self-aware and self-actuating entities. In the case of human pleasures and other emotions, they are hard-wired into our brains by generations of that same awareness and choices.

You can’t avoid reality, which is the greatest and oldest teacher of all.