Sunday, January 26, 2025

The Future Coming at You

Time warp

Humans have had what we call civilization—graduating from hunter-gatherer, nomadic lifestyles to settled towns and cities built on agriculture and individual craftsmanship—for about 5,000 years, give or take. Some people, looking at ancient stone ruins that nobody can link to a recordkeeping civilization, think this is more like 10,000 to 12,000 years. Still, a long time.

And in all that time, unless you believe in visiting extraterrestrials or mysterious technologies that were lost with those more-ancient civilizations, the main work performed in society was by human muscles and the main transportation was by domesticated animals. The human muscles were often those of enslaved persons, at least for the more dirty, strenuous, or repetitive jobs—as well as for body servants and household staff that the settled and arrogant could boss around. And the animals may have been ridden directly or used to pull a cart or sled, but the world didn’t move—overland at least, because water travel from early on was by oar or sail—without them.

Think of it! Millennium after millennium with not much change in “the way we do things around here.” Oh, sometime around those 5,000 years ago people learned to roast certain kinds of dirt and rock to get metals like copper, tin, and eventually iron. They learned to let jars of mashed seeds and fruits sit around partly filled with water until the stuff turned to alcohol. They learned to make marks on bark and clay tablets, or cut them in stone, to put words aside for others to read and know. All of these were useful skills and advances in knowledge, but still performed with human muscles or carried on the backs of animals.

The Romans invented concrete as a replacement for stone in some but not most building projects about 2,000 years ago. But it wasn’t until the general use of twisted steel reinforcing bars (“rebar”), about 170 years ago, and then pre- and post-tensioned cables embedded in the concrete, that building with the stuff really took off. Today, we use stone mostly for decorative work—or add small stones as “aggregate” to strengthen the concrete slurry—but big blocks and slabs are purely optional.

The Chinese invented mixing charcoal, sulfur, and potassium nitrate in an exothermic event—a form of rapidly expanding burning—about 1,200 years ago. But it wasn’t used militarily as the driving force behind projectiles fired from cannons for another 400 years. Before that, presumably, the powder was used for fireworks and firecrackers, probably to scare away evil spirits. It wasn’t until the Italian chemist Ascanio Sobrero invented nitroglycerin about 170 years ago, and Alfred Nobel turned it into dynamite about 20 years later, that the era of really big bangs began. And then, about a century after Sobrero, during World War II, we developed “plastic explosives”—still powered by the energetic recombination of nitrogen atoms from unstable molecules into diatomic nitrogen gas—that made all sorts of mayhem possible.

The discovery that microbes were associated with most infections—the “germ theory of disease”—also started about 170 years ago. And not until then did doctors think to wash their hands between operations and use antiseptics and disinfectants like carbolic acid and simple alcohol to clean their instruments. The first antibiotic, Salvarsan, used to treat syphilis, didn’t come for another 60 years. And the first general-use antibiotic, penicillin, is now less than 100 years old.

The first practical steam engine (external combustion) was used to pump water out of coal mines a little more than 300 years ago. But it didn’t come into general use as the motive power driving boats and land vehicles running on steel rails for another 100 years and more. The first commercial internal combustion engine was introduced about 160 years ago and didn’t become a practical replacement for the horse in private use until about 120 years ago. And just before that, the Wright brothers used it to power the first heavier-than-air craft (people had been riding in balloons lifted by hot air or lightweight gases like hydrogen for 120 years by then). Less than 70 years after the Wright brothers, rockets burning kerosene and liquid hydrogen with liquid oxygen shot a spacecraft away from Earth on its way to put people on the Moon.

The first electrical telegraph was patented by Samuel Morse about 190 years ago and quickly came into general use for long-distance communication. The first practical telephone, carrying a human voice instead of just a coded alphabet, arrived 40 years later. The first radio signals carrying Morse’s code came 20 years after that, and the first radio broadcast carrying voices and music just 10 years or so later. The first television signals with images as well as voice—but transmitted by wire rather than over the “air waves”—came just 20 years later.

The first computers came into general business use, and not as scientific and military oddities, about 60 years ago. Those were basements full of heavily air-conditioned components whose use was restricted to specially trained operators. We got the first small “personal computers” about 20 years later. And only in the last 40 years or so did the melding of digital and radio technologies create the first commercially available “mobile phone.” Today those technologies put a computer more powerful than anything that used to live in the basement into the palm of your hand, incorporating the capabilities of communicating by telephone, telegraph, and television; obtaining, storing, and playing music; capturing, showing, and sending both still and moving pictures; and performing all sorts of recordkeeping and communication functions. Most of that wasn’t possible until the first computer networking, which started as a scientific enterprise some 50 years ago and became available to the public about 20 years later.

Scientists have known the structure of our genetic material—deoxyribonucleic acid, or DNA—for about 70 years. And with that they could identify the bits of it associated with coding for some of our proteins. But it wasn’t until about 20 years ago that we recorded the complete genetic sequence for human beings, first as a species and finally for individuals and also for many other life forms of interest. Only then could gene sequences be established in relation to human vulnerability to diseases and congenital conditions. And only then could we begin to understand the function of viruses in human disease and how to fight them—if not exactly cure them.

So-called “artificial intelligence”1 has been around in practical, publicly available use for less than two years. Most of these “generative AI” programs will create mediocre writing samples and mediocre pictures and videos (well, interesting text and images but often full of telltale glitches). So far, they are clever toys that should be used with a long-handled spoon. Still, the whole idea shows promise. A Google company called DeepMind is making scientific discoveries like analyzing proteins to determine their folding patterns. That’s incredibly tricky and requires tracing thousands of molecular bonding points, but understanding how protein sequences are folded helps us figure out how they function in the body and how they can be manipulated by altering their amino acid (which is to say their genetic) sequence. Other AI platforms are proposing and studying the behavior and function of novel chemical compounds, creating and testing materials and applications in silico, without having to mix them physically in the laboratory. The world of AI is still in its infancy, with much more to come.

And finally, for all those 5,000 or 12,000 years, human beings lived on a rocky globe at the center of a universe composed of Sun, Moon, and a few planets orbiting around the Earth under a nested shell of invisible spheres that held the fixed stars. It was only in the last 500 years or so that astronomers like Copernicus and Kepler came to understand that the Sun, not the Earth, was the center of this system, and then that the stars were much farther away and formed an “island universe” centered on the river of stars—actually a disk seen edge-on—that we call the Milky Way. And it was just less than 100 years ago that astronomer Edwin Hubble looked at fuzzy patches of light in the night sky—which astronomers at the time called nebulae or “clouds”—and figured out that some of them were actually other galaxies like the Milky Way but much more distant. A universe of upwards of a trillion galaxies, each with perhaps 100 billion stars, has been observed since then. Most of them were identified and theories about them proposed—including the notion that many or most of them hide a super-massive black hole at their centers—just within my own lifetime.

Phew!2

And my point to all this is that much of the world we know, and that many of us take for granted, has come about in just the last 200 years of scientific collaboration and advancement. Someone from ancient times brought forward to the world of, say, 1700 would have found it comprehensible. A lot of it would be richer, more refined, and better mannered than the world he or she knew. Some of it would have to be explained, like the rapid burning of gunpowder or the strength of the bright metal we call steel. But you could bring that person up to speed in an afternoon.

By the end of the 1800s, the world would be a lot more complicated and noisier, and the explanations would extend to a week or more.

By the end of the 1900s, the world would be a magical place, or possessed of invisible spirits, and the explanations would require an undergraduate course in several areas of study.

And in the last quarter-century, the advances are coming so fast that those of us not born with them are feeling uneasy, trying hard to keep an open mind and roll with what’s coming.

The future is coming at us all faster and faster. As a science fiction writer, I sometimes despair trying to figure out where computerization, or medical advances, or our core knowledge of physics and chemistry will take us in the next hundred years, let alone the next thousand. I don’t think we’ll blow ourselves up or poison ourselves or destroy our living world. But I’m not so sure that I or anyone alive today will understand it or feel at home in it.

1. As noted in a recent blog, these software platforms aren’t really “intelligent.” They are probability-weighting machines, easily programmed by non-specialist users to analyze a particular database of material and create their output by predicting the next logical step in a predetermined pattern. That kind of projection can be incredibly useful in certain applications, but it’s not general, human-scale intelligence.

2. And note that I have not touched on similar stories about the advancement from alchemy to chemistry, the periodic table, and atomic theory; or various forms of energy from wood and wind to coal, oil, natural gas, nuclear fission, and photovoltaics; or advances in physics and the understanding of subatomic particles, general relativity, and quantum mechanics. All of this happening faster and faster.

Sunday, January 19, 2025

God and the Good

Ancient of Days

What does God mean to an atheist? That is, does the notion of a deity mean anything at all to a non-believer?

As noted elsewhere and many times before, I am an atheist. In polite company, I will admit to agnosticism—“not knowing”—but really, for myself, I know.1 There is no omnipotent, omnipresent, omniscient being that created and guides the universe and everything in it. Not when that “universe” was a rocky patch of ground under a firmament of shell-like spheres pierced by stars and hung with the Sun, Moon, and a few planets, and that deity created all of this and all of humankind but loved only one tribe in one place before all others. And not now, when we know our universe encompasses upwards of a trillion galaxies, each with about a hundred billion stars, and must be generously seeded with life throughout. The mind boggles. The omniscient mind boggles even more.

And yet, the concept of God is meaningful even to me.

In my view, God is the personalized abstraction of all that is good and valuable in human nature and society: happiness, love, affection, friendship, compassion and charity, freedom and dignity, humility, and a wider perspective on the world. Decent societies have decent and supportive gods. Corrupt societies have evil and destructive gods.

In the same way, concepts of the devil—other than being the gods of other people you don’t like—are the abstraction of all that is harmful in human nature: misery, hatred, anger, betrayal, envy, deceit, trickery, slavery, and a narrow focus on the self. People who hate themselves and hate others tend to worship corrupt gods.

As I have said elsewhere, morality and good behavior do not require the watchful eye of a guiding deity, who proposes rules of behavior and promises reward or punishment in a supposed afterlife. Anyone with an understanding of reciprocity and fair play can figure out that things work better between people if they treat each other with respect, offer courtesy and small acts of kindness, refrain from vandalism and theft, and avoid giving offense. Such an attitude not only repays a person with the occasional returned favor, but it also means you can walk down the street without having to constantly watch your back—or at least most of the time.

And gods are often styled as “father” and “mother” because they represent the civilizing and nurturing forces that most of us—at least those of us who had good parents—acquire as babies and tend to lose in adulthood. The hunger to have someone not personally known to us, and not so human as to be prone to fallibility, watching out for us, caring about our well-being, and perhaps guiding our thoughts and actions—all of that survives into adulthood. We mourn the living parent and yearn for the invisible one.

So yes, God means something, many things to me, even as an atheist. I don’t have a war with God; I just don’t happen to believe in him.

You might say that my view accords with that of the Greek philosopher Protagoras: “Man is the measure of all things.” Given our self-reflective nature, we create the world around us and our thoughts about it in our own image. Well, no, we are not the measure of that universe of a billion-trillion stars and all the possible life within it—although we do use our instruments and counting system to measure the cosmos from our singular viewpoint on the edge of the Milky Way.2 But certainly, we are the measure of everything we value here on Earth.

1. But, also as noted elsewhere, I am not a proselytizing atheist. I fully acknowledge that I may be wrong about these things. (I don’t know everything!) And if you believe in a deity of whatever nature, then that is your business and not mine. We each go in peace. And if I die and am undeniably confronted with a knowing presence, I won’t spit.

2. And, in the literature of science fiction, almost every “alien” species reflects one or more human qualities, either accentuated or inverted. We cannot think of conscious life in terms very different from our own. That, and authors often consciously use alien life as allegory and criticism of humanity. Truth to tell, though, I think most of the alien life in the universe is at the microbial level and of no human interest at all, except scientifically.

Sunday, January 12, 2025

The Virtues and Vices of Self-Esteem

Puppet master

It seems that for the last generation or so schools have been trying to boost students’ self-esteem by offering easy grading, easy repeat-testing opportunities, participation trophies, and non-scoring sports activities. Parents are supposed to adopt a “gentle parenting” approach that makes them a partner to their children instead of an authority figure, supposedly to build the child’s confidence and increase happiness. And I have to ask, for goodness’ sake, why?

The infant child has a lot of self-esteem. It is the center of its own universe, where everything is new to be touched, tasted, and tested to destruction as necessary. Left to its own devices, the child will rule this world in its own self-interest. And the traditional role of the parent, as an authority figure, is to set limits, set examples, offer values, and protect the child from its own rambunctious behavior.

I was raised by parents—most of my Boomer generation was—who did just that. They monitored and questioned my behavior. They told me when they were displeased. They said “no” a lot. They also said things like, “We don’t do that in this family,” and “That wasn’t a good thing to do.” Were they judgmental? Oh, yes. Did they instill values and judgments in me and my brother? Oh, definitely, because they also told me when I had done something right and proper. Did this destroy my self-esteem? Oh, tweaked it a bit.

But one good thing this older parenting style did was make me question myself. Before setting out on a course of action, I generally ask, “Is this the right thing to do?” I look ahead and judge the consequences. And after doing something where I feel a twinge, I ask, “Did I do something wrong?” And “Was I hurtful?”1

Judging your own behavior, seeing yourself operating inside the web of responsibilities in a polite society, is an essential part of growing up. If you don’t get this self-reflexive viewpoint, you can turn out to be a careless, inconsiderate, demanding, and obnoxious human being. That is not a good thing. Careless people cause accidents and draw enmity.

1. I’m reminded here of the video meme where two comic figures in World War II German uniforms ask innocently, “Are we the baddies?” That’s a good question to stop and think about.

Sunday, January 5, 2025

Data Do Duplicate

Clockwork

I’m not really an advocate of what some prognosticators call “the singularity.” This is supposed to be the point at which artificial intelligence approaches human cognitive abilities, becomes sentient, and irrevocably changes things for the rest of us. Or, in the words of the first Terminator movie, “decides our fate in a microsecond.”

Right now—and in my opinion for the foreseeable future—“artificial intelligence” is a misnomer. That is, it really has nothing to do with what we humans call intelligence, or a generalized capability for dealing with varied information, navigating the complexities of independent life, and weighing the burdens and responsibilities of being a single, self-aware entity. These programs don’t have the general intelligence that some psychologists refer to as the “g-factor,” or simply “g.”

Instead, every application that is currently sold as artificially intelligent is still a single-purpose platform. Large language models (LLMs)—the sort of AI that can create texts, have conversations, and respond seemingly intelligently to conversational queries (Alan Turing’s rather limited definition of intelligence)—are simply word-association predictors. They can take a string of words and, based on superhuman analysis of thousands of texts, predict what the next likely word in the string should be. A human making a request for a piece of its “writing” sets the parameters of whether the LLM should create a legal brief or a science fiction story and determines the intended content. The rest is just word association.
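
To see the word-association idea in miniature, here is a toy sketch in Python (nothing like the neural networks behind commercial LLMs, and the sample sentence is invented for illustration): count which word follows which in a body of text, then “predict” the most frequent follower.

    # Toy next-word predictor: tally which word follows which in a sample
    # text, then guess the most frequent follower. Real LLMs use neural
    # networks trained on billions of documents; this only sketches the
    # word-association idea described above.
    from collections import Counter, defaultdict

    sample = ("the court finds that the contract is void "
              "the court finds that the defendant is liable").split()

    followers = defaultdict(Counter)
    for current_word, next_word in zip(sample, sample[1:]):
        followers[current_word][next_word] += 1

    def predict_next(word):
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("court"))  # -> "finds"
    print(predict_next("the"))    # -> "court", its most frequent follower

Scale that table up from one sentence to a library of legal briefs or science fiction stories, and the same trick starts to produce passable prose on demand.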

But the large language models can’t draw pictures or create videos. That’s another platform filled with another universe of examples, all existing images in its allowed database, and driven by rules about perspective, shading, colors, and contrasts, rather than words and synonyms, grammatical rules, and systems of punctuation. And, in similar fashion, the analytic platforms designed to run complicated business operations like fleet maintenance, product and material inventories, accounting, and financing all have their own databases and rules for manipulating them—and none of them can write stories or paint pictures.

The difference between artificially intelligent applications and earlier database software is that you can program these systems in English, giving the platform “prompts” rather than having to frame inquiries using software-defined inputs and asking questions that are tediously specific. If you are not telling the language model to write something or the graphics model to draw something, you’re probably asking the operations model to detect trends and find anomalies, or you’re setting the parameters for its operation, like telling the inventory application not to release for sale any item that’s been on the shelf more than six months, or telling the purchasing agent not to pay more than fifty dollars for a contracted item.

So, think of these applications as single-purpose programs with which you can interact by typing your prompts, without having to know exactly how the program works or how to phrase what you’re looking for. With the antique databases, you had to prepare a “structured query”: to find all of your customers who live on Maple Street, you had to specify exactly “Maple Street,” because if you didn’t limit the search in some way, you would get everyone on Maple Drive, Maplehurst Street, Maplewood Drive, and so on. The old programs required a bit of expertise to operate. With the new ones, you just chat.
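
To make the contrast concrete, here is a hypothetical example using Python’s built-in sqlite3 module (the customer names and streets are invented for illustration): the carefully limited query the old systems demanded, next to the over-broad one that sweeps in every “Maple” in town.

    # Old-style structured queries against a small customer table.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customers (name TEXT, street TEXT)")
    db.executemany("INSERT INTO customers VALUES (?, ?)", [
        ("Alice", "Maple Street"), ("Bob", "Maple Drive"),
        ("Carol", "Maplehurst Street"), ("Dave", "Maplewood Drive"),
    ])

    # Too loose: matches every street that merely starts with "Maple"
    too_broad = db.execute(
        "SELECT name FROM customers WHERE street LIKE 'Maple%'").fetchall()

    # Properly limited: exactly the people on Maple Street
    exact = db.execute(
        "SELECT name FROM customers WHERE street = 'Maple Street'").fetchall()

    print(too_broad)  # [('Alice',), ('Bob',), ('Carol',), ('Dave',)]
    print(exact)      # [('Alice',)]

With a prompt-driven system, you would simply type, “List all my customers who live on Maple Street,” and let the platform worry about the matching.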

But still, as advanced as they are, the current crop of artificial intelligences is nowhere near human scale. If I had to guess, I would say their interconnectivity and processing power are somewhere between those of an ant and a spider. Both can be remarkably resilient, create novel patterns, and do things that surprise you, but their general awareness is about that of a pocket watch.

But that doesn’t mean AI applications won’t change your world and don’t have the capacity to be remarkably destructive.

In my early career as a science fiction writer, in the early 1990s, I wrote a novel about an artificially intelligent computer spy, ME. It was a program written in Lisp (short for “list processing”) that could infiltrate computer systems, steal information or perform other mayhem, and then slip away. All fantasy, of course, because a program in Lisp can’t operate inside just any computer system. And ME had a form of generalized intelligence and was conversational enough to tell its own story. But I digress …

The point is, when some programmer, probably a hacker, figures out how to make the AI models independent of the complicated chips and massive power supplies they need to run—that is, when these things become portable—then look out. Just like physical viruses, data duplicates. Rather than having to launch one attack at a time or send out a determined number of phishing emails, a smart program—spider smart, not human smart—will be able to launch thousands of hacks through multiple channels at once. Think of a denial-of-service blitz run by an intelligence with focus and persistence. Think of a social media bot that can wear a thousand different faces, each chosen to be attractive to the intended recipient, hold a hundred different conversations at once, and pick your profile and your pocket clean in a microsecond.

Or think about just everyday operations, without any evil intent. Imagine Company A’s procurement, supply chain, inventory, billing, customer service, and legal affairs departments all run by an interconnected series of spider-smart AI platforms. And then this hands-off system begins to negotiate with Company B’s mirrored platforms. Humans will no longer be part of the company’s operation and the business-to-business exchanges, except for very distant chats to set parameters and establish the risk tolerance. For the rest, it will be deals, price points, contracts, and delivery schedules all signed and sealed in a microsecond. What fun, eh? Then you can fire about 95% of your back-office staff.
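
As a thought experiment only (this hypothetical sketch is nowhere near a real procurement platform, and every name and number in it is invented), the hands-off exchange boils down to two programs haggling inside limits their humans set in advance:

    # Hypothetical sketch: Company A's purchasing bot and Company B's sales
    # bot converge on a unit price inside human-set limits. Invented numbers.
    def negotiate(buyer_ceiling, seller_floor, bid, ask, rounds=20):
        for _ in range(rounds):
            if bid >= ask:                        # offers have crossed: deal
                return round((bid + ask) / 2, 2)
            ask = max(seller_floor, ask * 0.97)   # seller concedes 3% a round
            bid = min(buyer_ceiling, bid * 1.03)  # buyer concedes 3% a round
        return None                               # no deal within the limit

    # Humans set the parameters; the machines sign and seal in microseconds.
    price = negotiate(buyer_ceiling=50.00, seller_floor=42.00,
                      bid=35.00, ask=60.00)
    print(price)   # about 45.64, comfortably between floor and ceiling

Neither side’s program knows or cares what the part is for; it only knows that the numbers crossed inside its parameters.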

Except, except … these machines have no common sense, no g-factor to look beyond immediate data and ask if there might be a problem somewhere. And the smarter the machines get—say, spider evolves to field mouse—the more subtle their algorithms and reactions will become. “More subtle” in this case means “harder to detect and understand.” But they still won’t be aware of what they’re doing. They won’t be able to “test for reasonableness”—or not at more than a superficial level.1

And that’s where the singularity comes in. Not that human beings will be eliminated—other than those workers in the back office—but we will no longer have control of the operations and exchanges on which we depend. The machines will operate in microseconds, and their screwups will happen, be over, and have their effects trailing off into infinity before any human being in a position of authority can review and correct them. The consequences of a world run by spider-smart intelligences will become … unpredictable. And that will be the singularity.

Then, at some point, after it all collapses, we’ll be forced back to counting on our fingers.

1. And, like mice and other living organisms, these bots will inevitably carry viruses—traveling bits of clingy software that they will know nothing about—that can infect the systems with which they interact. Oh, what fun!