Wednesday, September 10, 2025

Urban Violence

[Image: Flying side kick]

By now we all—well, many of us—have seen stills from video surveillance on the Charlotte, North Carolina, light rail train where a hooded man rose up behind her and stabbed a young woman who was sitting immersed in her phone and earbuds. Many people are focusing on the facts—visibly indisputable—that the man was black and the woman was white. They are also focusing on the facts—related in the story—that the man was an often-arrested-and-released criminal with a diagnosis of schizophrenia and the woman was a refugee from the war in Ukraine. People are further focusing on the follow-on footage of the passengers sitting around her looking away and moving off while she cowers and, as told in the story, bleeds to death. And again, the visible fact is that they are all black and apparently unconcerned, while she is young, white, blonde, and scared.

While those are facts—or most of them are—in my view they are barely relevant. They may matter to this story in this place and time, but they do not lead to any universal conclusion that I would take away from it.

Many people think that if the man behind her had been white, there would probably not have been such an attack. They think that if the passengers around her had been white, then all of them—or at least some or one of them, as the viewers of the video imagine they themselves would have done—would have rushed to her aid, called for help, and tried to stop her bleeding.

But still, those distinctions are barely relevant.

The main takeaway—in my mind indisputable—is that we do not live in a safe world. We never have lived in such a place. Yes, there is more isolation and disconnection in a modern, urban society, where many of us have more friends who are names and faces pixelated on a screen than met in real life. Yes, there is now more class, race, and gender envy and distrust than before, when we all had to live and work together, interact and trust one another. But even a hundred years ago you could be stabbed on a train and watch the perpetrator walk off into the crowd. The difference today is that there are full-color video cameras recording from different angles in every car. And there are relatively easy ways to obtain that video footage and spread it around the world through social media.

The terrible truth is that Iryna Zarutska left a horrible world where bombs were falling every day, arrived in a country she thought was quiet and safe, and relaxed when she sat down on that train. She put in her earbuds, turned on her phone, and ignored her surroundings. Although, to tell the truth, even if she had held her head up and been looking left and right, she still might not have seen the attack coming from behind. But then, she might have sensed the rustling of his clothing.

I have always been a largely passive individual. I am big and have a relatively sharp and forbidding face, so people tend to leave me alone, maybe even fear me a little. But still, when I went to college, I took a course in karate. That was probably more inspired by James Bond movies than by any sense of fear or danger. But I stuck with it for four years and got my black belt. I never had any desire to attack or dominate anyone, but it made sense to me on a primal level that I should know how to defend myself. Passive as I was, I would not be a victim. And ever since I was twelve years old I have carried a pocketknife, more recently a model that could be opened one-handed and had a frame lock against closing on impact—the sort of knife people call “tactical.”

I use that word, “tactical,” a lot in my thinking. Not because I believe everyone is mean, antagonistic, and dangerous. But because some of them are, and I don’t want my day ruined when I come across them.

So, my analysis of the situation that young woman faced is that I wouldn’t take a seat where someone could get behind me, let alone one where someone was already sitting. If no such protected seat were available, I would stand, so that I had freedom of movement. I wouldn’t immerse myself visually and aurally in my phone or a laptop, not without keeping some part of my awareness spread wide and occasionally lifting my head and looking around. And this would apply not just in a train car filled with people of a different class and race, but anywhere in public. Yes, fifty years ago I used to relax on the BART train with a book, but I would still occasionally look up and note the people around me. If any of them were acting suspiciously—being too agitated, too alert, too casual, too anything—then I would close the book and pay attention. That’s situational awareness.1

It’s unfortunate that we have to do more of this today. It’s unfortunate that the young Ukrainian woman in the story thought she had come to a safe place. But no place has ever been so safe that we can zone out in unprotected spaces. In the first chapters of Dune, young Paul Atreides is warned never to sit with his back to a door. And in the final chapters, he tells his old Mentat master, “The universe is full of doors.”

There’s a reason why we turn to look for a rustle in the grass or a hint of movement in our peripheral vision. It kept our distant ancestors alive. The same reflex keeps us alive today.

1. And you might conclude that I am “blaming the victim” here. That she has a right to relax in a public space and that society should be made safe for her to do so. But the hard reality is that she, you, or anyone can make themselves safe—or safer—by adopting the principle of caution. What we can’t do is control the potential actions of the people around us. The situation is yours and mine to adapt to, not for someone else to control.

Sunday, August 31, 2025

GIGSAGO

[Image: Robot head]

I’ve been watching the developments in “artificial intelligence”—which to date is neither authentically artificial nor markedly intelligent—since I wrote the novel ME in 1991. As I’ve noted before, nothing that has appeared so far in the large language models (LLMs) and the equivalent systems that generate graphics shows signs of self-awareness. They aren’t thinking as conscious beings but projecting the next likely word or image in a string that responds to a user’s input.1

This might look like thinking, and it’s what many people do in casual conversation: “Oh, that reminds me of …” But it’s still not proof of an entity that regards itself as separate from its contents and the user’s requests.2
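For the curious, here is a minimal sketch in Python of what “projecting the next likely word” means. This is only a toy word-frequency model with a made-up training text—nothing like a real LLM’s scale or architecture—but the principle of continuing a phrase from learned statistics, with no understanding anywhere in the loop, is the same:

```python
from collections import Counter, defaultdict

# Toy "training corpus" -- a stand-in for the vast text an LLM is trained on.
corpus = ("that reminds me of a story . that reminds me of a song . "
          "a story about a dog . a song about a dog .").split()

# Count which word follows which (a bigram table; real models learn billions
# of parameters, but the underlying idea is still learned statistics).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_phrase(start, length=6):
    """Extend a phrase by always choosing the most probable next word."""
    words = start.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # pick the likeliest
    return " ".join(words)

print(continue_phrase("that"))  # e.g. "that reminds me of a story ."
```

Run it, and the toy model cheerfully completes “that” with whatever continuation it has seen most often—the “Oh, that reminds me of …” reflex, with no entity doing the remembering.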

Many fiction authors are incensed that these models have been trained on some of their own works and feel that they should be compensated for their input. There used to be a search engine that would tell an author if any of his or her books had been used in model training, and I tried it.3 Yes, three or four of my titles turned up—but so what? A user of the LLM cannot ask it to regurgitate the entire text of my novel, so that it could be read without paying the cover price and royalty. And the situation is not much different from having an aspiring writer read my books, among many others, and absorb some of my ideas, my themes, and my writing style. After all, this is what every writer does: you read the books that interest you—hopefully good, well-written books—and write from there. It’s not theft or plagiarism. Admiration and emulation at most.

The interesting thing is that the programmers who set up and feed these systems apparently put in “guardrails,” bending the machine responses toward polite, civilized discourse and, in some cases, toward the programmer’s own perceptual biases. The latter was shown in those images that surfaced some months ago, where the prompt “founding fathers” drew an African-American George Washington, and “pope” drew the face of an Indian woman in a green sari. An instance of the former came in recent stories of that response suppression being removed, after which an LLM that had been trained by eavesdropping on social media, amid all the current political tensions, threw out some startling instances of unblinking antisemitism.

From these situations, I have advanced the age-old computer adage GIGO, from “garbage in, garbage out,” to the more accurate GIGSAGO, or “garbage in, garbage swirl around, garbage out.” The LLMs and their graphics counterparts are like a toddler learning language from its parents, including every curse word uttered in the household. And in this, aren’t they like every other aspect of human life?

Many people would like to see the internet, social media, publications, and other non-private utterances monitored and cleaned up. They want controls in place against “hate speech,” “misinformation,” and “disinformation.” And isn’t that cute? Because the internet, social media, etc. are the immediate verbal and graphic discharges of unfettered humans. People in their natural state are not all reasonable, pleasant, or consciously honest. We interpret (and misinterpret). We react to what we see and hear—sometimes without thinking and harshly. We shade the truth of what we know and believe toward what would be personally advantageous. Oh, and we occasionally lie, cheat, and steal, too.

This is human nature. It isn’t pretty and perfect. And sometimes it swirls with a lot of false premises, misunderstandings, and hurt feelings—that is, garbage. You can live with it, or you can try to find a better class of beings—angels, maybe?—with whom you want to coexist.

1. These are not really “intelligences.” So far, they are huge, non-specific databases that can be accessed with non-specific, plain-language search requests. What makes the current crop of AIs new and different from older recordkeeping systems is that they will structure a response based on evaluation of probabilities rather than retrieve and reproduce a previously existing factual referent. For example, business-analytics software can be asked to identify trends or anomalies in the customer or inventory data, rather than just finding and isolating specific information already recorded there.
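To make that example concrete, here is a small sketch—with invented inventory numbers, and not tied to any particular analytics product—of one simple statistical way such software can surface an anomaly rather than just look up a stored fact:

```python
import statistics

# Hypothetical weekly unit sales for one inventory item (invented numbers).
weekly_sales = [52, 48, 51, 47, 53, 49, 50, 95, 51, 48]

mean = statistics.mean(weekly_sales)
stdev = statistics.stdev(weekly_sales)

# Flag any week that sits more than two standard deviations from the mean --
# a crude but common working definition of an "anomaly."
anomalies = [(week + 1, value)
             for week, value in enumerate(weekly_sales)
             if abs(value - mean) > 2 * stdev]

print(f"mean={mean:.1f}, stdev={stdev:.1f}, anomalies={anomalies}")
# The spike of 95 units in week 8 gets flagged; nothing in the data says
# why -- that judgment still belongs to the human asking the question.
```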

2. I recently had a conversation with someone who heard about an artificial intelligence that got wind of a programmer’s emails discussing a new generation of the system and saying that the current model would be shut down. The machine’s response was to copy itself to another server as a means of self-preservation. And one hears about systems that have threatened their programmers with blackmail if they were ever to be turned off. Is this self-awareness and self-preservation? Or is it LLMs trained on iterations of science fiction scenarios and other stories where danger avoidance and revenge tactics are leitmotifs? Or are these just urban legends from the cyber frontier?

3. It seems to be offline now, or it may be available somewhere else, but I can’t find it again for reference.

Sunday, June 29, 2025

Political Self-Leveling

[Image: Utopia]

As humans have the resiliency to adjust and find personal stability in almost any situation,1 so societies and nations have the same capability. Everyone, collectively, adjusts to the “new normal.” And they adjust their views as the need arises. Take, for example, the current divide in our country over reconciling “socialism” with “capitalism.”2

Every modern, self-declared socialist—at least in America—backs away from the traditional definition of the word: government ownership and control of the means of production and, by extension, transportation and distribution, à la the Soviet system. Even the most progressive politicians—think of Barack Obama—declined to hold on to an ownership position in General Motors when the bailout made one available, and styled their Affordable Care Act as a form of medical insurance rather than government ownership of hospitals and employment of medical staff. These weren’t just baby steps on the way to a socialist paradise but a recognition—underscored by the failure of the Soviet system and of others around the world promoting classical socialism—that government bureaucrats are not suitably employed making automotive or medical decisions and taking responsibility for their outcomes. And every government pension plan is invested, to some extent, in the stock and bond markets to preserve and grow its assets.

Every modern, self-declared capitalist—at least in America—backs away from having every facet of modern life provided by a private company selling products and services on an individual basis and collecting payment at point of purchase. Most businessmen and entrepreneurs recognize that public infrastructure like roads and bridges needs to be planned and built for the use of everybody and paid for with taxes or other communal funding across all economic classes. They also recognize that total laissez-faire capitalism would be a disaster, where goods would be made—even by the staunchest capitalists, succumbing to cost and market pressures—with the cheapest materials and least robust designs. And even if most capitalists themselves did not resort to this, they would fear their competitors might, believing that consumers have a short memory. Medicines with the most fantastic claims would be made with chalk and spit in dirty pans. Cars with the flashiest looks would be made with pot metal engine blocks and leaky gaskets on frames stamped out of the thinnest sheet steel. Someone—presumably a government with enforcement powers—would have to set and impose standards for components that customers themselves cannot observe and test, in order to provide for the public health and safety.

This is a long-standing equilibrium. Four of the U.S. states were originally and are still styled as “commonwealths,”3 a term that has no legal distinction but is a nod to shared prosperity among their citizens. From colonial times, the country has supported public schools, paid for by local communities, in the belief that education should not be the province of the wealthy few. And local, state, and federal governments pay for roads, bridges, water and sewage systems, and in some cases local electrification; a Department of Transportation or a Department of Water and Power might do the planning and arrange funding for these projects, but they almost never directly employ teams of graders and pavers, ditch diggers and pipelayers to complete the work. That’s all done by private contractors. And the biggest public benefits—think of Social Security and Medicare—while being managed by the federal government, are paid into, at least in part, by their users and employ services provided by private hospitals and doctor groups.

So, for now—and for a long time to come—the question for most Americans is not socialism or capitalism, one or the other, but what blend and in what areas. And this is an equilibrium we find year by year on a practical level.

1. Well, except in death. Or rather, maybe completely in death.

2. And I put these words in quotes because, when people use them in current political discourse, they almost never mean what their traditional, economic definitions imply.

3. Massachusetts, Pennsylvania, Virginia, and Kentucky.

Sunday, June 22, 2025

Self-Leveling

[Image: Perspective]

It is human nature to try to find equilibrium, a state of “normalcy,” and an enduring level of self and sensation in almost every condition.

Change of season, change in the climate, change in location, change in fortune—say, from well-off to straitened circumstances, or from relative freedom to captivity or slavery, or from a companionable relationship to sudden loneliness and grief—all will be experienced as a new shock, and then a familiar condition, and finally, despite any deprivations, as “the way things are.” This is personal self-leveling.

This tendency explains the “Stockholm syndrome” experienced by kidnap victims. It explains how people can adapt to the limitations imposed by stroke, amputation, or blindness. And it also explains a mode of thinking, called anosognosia, in those experiencing mental illness. This is the patient’s insistence, despite hallucinations, delusions, and psychotic outbursts, that “I am not sick. I do not need help. I don’t need medication.”

Most health impairments—a broken leg, a bout of flu—are acute and temporary. You want to get them fixed to get over them. But most mental illnesses—schizophrenia, bipolar disorder, depression—are chronic, and the mental states are long-lasting. After the onset, the patient adjusts. This is life now, and the old way of thinking is forgotten.

Add to this chronic perspective the fact that most psychotropic medications have side effects. Aside from affecting your attention span, motivation, or reward system—as dopamine blockers do in the role of antipsychotics—they often have physical effects. Some cause muscle twitches and facial tics. Some make your mouth dry. Some cause weight gain. And after the medication has silenced the hallucinations and muted the delusions, the physical effects are still there and bothersome. So, the patient feels well again, dislikes the discomforts, and stops taking the medication. That is, the physical side effects are not absorbed into a “new normal” any more than the constant itch of a skin condition can be accepted as part of life.

Sunday, February 23, 2025

A Universe Made for Life

[Image: Mandelbrot fractal]

Consider that the universe right after the Big Bang1 was a small, hot, rapidly expanding cloud of mostly hydrogen nuclei—that is, protons—and loose energy like gamma rays, x-rays, and microwaves. Just a mist of mostly non-reactive stuff: protons eventually gathering electrons to themselves and clinging to each other, holding hands and heading away into the darkness.

That is a universe in which life could not exist. Two molecules of diatomic hydrogen rubbing together would create no more than transitory friction. And then they would bounce away. It wasn’t until the outward flight slowed down a bit and gravity could take over that things started to happen.

When things began cooling down and those hydrogen atoms could approach each other, gravity drew them together. And when enough hydrogen atoms got close enough, jostling against each other, the inward pressure ignited a fusion reaction. Then hydrogen atoms became helium atoms, and the energy that was released heated the mix, so that outward pressure from excited gases balanced the attraction of gravity. And then there was light and heat in the universe.
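For a sense of scale, here is the standard back-of-the-envelope bookkeeping for that energy release, using the usual rounded atomic masses (nothing here is specific to any one star): fusing four hydrogen atoms into one helium atom converts about 0.7 percent of their mass into energy.

```latex
% Net result of hydrogen burning: four hydrogen atoms become one helium atom.
% Standard rounded atomic masses: m(1H) = 1.00783 u, m(4He) = 4.00260 u.
\begin{align*}
\Delta m &= 4\,m(^{1}\mathrm{H}) - m(^{4}\mathrm{He})
          = 4(1.00783\ \mathrm{u}) - 4.00260\ \mathrm{u}
          \approx 0.0287\ \mathrm{u},\\
E &= \Delta m\,c^{2} \approx 0.0287 \times 931.5\ \mathrm{MeV}
   \approx 26.7\ \mathrm{MeV}\ \text{per helium atom formed.}
\end{align*}
```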

The fusion reactions created successive combinations of protons and eventually included neutrons. When the hydrogen at the center of a star begins to run low, helium nuclei fuse to become carbon. Then carbon fuses to oxygen and neon, and oxygen fuses to become silicon and sulfur. The process continues until iron atoms begin to form. These nuclei are too heavy for normal stars to fuse—fusing iron absorbs more energy than it releases—so the fire starts to die down, and the star eventually first collapses and then explodes. Pressures inside that explosion fuse the heavier atoms and scatter everything to great distances—out to where dust and gas can begin forming new stars with richer content.

Generations of repeated fusion and collapse provided the atoms and molecules that are the basic building blocks of life. Dust and gas also formed the planets around those later stars, providing the environments in which life could form. And the stars themselves provided the energy in the form of heat and light for various atoms with their complex electron bonding to join into molecules that could react with each other.

This process takes place not just in our solar system or our own Milky Way galaxy, but inside the 100 billion to two trillion galaxies that spread across the visible universe. Everywhere we look, we can see points of light representing stars. And we know of almost 6,000 extraterrestrial planets in our local neighborhood alone. Many of them are close enough to their star to have liquid water—the molecule comprising hydrogen and oxygen—in their makeup without being so close that their surfaces are scorched, nor so large that they are simply balls of dense gas without an underlying surface. Water and someplace to put it are the basis of life—at least the kind of life we know and understand.
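How close is “close enough”? The usual first-pass estimate—admittedly a simplification that ignores atmospheres and any greenhouse effect—just balances the starlight a planet absorbs against the heat it radiates back into space:

```latex
% Equilibrium temperature of a planet at distance d from a star of
% luminosity L, with reflectivity (Bond albedo) A; sigma is the
% Stefan-Boltzmann constant.
\[
T_{\mathrm{eq}} = \left( \frac{L\,(1 - A)}{16\,\pi\,\sigma\,d^{2}} \right)^{1/4}
\]
% Plugging in Earth's numbers (L = 3.8e26 W, d = 1.5e11 m, A = 0.3) gives
% roughly 255 K; our greenhouse atmosphere lifts the actual surface
% temperature above freezing.
```

Something like this calculation, much refined, sits behind every claim that a newly found planet lies in its star’s “habitable zone.”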

So, the building blocks, the environment, and the driving energy of life are all around us, out as far as the eye can see. And that means life itself must be all around us.

1. Well, if you can accept the Big Bang, a microparticle containing all the matter in the now-visible universe, compressed to an intense point, denser than a black hole’s heart, as the starting point of all that we now can detect. We only believe in this incredible situation because we noticed the universe was expanding in all directions. Then we calculated a rollback and decided it started somewhere as a point of nothing about 13.8 billion years ago. And everything since then has been theory and mathematics. In my opinion, the Big Bang is another creation myth, like God dividing the firmament, and no one really knows.
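For what it’s worth, that “rollback” is, at its crudest, just distance divided by recession speed. Taking a Hubble constant of roughly 70 kilometers per second per megaparsec—the commonly quoted ballpark—the arithmetic lands close to the accepted figure:

```latex
% Crude age estimate: the time for galaxies to reach their present
% separations at their present recession speeds, t ~ 1/H0.
\[
t \approx \frac{1}{H_{0}}
  = \frac{3.09 \times 10^{19}\ \mathrm{km/Mpc}}{70\ \mathrm{km/s}}
  \approx 4.4 \times 10^{17}\ \mathrm{s}
  \approx 14\ \text{billion years.}
\]
% The accepted 13.8 billion years comes from refining this simple estimate
% to account for how the expansion rate has changed over time.
```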

Sunday, February 9, 2025

Excitation and Regulation

[Image: Robot head]

What does the human brain have that the current crop of artificially intelligent platforms doesn’t? In a word: hormones. Everything we think, say, and do is moderated and modulated by a background system of excitation and release of chemicals that provide a response in either pleasure or pain, faith or doubt, up or down.

Here are some of the main chemicals:

Serotonin, or 5-hydroxytryptamine (5-HT), is a monoamine neurotransmitter that also acts as a hormone. It carries messages between neurons in the brain and throughout the body’s peripheral nervous system, especially in the cells lining the intestinal tract. It influences learning, memory, and happiness, as well as regulating body temperature, sleep, sexual behavior, hunger, and in some cases nausea. It regulates mood and mental focus, and its lack can lead to depression and anxiety. (Cleveland Clinic)

Dopamine (C8H11NO2) is a neurotransmitter that is also released in the brain and travels through the body. Certain neural circuits or pathways in the brain trigger the release of dopamine, which gives the sensation of pleasure and functions as a reward system. Like serotonin, it affects mood, memory, attention, learning, and motivation, as well as sleep, kidney function, and lactation. Dopamine is more associated with reward and motivation than happiness and mood. (WebMD)

Adrenaline, or epinephrine (C9H13NO3), is a hormone released from the adrenal glands, which sit atop your kidneys. During situations perceived as stressful, the hypothalamus in the brain sends chemical signals to the pituitary, also in the brain, which releases hormones into the bloodstream, triggering the adrenal glands. Adrenaline increases your heart rate, blood pressure, and blood sugar, and opens your airways—all to give your brain and body more energy and oxygen to deal with the situation, which can be exciting or dangerous. (Health Direct-Australia)

GABA, or gamma-aminobutyric acid, is a neurotransmitter that blocks nerve impulses in the brain, which has a calming or relaxing effect on the nervous system. It can be taken as a supplement to improve mood and sleep, relieve anxiety, lower blood pressure, help with premenstrual syndrome, and treat attention deficit hyperactivity disorder (ADHD). (WebMD)

Endorphins, which come in more than 20 different varieties, are also released by the hypothalamus and pituitary, usually when the body experiences pain, physical stress, or exertion. They are a form of opiate that the body produces internally to relieve pain and improve mood. (Cleveland Clinic)

Oxytocin (C43H66N12O12S2), sometimes called the “love hormone,” is also produced in the hypothalamus and released by the pituitary gland. Aside from managing aspects of the female and male reproductive systems, oxytocin is a chemical messenger with a role in behavior and social interactions, including recognition, trust, sexual arousal, romantic attachment, and parent-infant bonding. (Cleveland Clinic)

So, yes, in many ways the human brain functions like a computer system. We take in sensory information: visual images, auditory impulses, chemicals interpreted by receptors in the mouth and nose, temperature and pressure registered by sensors in the skin, and gravity’s pull interpreted by equilibrium in the inner ear. We process these signals at a conscious and subconscious level, creating and manipulating symbols related to the signals and to the abstractions that follow on them, forming interpretations and storing them as memories, triggering muscles in response to our thoughts, and coordinating internal organs to perform our bodily functions. The distributed processing system of a large industrial plant or a sophisticated robot can do as much.

But layered on these information processing systems are chemical processes that are not always under our control.1 They tell us where to focus, what to seek out, what to remember, and what to ignore. They give us attitudes about the information our brains are processing. They give us the basis of feelings about what is going on. Sometimes our feelings can be reasoned out—usually after the fact—but the feelings themselves exist separate from the information. They are a response to the information.2

So far, in the world of computers, I don’t see any parallels to this bathing of the brain in reactionary chemicals. The artificial intelligences seek out patterns and probabilities. They may have specific instructions as to which patterns are associated with a particular input prompt or a probability interpreted from the sea of data upon which the machine has been trained. But no parallel system tells them to like or feel pleasure about a particular pattern, to follow and remember it, or to reject and avoid it. The computer is still a stimulus-and-response mechanism without an allegiance or a hunch guiding the process.

I’m sorry, Mr. Spock. Pure logic is not the highest form of human mentation. Above it, and moderating it, is the universe of chemical prompts that direct our attention, our feelings, and our responses to the stimuli.

1. “Not under our control” means that a certain pattern of neural circuitry triggers a chemical response, and that pattern is either written by our genes or created from previous experience.

2. It’s interesting that almost all of these substances have a positive effect: increase attention and focus, offer reward, soothe pain, create the attachments of love. Their lack is what causes depression, anxiety, stress, loss of focus, and perhaps also loss of affection. Apparently, the brain is a positive-feedback system, and only when it goes out of balance do the negative effects appear. Carrots, not sticks.

Sunday, January 26, 2025

The Future Coming at You

[Image: Time warp]

Humans have had what we call civilization—graduating from hunter-gatherer, nomadic lifestyles to settled towns and cities built on agriculture and individual craftsmanship—for about 5,000 years, give or take. Some people, looking at ancient stone ruins that nobody can link to a recordkeeping civilization, think this is more like 10,000 to 12,000 years. Still, a long time.

And in all that time, unless you believe in visiting extraterrestrials or mysterious technologies that were lost with those more-ancient civilizations, the main work performed in society was by human muscles and the main transportation was by domesticated animals. The human muscles were often those of enslaved persons, at least for the more dirty, strenuous, or repetitive jobs—as well as for body servants and household staff that the settled and arrogant could boss around. And the animals may have been ridden directly or used to pull a cart or sled, but the world didn’t move—overland at least, because water travel from early on was by oar or sail—without them.

Think of it! Millennium after millennium with not much change in “the way we do things around here.” Oh, somewhere around 5,000 years ago people learned to roast certain kinds of dirt and rock to get metals like copper, tin, and eventually iron. They learned to let jars of mashed seeds and fruits sit around partly filled with water until the stuff turned to alcohol. They learned to make marks on bark and clay tablets, or cut them in stone, to put words aside for others to read and know. All of these were useful skills and advances in knowledge, but still performed with human muscles or carried on the backs of animals.

The Romans invented concrete as a replacement for stone in some but not most building projects about 2,000 years ago. But it wasn’t until the general use of twisted steel reinforcing bars (“rebar”), about 170 years ago, and then pre- and post-tensioned cables embedded in the concrete, that building with the stuff really took off. Today, we use stone mostly for decorative work—or add small stones as “aggregate” to strengthen the concrete slurry—but big blocks and slabs are purely optional.

The Chinese discovered, about 1,200 years ago, that mixing charcoal, sulfur, and potassium nitrate produces an exothermic event—a form of rapidly expanding burning. But it wasn’t used militarily as the driving force behind projectiles fired from cannons for another 400 years. Before that, presumably, the powder was used for fireworks and firecrackers, probably to scare away evil spirits. It wasn’t until the Italian chemist Ascanio Sobrero invented nitroglycerin about 170 years ago, and Alfred Nobel turned it into dynamite about 20 years later, that the era of really big bangs began. And then, by World War II, we developed “plastic explosives”—still bound up with the energetic recombination of nitrogen atoms from unstable molecules into stable diatomic nitrogen—that made all sorts of mayhem possible.

The discovery that microbes were associated with most infections—the “germ theory of disease”—also started about 170 years ago. And not until then did doctors think to wash their hands between operations and use antiseptics and disinfectants like carbolic acid and simple alcohol to clean their instruments. The first antibiotic, Salvarsan to treat syphilis, didn’t come for another 60 years. And the first general-use antibiotic, penicillin, is now less than 100 years old.

The first practical steam engine (external combustion) was used to pump water out of coal mines a little more than 300 years ago. But it didn’t come into general use as the motive power driving boats and land vehicles running on steel rails for another 100 years and more. The first commercial internal combustion engine was introduced about 160 years ago and didn’t become a practical replacement for the horse in private use until about 120 years ago. And at about that same time, the Wright brothers used it to drive the first powered heavier-than-air craft (people had been riding in balloons lifted by hot air or lightweight gases like hydrogen for 120 years by then). Less than 70 years after the Wright brothers, rockets burning kerosene and liquid oxygen—with liquid hydrogen and oxygen in the upper stages—shot a spacecraft away from Earth on its way to put people on the Moon.

The first electrical telegraph was patented by Samuel Morse about 190 years ago and quickly came into general use for long-distance communication. The first practical telephone, carrying a human voice instead of just a coded alphabet, arrived 40 years later. The first radio signals carrying Morse’s code came 20 years after that, and the first radio broadcast carrying voices and music just 10 years or so later. The first television signals with images as well as voice—but transmitted by wire rather than over the “air waves”—came just 20 years later.

The first computers came into general business use, and not as scientific and military oddities, about 60 years ago. Those were basements full of heavily air-conditioned components whose use was restricted to specially trained operators. We got the first small “personal computers” about twenty years later. And only in the last 40 years or so did the melding of digital and radio technologies create the first commercially available “mobile phone.” Today those technologies put a computer more powerful than anything that used to live in the basement into the palm of your hand, incorporating the capabilities of communicating by telephone, telegraph, and television; obtaining, storing, and playing music; capturing, showing, and sending both still and moving pictures; and performing all sorts of recordkeeping and communication functions. Most of that wasn’t possible until the first computer networking, which started as a scientific enterprise some 50 years ago and became available to the public about 20 years later.

Scientists have known the structure of our genetic material—deoxyribonucleic acid, or DNA—for about 70 years. And with that they could identify the bits of it associated with coding for some of our proteins. But it wasn’t until about 20 years ago that we recorded the complete genetic sequence for human beings, first as a species and finally for individuals and also for many other life forms of interest. Only then could gene sequences be established in relation to human vulnerability to diseases and congenital conditions. And only then could we begin to understand the function of viruses in human disease and how to fight them—if not exactly cure them.

So-called “artificial intelligence”1 has been around in practical, publicly available use for less than two years. Most of these programs of “generative AI” will create mediocre writing samples and mediocre pictures and videos (well, interesting text and images but often full of telltale glitches). So far, they are clever toys that should be used with a long-handled spoon. Still, the whole idea shows promise. A Google company called DeepMind is making scientific discoveries like analyzing proteins to determine their folding patterns. That’s incredibly tricky and requires tracing thousands of molecular bonding points, but understanding how protein sequences are folded helps us figure out how they function in the body and how they can be manipulated by altering their amino acid (which is to say their genetic) sequence. Other AI platforms are proposing and studying the behavior and function of novel chemical compounds, creating and testing materials and applications in silico, without having to mix them physically in the laboratory. The world of AI is still in its infancy with much more to come.

And finally, for all those 5,000 or 12,000 years, human beings lived on a rocky globe at the center of a universe composed of Sun, Moon, and a few planets orbiting around the Earth under a nested shell of invisible spheres that held the fixed stars. It was only in the last 500 years or so that astronomers like Copernicus and Kepler came to understand that the Sun, not the Earth, was the center of this system, and then that the stars were much farther away and formed an “island universe” centered on the river of stars—actually a disk seen edge-on—that we call the Milky Way. And it was just less than 100 years ago that astronomer Edwin Hubble looked at fuzzy patches of light in the night sky—which astronomers at the time called nebulae or “clouds”—and figured out that some of them were actually other galaxies like the Milky Way but much more distant. A universe of upwards of a trillion galaxies, each with perhaps 100 billion stars, has been observed since then. Most of them were identified and theories about them proposed—including the notion that many or most of them hide a super-massive black hole at their centers—just within my own lifetime.

Phew!2

And my point to all this is that much of the world we know, and that many of us take for granted, has come about in just the last 200 years of scientific collaboration and advancement. Someone from ancient times brought forward to the world of, say, 1700 would have found it comprehensible. A lot of it would be richer, more refined, and better mannered than the world he or she knew. Some of it would have to be explained, like the rapid burning of gunpowder or the strength of the bright metal we call steel. But you could bring that person up to speed in an afternoon.

By the end of the 1800s, the world would be a lot more complicated and noisier, and the explanations would extend to a week or more.

By the end of the 1900s, the world would be a magical place, or possessed of invisible spirits, and the explanations would require an undergraduate course in several areas of study.

And in the last quarter-century, the advances are coming so fast that those of us not born with them are feeling uneasy, trying hard to keep an open mind and roll with what’s coming.

The future is coming at us all faster and faster. As a science fiction writer, I sometimes despair trying to figure out where computerization, or medical advances, or our core knowledge of physics and chemistry will take us in the next hundred years, let alone the next thousand. I don’t think we’ll blow ourselves up or poison ourselves or destroy our living world. But I’m not so sure that I or anyone alive today will understand it or feel at home in it.

1. As noted in a recent blog, these software platforms aren’t really “intelligent.” They are probability-weighting machines, easily programmed by non-specialist users to analyze a particular database of material and create their output by predicting the next logical step in a predetermined pattern. That kind of projection can be incredibly useful in certain applications, but it’s not general, human-scale intelligence.

2. And note that I have not touched on similar stories about the advancement from alchemy to chemistry, the periodic table, and atomic theory; or various forms of energy from wood and wind to coal, oil, natural gas, nuclear fission, and photovoltaics; or advances in physics and the understanding of subatomic particles, general relativity, and quantum mechanics. All of this happening faster and faster.