Sunday, May 23, 2021

Nothing on My Mind, Again

Black square

It appears that the concept of zero, and the negative numbers that precede it—follow from it?—came originally from India and were brought into the Western world by Arab traders. All of this happened in the 7th century AD, long after the fall of the Western Roman Empire and the earlier eclipse of the ancient Greek scientific culture. The Greeks, who apparently based their mathematics on geometry, never considered negative numbers or zero, as all their measurements in space were positive. The Romans, who were practical engineers and not theorists, drew straight lines and simple arches with positive numbers, which they identified with alphabetic notation. So, once again, I’m wondering where all this nothingness, this absence, came from.1
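The gap can be made concrete in a few lines of Python. This is my own illustration, of course, not anything the ancients wrote down: a converter to Roman-style alphabetic notation in its conventional modern form. The tell is in the first line of the function body: the system simply has no symbol for zero and no way to write a negative quantity.

```python
def to_roman(n: int) -> str:
    """Convert a positive integer to Roman numerals.

    The notation has no symbol for zero and no way to
    express a negative number -- hence the guard clause.
    """
    if n <= 0:
        raise ValueError("Roman numerals cannot express zero or negatives")
    symbols = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
               (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
               (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, letter in symbols:
        while n >= value:        # greedy: take the largest symbol that fits
            out.append(letter)
            n -= value
    return "".join(out)

print(to_roman(2021))  # MMXXI
```

A Roman could write MMXXI easily enough; ask him to write "no sheep at all" as a number and the notation fails him.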

Modern life is permeated by numbers. This is part of the 17th-century scientific revolution—think of the countdown to zero in a rocket launch—and also part of the social revolution in personal literacy, which began with Gutenberg-style printing in the 15th century, and the economic revolution in finance and banking, which was started by the Italians and their letters of credit at about the same time. Today, almost everyone has a bank account, a checkbook, a credit or debit card, and a line of credit. We are adding and subtracting numbers all the time, and we all—or most of us—watch as that dreaded lower limit, zero balance, or even a potential overdraft, a negative number, approaches. We also watch clocks made of numbers and count the hours negatively until quitting time.

We think easily in terms of null and negative arithmetic. “How many supermodels did you date last year?” Zero. “How are you doing at the blackjack table?” Down by five hundred bucks.

Did the ancient Greeks or Romans not have these and similar, context-sensitive conversations? Well, probably. But not in a mathematical framework. “How many sheep do you have?” None—I don’t keep sheep. “What did you win betting on the chariot race?” Oh, nothing—I lost.

In a world not yet conscious of modern mathematical concepts, a person focused on what was there, in existence, in front of their face. Yes, they could do subtraction: I had five apples and gave you two; now I have three apples. But the concept of zero was simply the concept of not having, not being, not knowing. It didn’t have a number. The idea of having fewer than zero apples, because you gave away more than you had, didn’t arise very often. And if you owed a debt, you didn’t think of it as a negative number in your bank balance but instead as a positive number that you eventually had to pay to someone else.

Are we better off for being more sophisticated about all this? Certainly, our kind of mathematics has enabled us to calculate with both positive and negative forces, compare tradeoffs, and create simulations of complex systems. It was modern mathematics that took us to the Moon and Mars, and some variant will take us to the stars. Zero is a real number, and negative numbers have real meaning, when you’re making these calculations.

For the ancients, the world was made up of substances. Their elements were earth, water, air, and fire. Even though the last two are gases, and air itself is invisible, anyone who has taken a deep breath and blown it out—or blown into a trumpet or a flute—would notice air’s fluid nature. Anyone who has watched fire curl around a candle wick or tremble in the wind could see that it is also semi-fluid.

The ancient Greeks and Romans never had mountains high enough that they could notice the air getting thinner the higher you went. They would not have come to the obvious conclusion, then, that at some point such thinness might lead to nothing at all, a vacuum. For them, the space above the Earth was a series of concentric spheres that held and propelled the orbits of the Sun, Moon, planets, and stars—and there might as well be air between those spheres as nothing at all. Only with modern aircraft and rocketry do we know that the air runs out about twenty miles above the Earth’s surface and all the rest is empty space.2

For the ancients, these substances were solids, not divided into tiny bits, and then even tinier bits, until you arrive at subatomic fragments too small to see or weigh. Democritus of Abdera did theorize about atoms and empty space, but he probably thought of those atoms as jostling around each other like marbles in a bag. The idea that atoms themselves are mostly empty space occupied by subatomic particles, and that everything we can see and touch is a lot more nothing than something, is a concept out of modern physics. On both the quantum and cosmological levels, we have to get our heads around the idea of nothing, non-being, emptiness on a mind-boggling scale.3

The poor human brain evolved in a world full of—and was adapted to deal with—real things. We survived by knowing about the tangible environment and manipulating objects and forces that could hurl a spear to bring down a deer or gather and carry roots and berries to a place of familial consumption. Our peripheral vision is cued to a trembling in the bushes—even if it’s only the wind, because it just might be a predator stalking us or a human enemy trying to ambush us. We are primed to experience and work with what’s there, and not what’s not.

But nothing is on our minds now. And its reach is growing, especially since the 20th century, when existentialist philosophers began to question why humans, the world, and everything in it even exist. Apparently, for them, nothing is the default state of the universe and the only thing that doesn’t have to be explained. So, at least for the French avant garde, our thinking on being and nothingness has come full circle.

Forgive me for occasionally having nothing on my mind. It’s been a slow week, and I’ve run out of ideas.

1. See About Nothing from June 25, 2017.

2. Except that modern physics insists on filling space with somethings. For example, Stephen Hawking solved the problem of the apparent evaporation of primordial black holes by positing the instantaneous creation and mutual destruction of particles and anti-particles in a vacuum, going on all the time, invisibly, everywhere. When one of those pairs happened to spontaneously erupt along the event horizon of a black hole, one of the paired particles fell into the hole and the other drew out a quantum of energy in response. And so micro black holes disappeared over time. … Oh, hell! Hawking might as well have said that pixies ate them.

3. How big is an atom? I’ve read somewhere that if the nucleus of a small atom like hydrogen or helium were the size of a fly inside Notre Dame cathedral, then the electron shells and their potential orbits would occupy the entire enclosed space. And all the rest would simply be empty.

Sunday, May 16, 2021

Living with Ambiguity

Girl with magic box

The quantum mechanics conundrum of Schrödinger’s Cat1 is not an actual physics experiment but a famous thought experiment about the state of human knowledge and observation. Basically, it says that the universe goes on about its business and doesn’t reveal itself unless human beings, our active intelligence at work, actually stop and look. And sometimes, at least in the subatomic realm, the mere act of observing interferes with the outcome—as when the detection of a subatomic particle in flight by an instrument using a beam of photons interferes with either that particle’s position or its momentum.2 So some things, at the most remote scales, are truly unknowable.

I would posit that the ambiguity of quantum mechanics has a lot more to do with everyday life than we normally admit. We are always faced with situations where what’s going on and what we know about it are separated. For example, did I get the promotion? Somewhere on one of the upper floors someone, or a group of someones, knows who got the nod for the job, but there’s no way—at least, no ethical way—of finding out until they announce it. And if you find out before it’s announced, that will likely change the decision. Does she love me? She knows, or maybe she doesn’t know yet, but there’s no way for you to know until she declares herself by word or action. And your pestering her for an answer would change the relationship. Will the jury find me guilty or not? Again, the twelve members of the jury know, or soon will know, but you won’t find out until the verdict is read in court. And if you learned the verdict ahead of time, it would cause a mistrial.

In each of these instances, from the time the question arises until you open the lid and observe the cat, the question remains both “yes” and “no” at the same time. Both choices are in a state of superposition—at least as far as you are concerned—until you learn the answer and the two states are resolved into one. This is not a question of probability, although you can take odds or make bets with yourself about how you think the question will be resolved. But all of your weighing of factors and listing of pros and cons will not make a bit of difference when the question itself lies in the hands of others, of the management team, the girl, the jury … or the Geiger counter attached to the vial of poison.
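As a loose analogy—not real quantum mechanics, just a sketch of the epistemic point—you can model such a pending question in a few lines of Python. The answer is fixed somewhere out there, but for the asker both states coexist until the act of observation resolves them into one:

```python
import random

class PendingQuestion:
    """A yes/no question whose answer already exists 'out there'
    but stays hidden from the asker until observed."""

    def __init__(self):
        self._answer = random.choice([True, False])  # decided, but sealed
        self._observed = False

    @property
    def state(self) -> str:
        # Until you look, both outcomes coexist -- for you.
        if not self._observed:
            return "yes and no (superposed)"
        return "yes" if self._answer else "no"

    def observe(self) -> bool:
        self._observed = True  # opening the lid resolves the question
        return self._answer

q = PendingQuestion()
print(q.state)  # "yes and no (superposed)"
q.observe()
print(q.state)  # now one definite answer, yes or no
```

Note that all your weighing of odds changes nothing inside the object; only calling `observe()` does.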

This is the sort of ambiguity we have to live with all the time. In most cases, the superposition will resolve itself eventually. But sometimes the company’s fortunes change and the job is never awarded or announced. Sometimes the girl moves away or dies before she can accept or reject you. (And sometimes she says “yes” when what she means is “maybe” or “wait and see.”) Sometimes you get a hung jury, no verdict, or a mistrial. Some issues may never be resolved in your lifetime.

For example, I’ve always wondered about the true story of the Kennedy assassination. Did Oswald act alone out of disaffection, or was he a plant by the KGB after the embarrassments of the Cuban missile crisis and Bay of Pigs invasion? Did Jack Ruby kill Oswald out of patriotic sentiment, or was he sent in by the CIA to keep the lid on a foreign decapitation action that might have led to Congress declaring World War III? The entire Warren Commission report has been unsealed by now, years ahead of its originally scheduled release date, due largely to the Freedom of Information Act. The commission’s findings suggest that Oswald and Ruby both acted alone, and supposedly there was no evidence of a coverup or international involvement. Still, I wonder. Since that’s as far as the investigation went, despite 552 witness depositions, 888 pages of documentation, and 3,100 exhibits, we will never know who, outside of the persons immediately involved in the U.S., might also have taken part. So, in my mind, “Russian plot” and “angry gunman” remain in superposition, as do “CIA coverup” and “angry patriot.” At this point we will probably never know.

Another example of ambiguity is the mystery of the universe’s origin. When you rewind the expansion of the galaxies that we observe back over the 13 billion years of the universe’s calculated existence, you end up with a putative point, a tiny dense particle that exploded in the Big Bang. That is supposedly our cosmological creation story. But if you expand the observable universe from a single point to its current size, even allowing for everything to move at light speed, the calculated radius is smaller than the universe in which we actually find ourselves. This problem was supposedly corrected by the “inflationary period,” proposed by cosmologist Alan Guth in 1980, in which the whole shebang accelerated enormously in the first tiny fraction of a second after the Big Bang, so that it went from something with a radius of less than a subatomic particle to—and here various calculations give different answers—a cloud of matter somewhere between the size of a grain of sand and something on the order of nine meters in diameter. And then it all continued to expand normally from there.

A third example of the currently unknowable—although not for lack of trying to detect it—is the relationship of matter and energy in the observable universe. From the way that the stars in spiral galaxies spin around their center—as if they were painted on a disk, rather than freely orbiting in the void—it would seem that these galaxies have more gravitationally bound material in them than the matter that shines brightly as stars. A lot more, as in several times as much. This is the “dark matter” that plagues cosmology. Either galaxies contain much more dust, gas, and both central and primordial black holes than our observations account for, or the universe is permeated by particles that exert gravity but are invisible and undetectable in every other way. And then, the universe itself is not only expanding, as if still impelled by that initial Big Bang explosion, but also its speed of expansion is accelerating at an alarming rate. So either the vacuum of space contains a mysterious force that increases with space and distance—a “dark energy” that is otherwise undetectable in our immediate neighborhood—or we don’t understand the basic structure of the universe and the real nature of the effects we observe as “space,” “time,” and “gravity.”

We can theorize about these things, but until we create better instruments and take better measurements, I think we have to live with the ambiguity of not actually understanding the universe. Many possibilities are in superposition, and not all of them can be true.

And finally, on the human scale, is the matter of human life, spirit, and what may lie on the other side of death. Is there a God or not? Do we vanish at death, like a candle flame when it’s blown out, or does some part of us—soul? ghost? brain wave? personality? memory?—exist for a time or perhaps for eternity? And there you can theorize, rationalize, believe, or doubt all you want, but only the actual experience of death will reveal the answer. And by then it may be too late to do anything about it.

Given all of this, and the example of Erwin Schrödinger’s cat to begin with, I must remain comfortable with ambiguity. I must accept that some things cannot be known until they are revealed, that others may not be revealed in my lifetime, and that some may never be revealed to any of us, no matter how long we live.

1. For those who do not know it, you imagine putting a cat into a box with a vial of cyanide and a striking mechanism that will break the vial and kill the cat when triggered by a random event, such as the decay of a radioactive atom. Then you close the lid. You have no way of knowing whether the atom has decayed and the cat has been killed until you actually open the lid again. So, from your perspective, the cat is simultaneously in two different states—called a “superposition”—of being both alive and dead. This composite state is not resolved until you open the lid, and then the cat is either alive or dead. But all of this pertains only to you, as the observer; for the cat, the effects are more immediate.

2. However, this question of observational interference is not part of the Schrödinger’s Cat thought experiment.

Sunday, May 9, 2021

Predicting the Future

Immortal dream

Last week I wrote about the rising curve of human technology since the 17th century and suggested what it might mean for the future. Now I’m going to dip a toe in the perilous waters of future gazing and see where, in the short term, such technologies might lead us.

But first, a few looks back, to see how quickly things have changed.

When the Washington Monument was completed in 1884, it was capped with a small pyramidal casting made of aluminum, chosen because it was likely to be a good lightning conductor. At the time, aluminum was fairly rare as a metal. It was not more valuable than platinum, as some have suggested, but it cost about $1 per ounce, which was the typical daily wage of the average construction worker on the monument.1 Although aluminum is a fairly common element in the Earth’s crust, being bound with oxygen as alumina, Al2O3, found in the red earth bauxite, it took large amounts of electricity to drive off the oxygen—and electricity was in short supply in the late 19th century. Within fifty years, after the building of large hydroelectric dams in the U.S. West, aluminum became widely available—plentiful enough to dominate the skies with airframes made during World War II, and then cheap enough to make building siding, lawn furniture, and throwaway cans in the years after. Aluminum is the wonder metal of the 20th century, along with titanium and stainless steel.

And at the turn of that last century, according to the U.S. Department of Agriculture,2 41 percent of the American workforce was employed in farming. By 1930, that number had shrunk by about half to 21.5 percent; by 1945, to 16 percent; by 1970, to 4 percent, and by 2000, to less than 2 percent. Credit for the change goes to improved use of machinery and its overall efficiency; changes in land use, with larger “factory farms” and the loss of the picturesque but noncompetitive “family farm”; and the “green revolution” in the production and use of fertilizers and weed and pest controls, as well as the genetic modification of crops. All of those chemical and biological developments are still in play and will only increase in their use and effectiveness. So we can count on the next hundred years bringing us new hybrid crops and tougher, more robust, more nutritious food resources. The only limiting factors will be arable land and fresh water—and even the supply of those may change.

Okay, let’s start with water. Right now, humanity depends on two sources for the water it uses for drinking, bathing, toilet flushing, and irrigation: rain and snow falling from the sky and running off in the local river; and ancient rains trapped in subsurface groundwater and aquifers that are tapped by wells. The more use we make of those aquifers—which collect slowly over the ages and drain quickly under pressure and pumping—the less we have for the future. But the world’s surface is three-quarters water, though most of it is laden with mineral salts and thus undrinkable and unsuitable for farming. We have known how to filter out the salts by reverse osmosis since the middle of the 20th century. We could live by this process today, except for the fact that it’s costly. But the cost is not so much in the plants themselves and their placement as in the energy they require. (And the cost has actually dropped for flash desalination, from $30 per cubic meter in the 1960s to about $1 in 2010.) If you have abundant electric energy to run the pumps, you can have all the desalinized sea water you can drink.
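Back of the envelope, that $1 per cubic meter is startlingly cheap. Assuming a fairly thirsty American household using about 300 gallons, roughly 1.1 cubic meters, a day—a round figure I am supplying for illustration, not a measured one—a fully desalinated supply works out to about a dollar a day:

```python
# Rough cost of running a household entirely on flash-desalinated water,
# at the ~$1 per cubic meter figure cited above. The daily-use number
# is an assumed round figure, not a measured one.
cost_per_m3 = 1.00               # USD per cubic meter (the 2010 figure)
household_use_m3_per_day = 1.1   # ~300 US gallons per day (assumption)

daily = cost_per_m3 * household_use_m3_per_day
print(f"~${daily:.2f} per day, ~${daily * 365:.0f} per year")
```

The energy to run the pumps, not the water itself, is where the real bill comes due.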

So the key to the future of our water supply, our agricultural irrigation, and our population growth in general is going to be energy. Right now, like it or not, our most abundant energy resource is fossil fuels: coal, oil, and gas. Coal is abundant—we have about a thousand-year supply in North America at current consumption—but bulky to move and messy to clean up. Oil and gas can be piped to their users, and while we recently thought we were running out of easily tapped reserves—the specter of “peak oil”—technological advances in the form of horizontal drilling and hydrofracturing of oil and gas shales have extended our future. But—thinking in terms of centuries rather than years or decades—one day we will run out. And, in the meantime, hydrocarbons are much more precious as a chemical feedstock than as an energy source. Wind and solar power, being diffuse resources dependent on adequate siting, will not replace hydrocarbons in our energy future—not unless we change the landscape, harvesting solar power in orbit and beaming it down to huge diode fields on the planet’s surface, as in my novel Sunflowers, or planting windmills in mechanical forests along every ridgeline, as if they were trees.

But what I call the “enterprise of science” is well aware of the energy problem. Physicists and engineers all over the developed world are studying its production and storage from many angles. Although fusion power, electricity from the deuterium in sea water and tritium bred from lithium, always seems to be ten years off into the future, always receding from our grasp, one day we will figure out how to produce it, even if we have to invent artificial gravity to make it work. And once we have the design and formula worked out, we can adapt and scale it for efficiency. Biologists are at work, too, trying to use our newfound genetic ingenuity to manipulate algae into growing and secreting lipids, or hydrocarbon substitutes, from water and sunlight without adding to—but rather subtracting from—the atmosphere’s carbon-dioxide burden.

And about that carbon dioxide, greenhouse gases, climate change? All of that is this year’s daily fright, like Malthusian overpopulation or the collapse of “peak oil.” Listen: climate has always been changing, and people have always adapted to the new conditions. When winters came early and lasted longer at the beginning of the Dark Ages, they migrated south. When coastlines shifted, they moved inland. You can say the difference today is we have billions invested in shoreline real estate that is too valuable to lose. But in the Bay Area, where I live, a lot of that shoreline was saltwater marsh a hundred years ago; in another hundred years, it may be saltwater marsh again. Shrug. Changing climate, like most effects of weather, is an inconvenience, not a catastrophe. Consider the inconvenience of having to shovel tons of snow repeatedly every winter and what it does to the economy to dig out homes and plow the roads after every blizzard. And on the plus side, pushing the snow line further north into Siberia and the Canadian tundra will open up new lands for agriculture.3

And speaking of the Reverend Thomas Malthus and his prediction that human population would rapidly outgrow agricultural resources, leading to worldwide starvation, that didn’t happen, did it? When I was growing up, we heard dire predictions about vastly overpopulated countries like China and India, where people were regularly starving. And Africa, where apparently people are still starving—although much of the famine appears to be genocide by political manipulation of the food supply. We have seen that, as countries develop economically and technologically, with a greater proportion of their population moving into the educated and skilled classes, the population and its growth rate tend to shrink. Much of Europe—at least among the demographic that populated Europe over the past thousand years—is now reproducing below its replacement level, figured at 2.1 children per woman. Japan has been below that rate for decades, and the U.S.—again for the population mix of the last century or so—is trending that way. When people become prosperous and educated, and their medicine saves most of their babies who would otherwise disappear into the infant mortality statistic, they have fewer children and generally treat them better, so they live longer, more productive, more satisfying lives. China, India, Africa, and South America will eventually catch up with this curve before the planet implodes.
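The arithmetic of below-replacement fertility compounds quickly. A toy projection in Python makes the point (illustrative numbers only, not a real demographic model, which would also account for mortality, migration, and age structure):

```python
def generation_sizes(start: float, fertility: float, generations: int,
                     replacement: float = 2.1):
    """Project relative generation sizes when each generation reproduces
    at `fertility` births per woman against a `replacement` benchmark.
    A deliberately crude sketch: each generation is simply the previous
    one scaled by fertility / replacement."""
    sizes = [start]
    for _ in range(generations):
        sizes.append(sizes[-1] * fertility / replacement)
    return sizes

# At 1.4 births per woman, each generation is two-thirds the size
# of the one before it.
print([round(s, 1) for s in generation_sizes(100.0, 1.4, 3)])
# -> [100.0, 66.7, 44.4, 29.6]
```

Three generations at that rate and the cohort has shrunk by more than two-thirds, which is why the pension arithmetic below gets interesting.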

Add in the advancements that will come with the genetic revolution in biology and medicine, and most of the medical problems we see today will fade away. We will find ways to target and repair cancer cells. We will resurrect failing hearts and brains through tissue repair. Organ repair and replacement will become a matter of manipulating your own stem cells, as in my two-volume novel Coming of Age, rather than receiving an organ donation—willing or not—from another human being, with a lifetime of immune suppressants to follow. Issues of congenital and developmental conditions, susceptibility to the environmental causes of degeneration and disease, and the mystery of differentials in health and fitness among people will unravel as we analyze, predict, and eventually control all the biological processes of life. People will then live a lot longer, with even richer, more productive lives.

Normally, you would then expect the Ponzi scheme of Social Security and Medicare—where you need more and more young people working to pay for the care of ever more retired parents and grandparents—to collapse. But I don’t expect this to happen, and not because I believe the U.S. Congress will pay back the money it has drained from these funds for other uses. With more automation of materials extraction, manufacturing, and the supply chain and infrastructure that support them, the need for human hands to dig, make, and trade things is rapidly diminishing.4 That trend is only going to accelerate with artificial intelligence, 3D printing, and other physical amplifications of the computer age. The question is not whether we will have enough people to work in the economy, but how people will work to support themselves in the cornucopia of food, goods, services, and entertainments that is going to be showered down upon them.

I once thought that some form of Universal Basic Income—a global and permanent government dole—would be necessary to replace the “Protestant work ethic” with which my generation was raised (or, as my mother would say, “No work, no eat!”). But people are inventive and creative, and a life of easy handouts is not part of human nature. I think, instead, that people faced with an economy of predictable, unexciting, machine-made and -supplied goods and services will return to valuing human artistry and craftsmanship, at least in the areas that interest them. Yes, you can get a basic, particle-board-and-veneer desk at IKEA, but you’ll pay more for something hand carved with a flourish from a renowned local craftsman. And yes, you can watch computer-generated movies on the widescreen most evenings, but you will still hunger to go out and sit in a theater with other human beings to watch real actors speak from story lines reflecting varied human thought.

I am not a Pollyanna. The future won’t be all rosy. There will be dislocations, disruptions, and growing pains in the new world into which we are venturing. But we’ve had those difficulties along the way already and survived them. Our life today is unimaginable to someone living two centuries ago. If you had told a 19th-century farmer that in the 21st century one person would do the work of 100 of his kind, he would have despaired and wondered how his children would survive. For him, the change would be a huge, looming, unsolvable problem. But now we look back and we can barely comprehend the drudgery and futility of his life, laboring sunup to sundown, hacking at the land with a horse-drawn plow, hauling buckets of water from the well and manure out to the field, just to feed his family and still have something left over to sell at market.

And come what may, people will still have imaginative and enticeable human spirits. They will still be able to look at a flower in the dawn light or the sea at sunset, breathe a sigh, and find a measure of contentment in the moment.

1. See The Point of a Monument: A History of the Aluminum Cap of the Washington Monument by George Binczewski, JOM, 1995.

2. See The 20th Century Transformation of U.S. Agriculture and Farm Policy by Carolyn Dimitri, Anne Effland, and Neilson Conklin, USDA Economic Research Service.

3. And climate change may not always be toward the warming side, regardless of the CO2 burden. Consider the change in sunspot cycles over the past twenty years at spaceweather.com. The last complete cycle, No. 24 in the numbering that began with systematic observations in the 18th century, after the Maunder Minimum, was much weaker than the previous cycle, which peaked around 2000. This and the even weaker cycle No. 25 that we are now entering suggest we may be headed toward another solar minimum.

4. Watch any episode of the television series How It’s Made and count the number of human hands at work vs. the number of machines. We live in a mechanized age.

Sunday, May 2, 2021

The Rising Curve

Compound steam engine
Steam turbine blades

A straight slope and a rising curve have different mathematical properties. A steady slope is generated arithmetically, by adding a fixed unit at each step (1, 2, 3, 4 …), while a curve is generated by squaring or cubing each step (1, 4, 9, 16 …) or, faster still, by repeated doubling (2, 4, 8, 16 …). A parabolic curve, at least the part that proceeds upward from its low point, is generated by the formula ax² + bx + c,1 and it can rise really fast. My contention here is that our technological advancement since about the 17th century has been on a parabolic curve rather than a slope.
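The difference is easy to see numerically. A quick Python sketch, taking a = 1 and b = c = 0 for the parabola:

```python
# Compare a steady slope with a parabolic curve and a doubling curve.
def linear(x):   return x        # 1, 2, 3, 4, 5 ...
def parabola(x): return x * x    # 1, 4, 9, 16, 25 ...  (ax^2 + bx + c, a=1, b=c=0)
def doubling(x): return 2 ** x   # 2, 4, 8, 16, 32 ...

for x in range(1, 11):
    print(f"{x:2d}  {linear(x):3d}  {parabola(x):4d}  {doubling(x):5d}")
# By step 10 the slope has reached 10, the parabola 100,
# and the doubling curve 1,024.
```

Ten steps in, the slope plods along at 10 while the curves have left it far behind, which is the shape of the argument that follows.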

In ancient times—think of Greece and Rome from about the 8th century BC, the first Olympiad for the Greeks, or the mythical founding of their city for the Romans—there was technological advancement, but not even a slope. More of a snail’s pace. The Greeks had their mathematicians and natural and political philosophers, like Pythagoras and Aristotle, but aside from writing down complex formulas and important books, which probably only a fraction of the populace bothered to read, their works did not materially improve everyday life. The Greeks never united their peninsula politically, for all their concept of democracy, remaining stuck at the tribal and city-state level of conflict. And from one century to the next, they drank from the local wells, shat in the nearby latrines, and traveled roads that washed out every year with the spring floods. They built in marble the temples of their gods, but otherwise the average people lived in houses of wood and mud brick not much different from those of their predecessors in Homeric times, five centuries earlier.

The Romans did somewhat better, being short of actual philosophers but abounding in practical engineers. They developed a democratically based political and military system that united their peninsula and went on to conquer most of their known world. They built huge aqueducts to bring fresh water into their cities from distant springs, underground sewers to take away human wastes, and roads dug many layers deep into the ground that could reliably move goods—and armies—from one end of the empire to the other. They built temples and palaces in marble laid over brick but also invented a synthetic stone, concrete, that their engineers originally made from a volcanic ash known as “pozzolana.” Common people in the city lived in apartment blocks called insulae, or “islands.” They bathed regularly and made a civic virtue of the practice. Life was better under the Romans, but technological advancement was still glacially slow.

Rome, at its fall in the Western Empire during the 5th century AD, was technologically not much different from the Rome of Julius Caesar, five centuries earlier. And that fall—due largely to climate change and the ensuing barbarian migrations—plunged Europe into a Dark Age that saw little advancement in any of the arts, although we did get some practical technologies like the wheeled plow and the stirrup. Those, along with gunpowder adopted from Chinese fireworks and movable type, pioneered in China and reinvented by Gutenberg in the 15th century for printing bibles, carried us through to the 16th century, the time of the Tudor reign in England or the Medici in Italy.

After that, technologically speaking, all hell broke loose.

Some might credit René Descartes and his inventions of analytic geometry and the scientific method, based on observation and experiment; or Isaac Newton and his invention of the calculus (also developed independently by Gottfried Leibniz in Germany) and his studies of gravity and optics; or Galileo and his work in physics and astronomy. Intellectually, it was a fruitful century.

But from an exhibit that my late wife prepared at The Bancroft Library years ago, I learned that a more immediate change came about with the exploration of distant lands and when the European trading companies set up to exploit them began importing coffee and tea into the home market in the 17th century. Before then, people didn’t drink much water because of rampant contamination; so instead they drank fermented beverages—sweet wines, small beer, and ale—because alcohol helped kill the germs, although they didn’t think about it in those terms. So they would sip, sip, sip all day long, starting at breakfast, until everyone was half-plotzed all the time. But then along came coffee and tea, which were good for you because you had to boil the water to make them. Everyone brightened up and began thinking. The denizens of Lloyd’s Coffee House in London invented insurance companies to protect the sea trade, which required estimates of risk and probability, and that led to a whole new branch of mathematics and the spirit of investment banking.

Put together scientific investigation with the widespread availability of printed books and the clear minds to read them, and we’ve been on that rapidly rising parabolic curve ever since.

We are just over three hundred years from the first steam engine, patented by Thomas Savery in 1698 to draw water from flooded mines. In the time since then, the engine has gone from single condensing cylinders through triple-expansion designs to turbine blades. And that is the least of our advances. This year, we are just two hundred years from the first primitive electric motor, built by Michael Faraday in 1821. And now we have motors both small and large driving everything from trains, elevators, and cars to vacuum cleaners and electric shavers.

In my lifetime, I have seen music go from analog grooves cut into vinyl disks and magnetic domains on recording tape to digital representations stored on a chip, and photography go from light-sensitive emulsions on film and paper to similar—but differently structured—digital sequences on chips. My electric typewriter—again driven by a small motor—has gone from impact-printing metal representations of the alphabet on a sheet of paper to storing different digital sequences on that same chip in my computer. All of this puts the stereo system, camera, and typewriter I lugged off to college fifty years ago into a single device that started out on my desktop, migrated to my laptop, then moved into my hand inside a smart phone, and now lives on my wrist instead of a watch. And the long-distance call I made every week from college to my parents at home was once a direct wire connection established by operators closing switches; it would now be a series of digitized packets sent out through the internet and assembled by computers at each end of the conversation. Gutenberg’s process for printing words on paper is now embodied in the photo-masking of electronic circuits on silicon chips. And we’re not done yet.

In 1943, the codebreakers at Bletchley Park, building on Alan Turing’s theoretical work and his electromechanical Bombe (which attacked the Enigma code), completed Colossus, the first programmable electronic computer, designed by Tommy Flowers to crack the German Lorenz teleprinter cipher. In 2011—just 68 years later—that machine’s linear descendant, IBM’s Watson, was playing, although not necessarily consistently dominating, Jeopardy!, the trivia game based on history, culture, geography, and sports and dependent on linguistic puzzles and grammatical inversion. While that was a stunt, similar “artificially intelligent” systems based on the Watson design are now being sold to businesses to analyze and streamline operations like maintenance cycles and supply chain deliveries. They will take the human element, with its vulnerability to inattention, imagination, and corruption, out of processes like contracting and medical diagnosis. Any job that involves routine manipulation of repetitive data by well-understood formulas is vulnerable to the AI revolution.2
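To make “routine manipulation of repetitive data by well-understood formulas” concrete, here is a minimal sketch of the sort of maintenance-cycle rule such systems automate. The thresholds and machine names are invented for illustration and have nothing to do with Watson’s actual internals:

```python
# Toy maintenance-cycle rule: flag a machine for service when its
# run-hours or a sensor reading cross fixed limits. Thresholds invented.

def needs_service(hours_since_service: float, vibration_mm_s: float) -> bool:
    """Flag for maintenance past 500 run-hours or above 7.0 mm/s vibration."""
    return hours_since_service > 500 or vibration_mm_s > 7.0

fleet = {"pump-1": (620, 2.3), "pump-2": (140, 7.8), "pump-3": (90, 1.1)}
flagged = [name for name, (hrs, vib) in fleet.items() if needs_service(hrs, vib)]
print(flagged)  # ['pump-1', 'pump-2']
```

A human clerk checking those same columns in a ledger is doing exactly this, more slowly and less reliably, which is the point.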

Add in separate but related advances in materials, such as 3D printing—especially when they learn how to make metal-resin composites as strong as steel—and you get disruption in much of manufacturing, along with the global supply chain.3

Any theory of economic value that depends on human brawn—I’m looking at you, Marxists—or now even human brains is going to be defunct in another half century. That’s going to be bad news for countries that rely on huge populations of relatively unskilled hands to make the world’s goods, like China and India.

Intelligent computers are also able to do things that human beings either cannot do or do poorly and slowly. For example, in November 2020, Nature magazine reported on an AI that can predict and analyze the 3D shapes of proteins—that is, how they fold up from their original, DNA-coded amino acid sequences—almost as well as the best efforts of humans using x-ray crystallography. And this was just 20 years after the first sequencing of the human genome using supercomputers, and only 66 years after the first glimpse of the DNA molecule itself using x-ray crystallography. Knowing the structure and thereby the function of a protein from its DNA sequence is a big deal in the life sciences. It will take us far ahead in our understanding of the chemistry of life.

Ever since the 17th century, our technology has been riding a curve that gets steeper every year. And the progress is not going to slow down but only get faster, as every government, academic institution, and industrial leader invests more and more in what I call this “enterprise of science.” Anyone who reads the magazines Science and Nature can see the process at work every week.4 We all stand on the shoulders of giants. We stand on each other’s shoulders. We build and build our understanding with each advance and article.

This rate of increase might be slowed, marginally, by a global depression. We might be set back entirely by a nuclear war, which might revert our technological level, temporarily, to that of, say, the telegraph and the steam engine. But it will only be stopped, in my estimation, by an extinction event like an unavoidable asteroid or comet strike, and then so much of life on this planet would die out that we humans might not be in a position to care.

As to where the curve will lead … I don’t think even the best science philosophers or science fiction writers really know. Certainly, I don’t—and I’m supposed to write this stuff for a living. The next fifty years will take us in perhaps predictable directions, but after that the effects on human economics, culture, and society will create an exotic land that no Asimov, Bradbury, or Heinlein ever imagined. Fasten your seat belts, folks, it’s going to be a bumpy ride!

1. That’s a quadratic equation. And no, I don’t really understand the formula’s properties myself, having nearly flunked Algebra II.

2. But no, the computer won’t be a “little man in a silicon hat,” capable of straying far outside its structural programming to ape human intellect and emotions—much as I like to imagine with my ME stories. And it won’t be a global defense computer “deciding our fate in a microsecond” and declaring war on humanity.

3. It’s become a commonplace that the U.S. lost its steelmaking industry first to the Japanese, then to the Chinese, because they were more advanced, more efficient, and cheaper. Not quite. This country no longer makes the world’s supply of bulk steel for things like pipe, sheets, beams, and such. But so what? We are still the leader in specialty steels, formulations for a particular grade of hardness, tensile strength, rust resistance, or some other quality. Steelmaking in our hands has become exquisite chemistry rather than the bulk reduction of iron ore.

4. For example, just this morning I read the abstract of an article about adapting the ancient art of origami to create inflatable, self-supporting structures that could be used for disaster relief. I read and I skim these magazines every week. And frankly, some of the articles, even their titles, are so full of references to exotic particles, or proteins, or niches of mathematics and physics that I can only guess as to their subject matter, let alone understand their importance or relation to everyday life.