Sunday, December 26, 2021

The World in Pictures

View camera

Scrolling through Facebook—and occasionally dipping into the clickbait topics associated with it, particularly those before-and-after comparisons of the actors and actresses of my youth—I had a sudden inspiration: our world is measurably different since the invention and widespread distribution of the personal means of photography.

Taking photographs, especially in the early days, used to be a learned skill. To practice photography successfully, you needed a good camera, some understanding of film speeds, f-stops, aperture settings, and light conditions. And if you weren’t going to go broke getting your negatives and prints developed at the drug store, you had to build a darkroom, invest in an enlarger, buy a stock of specially treated paper, and become something of an amateur chemist. I know all this because my grandfather was a dedicated photographer, and he gave the bug to my brother, who sealed off one room in every house we owned and infested it with the vinegar-and-rotten-egg smell of developer and fixer. Since then, he’s invested a goodly portion of his disposable income in better and better cameras and a variety of lenses to fit them.

Everyone else, these days, uses an application that comes free with their smartphone and is built around a lens no bigger than a grain of sand and a photo chip to match. People routinely, or so the meme goes, photograph their lunches and send them to friends on social media. And along the way, they capture every stage of their children’s development, of their home renovation, their latest road trip, and whatever they can see out the window of a bus. Those with access to an old family album—or someone like my brother—will also post pictures from decades ago when their grandparents were young and hot and in love.

Think of what our future will be! One day soon, you won’t have to dig out a dusty old album or box of prints from the attic and hunt through them to make those comparisons. Instead, you will have your entire family and the highlights of your life catalogued year by year, sometimes day by day, to pull up, enjoy, and relive.

But now I imagine a time before even the dedicated family photographer or the studio professional, capturing those key moments like Christmas mornings, weddings, and birthdays for us to keep in our hearts forever. That time goes back to … when? Certainly before the 20th century. You can pick 1888, when George Eastman developed and marketed the first simple box camera using celluloid film under the brand name Kodak. Or the Civil War years, when Mathew Brady visited battlefields and encampments with his view camera that recorded on glass plates, which themselves required a whole wagonload of equipment to store and develop. Or back to the 1830s, when Louis Daguerre experimented with exposing copper plates that had been treated with a vapor of silver iodide to make them light sensitive, eventually producing the first “photograph”—or, from two Greek words, “light writing.”

Before that, if you wanted to capture the image of someone or something, you hired a person with skills in drawing, color mixing, and painting. That meant you put up with that person’s peculiarities of vision and perception, distortions from their own imagination and personal biases, the speed at which they were willing to work, and their other competing clients and interests. You could hardly capture an event or a moment at your discretion unless it was hurriedly sketched or carefully staged before the final application of paint to canvas for the full-color effect.

That is, if you wanted to document the people or events in your life, you were either royalty or immensely wealthy. For the rest of us, events happened and then disappeared forever, captured only in memory and imagination, or perhaps with a description in our own or someone else’s diary. People aged before our eyes, and we had only our memories of their faces to compare with the day-to-day reality in front of us. We couldn’t laugh or marvel at the clothing styles that our grandparents had thought fashionable unless they laid the actual articles in a trunk for us to discover and try on decades later.

For most of us, the world lived in a perpetual now. Our sense of history came from our parents’ memories and what we could read in books—accepting, once again, the distortions of the author’s imagination and biases. Things happened, and then they became part of an ever-fading yesterday that lost relevance even as you thought about it. People lived, grew up, flourished, grew old, and died, but all you ever knew about them was the person who happened to be standing in front of you in the current moment. It was a world with a lot less to think about and compare. It was a world with a fragile human memory, some dusty books, a few mementoes in trunks, and a gallery of paintings about other people.

Do we live in a better world now? Today we can access high-resolution pictures of the lunch we had three years ago. We can relive every birthday and Christmas morning even after most of the participants are dead. We can—now that our smartphone records not only still pictures but also full minutes of action with video—live in the event. We can do everything but speak to the dead and change, for better or worse, the moments we are watching.

Is that better? Or are we forced out of the now and into the remembered channels of fond feelings, lost hopes, and sometimes bitter regrets? I don’t know. I would ask my brother, but right now he’s too busy capturing and processing the moments themselves.

Sunday, December 19, 2021

Transitions and End States

Lotus flower

I’m probably going to get in trouble for saying this, but I think Buddha was wrong. Or at least the main teaching of Hinayana—or “Lesser Vehicle”—Buddhism, that the goal of meditation and right life practices is the state of enlightenment, is wrong. Any final end state of understanding, where all knowledge and precepts are fixed and nothing more can be attained, is simply not part of the human experience. And no, the bodhisattvas are not some other form of being, some kind of supermen, but just regular human beings who have trained their minds in certain ways of thinking.1

In my view, all life—human life, animal life, evolution on this planet, and by extension the life cycles of stars and galaxies—is transition. So long as you are alive, you are engaged in a process. It starts with the combination of egg and sperm and the first unraveling of the embryo’s DNA to create differentiation and development of specialized cell types. It ends with the failure of major organs that support cellular metabolism and the dissolution of the body’s tissues. Everything in between is process and change.

In the same way, a star begins when enough dust and gas collect in one place through electrostatic attraction, collapse into a compact mass through the pull of gravity, and ignite a fusion reaction under internal pressure. It ends with the exhaustion of fuel and the final collapse into a neutron star or black hole. Everything in between is process and change.

In both cases, the only end state is death, the stilling of all processes, the unchanging forever of nothingness. Dead people dissolve into dust and component atoms, from which they can never be detected or resurrected. Neutron stars can only collect more dust and mass, compacting regular atoms into a jostling mass of stripped neutrons, until they gain enough mass to disappear into a black hole and never be seen again, except for the effects of gravity.

Human beings with their subtle brains seem to long for a living end state. A final attainment of enlightenment, complete understanding, with nothing more to experience, no doubts, nothing more to endure, no reason ever to change again. This is a fantasy.

You can no more learn all there is to know—or all that’s important—and stop thinking than you can flap your arms and fly like a bird. You might be able to believe your brain has stopped because you have temporarily shut out the mutter of background thoughts that come with the firing of random synapses outside of your conscious, focused attention.2 But even if you are very strong-willed, sooner or later the environment or your body’s needs will interrupt the silence. You will ask, “What’s that noise?” or “What’s for lunch?” And the processing of inputs and answering of questions begins all over again.

Serial enlightenments are possible. You can learn new things, resolve certain questions, gain sudden understanding of truths that have long been hidden from you. You can have insights and “Aha!” moments. But none of them will be the complete answer. None will stop your thinking and questioning, provide the perfect understanding of all things—or even just the things that are temporarily important to you—and free you from the need to think, doubt, or decide ever again.

In the same way, human societies seem to hunger for a final state of organization. A revolution that will bring on a sublime method of government supplying all human needs to all people, with no dissatisfactions, no failures of equality and personal respect, no cases of privation or indignity. Plato’s Republic was ruled by philosopher kings who could make no mistakes. Communal societies are set up so that every member works selflessly and joyously for the good of all. And these are supposed to be the “end of history,” because the dissatisfactions and disruptions that make up most of the historic record—think wars, famines, invasions, market crashes, and other disasters that require dating and analysis—will no longer play a part in human life. Instead, everyone simply enjoys one long, golden summer’s afternoon, like Odysseus among the Lotus Eaters.

But, as noted above, a human being—a body of cells in process, inhabited by a mind active with synapses firing away—never reaches a state of perpetual understanding and satisfaction. And if one human can’t do it, then all of them at once will never do it. Life is change. And the cessation of change is death.

1. And, actually, I am not being quite accurate here. In the Buddhist teaching, bodhi or enlightenment is valued because it supposedly stops the karmic cycle of rebirths to higher planes of existence, if you’ve been good, and to lower planes, if you’ve been bad. Instead, when you die you go to Nirvana, which is a place of endless meditation, like Heaven without the halos, wings, and harps. Bodhisattvas are supposedly those who have already attained enlightenment, could retire to Nirvana themselves if they wanted, but have chosen to stay behind and help others along the path. Of course, if you don’t believe in reincarnation and the transmigration of souls, but instead think that when you die you just “go out, like a candle flame”—to use the Buddha’s own metaphor—then … what’s the big deal?

2. See Working With the Subconscious from September 30, 2012.

Sunday, December 12, 2021

Big Lies

Puppet master

Telling big lies and expecting them to be accepted as true is a political expedient—and it’s on everyone’s mind these days, from Trump supporters to followers of the mainstream media, with each side conveniently blaming the other.

It was none other than Vladimir Ilyich Ulyanov, who took the nom de guerre ‘Lenin,’ who said, “A lie told often enough becomes the truth.” Well … Lenin was a liar.1

My firm belief is that truth is a thing that exists separately from the mind of human beings. It may be shaded by perception. And certain things a person may hold to be true are the product of his or her own desires and imagination. But there is also such a thing as objective reality, a state of affairs or of nature that can be discovered, examined, and proven to exist regardless of anyone’s belief in its existence.

Still, political parties, from the government ministers surrounding a failed monarch or floundering oligarchy, to the revolutionaries attempting to overthrow such a state—Lenin again, and his Bolsheviks—to the opportunists intent on capturing an already crumbling society—Hitler, Goebbels, and their Nazi cohort—have used outright, fabricated, manipulative deception, lies by any other name, to attain their ends. The question is whether the strategy really works.

Sure, for some people. There are those who will believe anything that is popular or persistent. They are either too lazy or otherwise engaged in everyday life to put effort into questioning, researching, weighing arguments, thinking through probabilities, and determining the nature of reality for themselves. These are the blindly political and the stupidly apolitical, people who don’t know or don’t care.

But most of us do care about the nature of reality, pay attention to the news and the political climate, and try to think for ourselves. For us, the Big Lie will only work under two conditions: first, total repression of the truth, and second, total control of the population and its culture. That is, make the truth inaccessible and then make seeking it either dangerous or uncomfortable.

Societies where the Big Lie has gained a foothold have eliminated all independent media outlets—newspapers, radio, television, and now the internet and its spawning channels—and replaced them with those under government control. This requires either a priori censorship, where the ministry of information and propaganda establishes guidelines for what it considers “the truth” and demands prepublication review of all journalistic and artistic content, or a government shutdown of all dissenting sources and promotion of those that are friendly to its purposes. But even in countries where there are only a few official, government-friendly sources, dissenting views do get out. The Soviets had to deal with samizdat—hand-copied and clandestinely distributed literature, music, and other independent influences—for most of the regime’s existence. While nobody knew what was really going on, it seems that everybody knew.

And so, societies that depend on the Big Lie also need to erect and maintain a police state, invest in forces to track down and punish dissenters, establish prison systems into which they will disappear, and pretend that all their people wear happy, smiling faces. The Soviets had their Cheka, the Bolshevik secret police, which morphed into the NKVD, which took on international counterinsurgency, followed by the KGB, which grew to control both internal and external espionage. In the same way, the Nazis had their Gestapo, and the Imperial Japanese had the Kempeitai. But even when knowing and speaking the truth could get you caught, internally exiled, and perhaps killed, all these systems did was to shut people’s mouths. They couldn’t stop people’s brains.

Many people in such a society keep quiet in the presence of the Big Lie because it is safer not to question. Some will speak it aloud because that way they can attain more certain safety, promotion at work, advancement in the party, or other material benefit. And some will challenge the lie, either quietly to themselves, or among family and close friends, or by smuggling samizdat, because they cannot turn off their minds and become willfully blind or stupid.

Which brings us back to this country, America today, and its current divisions. The Republicans and conservatives—who include a solid core of “Never-Trumpers” alongside the Trump supporters themselves—and the Democrats and progressives—hardly a monolithic presence, but voting together for strength, along with an establishment media composed mostly of their true believers—would each like to think that they hold the one, true reality and that the opposition has fallen for the Big Lie. And this is amusing right up to the point that people start burning cities and killing each other.

But the certainty is that technology has outrun the concept of the Big Lie. Gone are the dim, dark days of the 20th century, when you could shut down two national newspapers and three radio stations to control the flow of information. The internet has put an end to all that. And much as governments—the Chinese Communist Party, of course, and certain people in our own Administrative State—would like to control the internet for their own benefit, they can’t even keep down the amount of digital sabotage, “dark web” conspiracies, and free-spirited public anarchy flowing through the system, let alone compel the people toward truth and stop the spread of outright lies.

And even if there weren’t this flowering of information resources, true or not, this country still doesn’t have effective secret police—or not a force that people actually fear. The FBI has been running counter-intelligence programs for years, mostly against the whack-a-moles they identify as organized crime, drug and gun runners, and coercive religious cults, often involving elaborate sting operations, and it hasn’t done much good. So nobody in the Bureau has time to go after everyday speech offenders. The progressives in academia have been trying to enforce “hate speech” rules and a culture of “political correctness” for years without much more than laughable effect. Out of politeness, the rest of us try not to be crudely offensive, but we still have our own nest of thoughts.

America has always been an unruly place, full of scofflaws—if you doubt this, go drive the California freeways sometime—and independent thinkers. We are more cynics and skeptics than true believers. And most of us are just involved with the everyday business of living.

1. The Big Lie theory has also been attributed to Nazi Propaganda Minister Joseph Goebbels, but it appears he never actually advocated for or admitted to it. That was probably good strategy, because the Third Reich rose to and held power on a wave of pervasive, all-encompassing lies and a series of staged misdirections, including the Reichstag fire and the Kristallnacht disturbances.

Sunday, December 5, 2021

The Thomassian Jihad (Redux)1

Robot juggling

In the Dune novels, the civilization of the far future is shaped by a war in the distant past, the Butlerian Jihad, that freed humanity from the lassitude and enfeeblement of being helped—to the point of near extinction—by robots and artificially intelligent machines. The defining call of that jihad was: “Thou shalt not make a machine in the likeness of a human mind.”

In the aftermath of this war and its upheavals, the “Great Schools” arose to develop human beings who would take over some of the necessary functions that had been handled by the now-outlawed machines. They trained the Mentats, human computers, the Bene Gesserit, female protectors of the bloodlines, and the Bene Tleilax, genetic scientists who created special-purpose humans and body parts, often with destructive intent.

As the novels thoroughly explored, the Butlerian Jihad exchanged machines for human beings, who became valued, traded, and objectified solely for their special functions: Mentats for their calculating ability, Bene Gesserit-trained concubines for their seductive skills, and Bene Tleilax-created deformities for whatever the buyer desired. In the original novel, the distinction between House Atreides (the “Good Duke”) and House Harkonnen (the “Evil Baron”) lay in their treatment of these oddities. For the Atreides, Mentats like Thufir Hawat and Swordmasters like Duncan Idaho were trusted friends and companions. For the Harkonnens, everyone other than immediate family was just a commodity.

But the underlying reality from the Butlerian Jihad remains: people who have been trained or designed to perform a specific function are acquired and valued for that function rather than as beings capable of personal development, surprise, and a sense of their own destiny—and so they are treated as less than fully human.2

This background inspires me to consider what I would do to ignite a jihad that shapes the entire human universe for thousands of years, the Thomassian Jihad. And I believe my central tenet would be: “Thou shalt not treat a sentient being as an object.” That would take care of a number of our current sins, as well as the underlying fault in the Dune books.

Most immediately, the call would do away with slavery of every kind: outright ownership of human beings as productive objects and the sort of wage-slavery and contrived indebtedness that traps the poor and the immigrant and fuels sweat shops and company towns around the world. It would also outlaw the treatment of women as chattels and sex slaves to their husbands. More than that, it would forbid—or at least make the practitioner feel a measure of guilt and shame—the objectification of women and children for the configuration of their faces and bodies and as receptacles for sexual appetites.

Politically, my jihad would put an end—or try—to the treatment of individuals as no more than members of a group based on a single, obvious common distinction, such as race, gender, religion, regional origin, or other useful and objective features. This kind of pigeonholing (the objectification of birds) is useful to those who would build political strength from individuals who are thereby deprived of their individuality and the sense of their own unique purpose and destiny. Group objectification turns human beings into political widgets.

I refer to “sentient beings” rather than just “human beings” because I tend to think more broadly than our current, limited understanding. One day, we will meet aliens from worlds elsewhere in the galaxy, and when we no longer have the prejudices of physical form and DNA analysis to rely on, we will have to judge them by what we can see of their minds and our measure of their conscious awareness. And this leads back to our treatment of putatively intelligent animals here on Earth: the whales we have hunted for their oil, the elephants we have slaughtered for their tusks, and the octopi we cut up for sushi. A creature that approaches humanity in its understanding and awareness—different in scale but not necessarily in kind—should get a measure of the respect in which we hold other human beings.

Does my jihad require that we approach all such beings subjectively, evaluating them for their potential to think and respond, to care and to love, to have hopes and fears, to dream and have a personal destiny? Oh, yes! That is the essence of the Golden Rule: if you would be treated as a real human being, an individual, whole and entire unto yourself, then you must treat others of your kind—and that includes those with awareness and self-actuation equivalent to your own—with the same appreciation and respect.

Of course, the Thomassian Jihad would be nothing new. Humanity has been waging it with varying success since ancient times. That Golden Rule is essential to Christianity, Judaism, Islam, Buddhism, Hinduism, Taoism, and most other world religions.3 As soon as people gain a full realization of themselves as thinking, feeling, self-aware, and self-actuating individuals, it becomes inescapable to any intelligent and well-balanced mind that others of like mind must think, feel, and actually be the same. This perception was augmented and rationalized during the European Enlightenment of the 17th and 18th centuries, which drove the scientific and technological advances that benefit us today.

To deny the common humanity of like-minded beings is to put on personal blinders, either willfully or through ignorant error.

1. This is actually a rethinking, or restatement, of a meditation I posted on December 8, 2013. The wheel turns …

2. The Dune series—and I suspect Frank Herbert’s worldview itself—radiates a sense of ultimate failure. Or rather, a rejection of easy promises and bright futures through a regression to the human mean. Republics give way to imperial monarchies. Free people succumb to ever more refined tyrannies. And the novels’ central characters—Paul in the first three books, and his son Leto II the God Emperor in the fourth—ultimately fail despite having superb physical and mental training, immeasurable self-control, the power of prescience, an empire at their command, and, in Leto’s case, physical invulnerability in a pre-sandworm body and access to the details of all human history through genetic memory. No matter how good things get or how well developed a person might be, return to some ingrained human “normal” is always coming.

3. See the American painter Norman Rockwell’s notes on the commonality of the Golden Rule.

Sunday, November 28, 2021

The Lighthouse Keepers

Lighthouse

Consider those who keep our civilization functioning. In this instance, I’m thinking of lighthouse keepers.

The job these days can mostly be handled by automated systems—and ones that don’t even require all that much computerization. But for centuries, dangerous coastlines and peninsulas that reach out into the sea were guarded by lighthouses that required continuous tending and maintenance. Someone had to polish the lenses when salt spray and haze clouded them. Someone had to be sure the fire had a supply of oil that would last the night, and after the lamp was electrified, this person had to make sure the wiring was good—that salt air again—and the bulb hadn’t burned out. More than that, someone had to make sure the light came on at dusk and lasted till dawn. And now, with all the modern technology involved in shipping, someone has to monitor the radio for distress calls, perhaps watch a radar station, and be ready to lend a hand in emergencies.

But consider, also, the basic function of a lighthouse. It shines a light so that ships and sailors without all the modern tools of radar, depth sounders, and global positioning can identify a solid point on land and compare it with their charts. A lighthouse is a signpost, a way station, a fixed star at the edge of the ocean.

The lighthouse keeper is the guardian of that star. This is a person who commits to a function and has no room in his or her soul for doubt, equivocation, or easy excuses. “Well, I don’t see any ship’s running lights on the horizon, so no one’s out there to see my beacon. It doesn’t matter if my light goes out.” Or, “With that storm offshore, the beam won’t penetrate more than a few miles. I might as well not light up tonight.” The lighthouse keeper does not know, and must not care, whether the light will serve its function on this or any other night. The whole point is to be there, to keep up his or her end of the contract.

This is persistence in the absence of hope or expectation. This is faith in mere function. This is the essence of duty. And it’s what keeps communities together and civilization functioning.

Consider other examples: the police who patrol the streets of quiet neighborhoods after midnight, knowing that they won’t likely get a call; the firefighters who come to their station and polish their equipment, waiting out the night in a city of concrete and steel that won’t likely burn; the emergency medical technicians who keep their ambulances stocked and ready to roll in a college town full of healthy young working folks and students who won’t likely need their services; and the librarians who stand at their desks, ready to hand out books, in a population where reading is on the decline and every book, movie, and recording is now available online.

More than that, the soldiers who stand watch and continue their training in foreign outposts where the presumed enemy may never attack; the samurai who work in a garden and yet practice with their weapons every day; and the writers, painters, and musicians who practice their art and keep the faith even when their work does not sell—or not in any great quantity—and they must keep plugging away at their “day jobs.”

I believe it was Woody Allen who said that eighty percent of life is just showing up. I would amend that: ninety percent is just doing what you committed and perhaps were paid to do, even when you can’t see an effect, know your contribution may not be making a difference, and sometimes think your work is wasted effort. This is faith. This is perseverance. This is what keeps the world going round.

Sunday, October 10, 2021

The View from 2300 AD

Graffiti

I have been offline with these musings for the past three months. That is partly the result of having nothing much new to say after more than ten solid years of weekly blogging—and finding myself having returned to and reinvented several sets of ideas more than once. It is also partly because one theme in my blogs deals with “politics and economics,” which I consider different aspects of the same human endeavor. And the politics and economics of the past year or so have defied description. As a writer, I could not conceive of a scenario to rival all that has happened: it would be too unbelievable.1

I mean … well, I won’t go into details. Anyone who has been watching the news for the last four years, and then the accelerating conundrums and catastrophes of the past three to nine months, will understand what I mean. If you believe that the current state of our politics and economics is normal, then we don’t have much to talk about. I doubt we even live on the same planet.

The only way I have managed to stay sane is, first, by staying away from the keyboard. I knew that if I wrote about what I was really thinking, I would probably alienate half my friends and family, and I would draw the attention of the national authorities as a probable domestic terrorist. Second, by reminding myself that, as a science fiction writer, I have always tried to take the long view, the consideration of centuries if not millennia, while the human circus rides through my town. In times of stress, I think of this as the view from 2300 AD.

This does not mean I try to calculate, foresee, or envision the world of 2300 AD. I have done that—and beyond—in several of my novels, but it has always been with a “tweak” to create a certain imagined effect. I don’t know what the accelerating advance of technology2 will bring in the next three hundred years, but it will be fantastic.

When I was growing up, everyone thought that by now we would have flying cars—and those, like sustained nuclear fusion, are always ten years away, for good reasons having to do with basic physics, energy transfer and storage, and average human reaction time. But no one considered that we would have a device no bigger than a pack of cards that was also a camera; a stereo system, television, and movie theater; a link to every library, news source, and technologically connected person in the world; with access to every book, movie, and record ever made; immediate access to our banking, retail stores, and personal services; and a modern form of telegraphy—as well as a telephone. (Not to see that coming out of digital versus analog recordkeeping was just a failure of imagination.) Add to that information advancement the current advances in biology and medicine, driven largely by our new understanding of genetics, and you will see a complete redefinition of disease, disability, human health and capability, and perhaps even death itself.

But we also have before us various chimeric visions that simply are not going to occur without a whole lot of either scientific naïveté or scientific advancement beyond imagination.

As to the latter, consider the Star Trek vision for the 23rd century, which is a utopia of dedicated and happy people fulfilling their personal lives in communal cooperation without an economic care in the world. This vision is only made possible, of course, by turning economics on its head and eliminating the laws of supply and demand and the effects of scarcity on necessary and desirable commodities. The Star Trek universe achieves this through unlimited energy supply, made possible by matter-antimatter reactions, and unlimited material abundance through molecular replication of foodstuffs and other goods, driven by that inexhaustible energy supply. In the real world, pattern replication by energy-to-matter conversion would involve a physics far beyond what we understand today, as would a “warp drive” that somehow bends the “fabric” of space. And matter-antimatter conversion will never be practical.3 Personally, I’m not holding out for utopia.

As to the former, consider the current concern4 about anthropogenic global warming or, more recently, “climate change.” The climate is always changing and has been changing throughout recorded history, although previous civilizations lacked the technological tools to record local and global temperatures with much precision or to project the weather a hundred years into the future based on a few key variables. Sea levels have been rising steadily if slowly since the last Ice Age, 12,000 years ago, and temperatures have been rising steadily in the 22-year cycles of sunspot activity since the last Maunder Minimum, 400 years ago. The valuable coastal property that is now in such jeopardy—at least here in the San Francisco Bay Area—was mostly marshland a hundred years ago and may be marshland a hundred years from now. But whether the global temperature rises by two to five degrees centigrade or falls by that amount, the change will be so gradual that people will adapt. And a robust technological civilization—one which will certainly spread its knowhow to the far corners of the “developing world” by the time these effects take place—will be able to counter most of the changes. And anyway, the whole fraught scenario is based on computer modeling, projections made by selecting various parameters out of an astounding mix of variables and making certain assumptions about feedback loops and “forcings.” Personally, I’m not worried about global catastrophe.5

I don’t know what the future will bring, not in the next hundred years, let alone three hundred. There will be dislocations, local and personal catastrophes, as well as opportunities for immense civilizational and personal growth. I won’t be there to see it, but I know it’s all coming. That’s the perspective of the long view.

In the 19th century, the majority of Americans worked in agriculture, just trying to feed the nation. The steam engine was only then coming into common use, and applications of electric current—the generator and electric motor, the telegraph and telephone—were still in their infancy. In the 20th century, those technologies, along with the internal combustion engine and the vacuum and cathode-ray tubes, started the mechanical age that led to the urbanization of the American worker and the dissemination of a national culture. The latter half of that century also brought into being the transistor and the computer, which changed the face of knowledge—its storage, manipulation, and control. The mechanically oriented factory workers of the 20th century became the intellectually oriented “symbol manipulators” of the 21st century. And now, the extension of computer technology into artificial intelligence; factory, supply chain, and financial automation; and what I call Gutenberg manufacturing6 is threatening the jobs of both factory and knowledge workers. But I do know that a hundred years, three hundred years, from now there will still be a society, with both politics and economics, and it will serve the interests of human beings and not robots or progressive policy wonks or philosopher kings.

In the 1st and 2nd centuries, the Western world was dominated by the Roman Empire, and it was a calculatedly brutal place, although life in other “civilized” regions like India and China was not much better. In the two thousand years since then, the benign teachings of Christianity and the rationality of the Enlightenment have made the average person more refined, gentler, and more amenable to learning about both other cultures and the physical world. And in just the sixty years since the passage of Civil Rights legislation, America has become a more inclusive and mindful place, with a better life for all its citizens and residents. I don’t believe human nature changes much, but I do believe that civilizations can deflect some of the savagery of life and temper the baser instincts with which people meet it. I do know that in a hundred or three hundred years, life will still have its stresses, injustices will still regularly occur, and that people will meet them with attitudes that embody the latest philosophies with a measure of kindness and hope.

So the last three months, the last four years, while still bizarre, inane, and distressing, will eventually pass into history. I believe I will go on for another two decades or so—with any luck—in a state of hope and expectation. And my car still won’t fly.

1. Or, as I have said in other contexts, “truth is stranger than fiction, because fiction must be realer than truth.”

2. See, for example, Two Different Worlds from September 24, 2017.

3. Matter we have in abundance. It’s the antimatter that’s hard to come by. It doesn’t exist naturally in this universe, so you can’t mine it or refine it from existing materials. It can be made in a particle accelerator at a huge cost in energy. The output of antimatter at the CERN accelerator in Europe—which is powered by conventional means like coal, nuclear, or now partially wind and solar energy—is 1×10^15 antiprotons per year, or about 1.67 nanograms. At the operating cost of this facility, that would make a gram of antiprotons worth about $62.5 trillion—not something you’re going to burn in a “warp core” to get a cup of Earl Grey from your food replicator.
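The mass figure in this footnote can be checked on the back of an envelope. A minimal sketch in Python, assuming only the standard proton rest mass of about 1.67×10^-24 grams (a textbook value, not given in the text):

```python
# Back-of-the-envelope check: how much do 1e15 antiprotons weigh?
# Assumption (not from the text): proton rest mass ~ 1.6726e-24 grams.
PROTON_MASS_G = 1.6726e-24  # grams per (anti)proton

antiprotons_per_year = 1e15
grams_per_year = antiprotons_per_year * PROTON_MASS_G
nanograms_per_year = grams_per_year * 1e9

print(f"{nanograms_per_year:.2f} ng per year")  # prints "1.67 ng per year"
```

The result agrees with the 1.67 nanograms quoted above; at that production rate, accumulating a single gram would take on the order of half a billion years of accelerator time.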

4. I would write “hysteria,” but that would only make half my friends mad and attract the attention of those national authorities.

5. Remember, I am part of the generation that grew up with tales that the world would end in nuclear holocaust, so we learned to duck and cover, followed by nuclear winter. Then it was Malthusian predictions of overpopulation and eventual starvation. Then we had global cooling, quickly followed by global warming. And finally, we had the Y2K bug, which was going to crash all the world’s computers back to the Stone Age. Every millennium, the end of the world strikes fear into the heart of the populace. And yet, here we are.

6. See, for example, Gutenberg and Automation from February 20, 2011.

Sunday, July 4, 2021

Looking Backward, Looking Forward

 
The Fool The Magician

Time, it seems, goes in only one direction. Or that is our current understanding of the laws of physics. Of all the dimensions of this place we call the universe, the only one that is unidirectional is time. Of course, we may not understand everything yet. And perhaps our sense of “time’s arrow” is based on our perceptions of cause and effect: if one thing follows another, they appear to be ordered in time. But time is malleable, stretching and collapsing according to the theory of general relativity, although still just going in the one direction we call forward, toward the future.

I am not here for a meditation on physics, however. The human brain is capable of working time both forward and backward. We encode short- and long-term memories that capture images, words, feelings, sounds, smells, and other artifacts of experience. Some people’s brains record everything they experience, every day, every minute, going back to first awareness. For most of us, the capture is more selective, usually affixed to experiences that left us with a strong emotional response. But some memories just get stored, almost randomly, and repeat themselves at the oddest of times or in scrambled contexts.1

We also use the mechanisms of the prefrontal cortex for planning and decision making, which are both projections of today’s activities into the future. We not only try to predict the future with our plans and decisions, but we mentally inhabit a fictitious future with our hopes and daydreams—a false future that exists in our emotional lives as an alternative to the choices and possibilities that exist in the front part of our brain.

Human beings—and to some lesser extent other mammals—are the animals that can move forward and backward in time, using our brains. Our bodies, however, only travel the forward route, seeing planning and decision points pass by us like signposts along a highway.

A balanced human in an integrated life uses both functions. We look forward in expectation for the protection of ourselves and our families. We look backward in remembrance to give meaning to that expectation.

Some people live too much in the past. Those who have bleak or unknowable futures, or the very old with almost no future at all, tend to dwell on the past and their memories of a former life and loved ones as a means of sustaining their individuality. Some who have trauma or misdeeds in the past tend to dwell on them in a fretful attempt to change what has already occurred and, in their dreams, create a new current reality.

Some people live too much in the future. The very young look forward because everything that has gone before—especially if their life has been the mundane daily round of play and school, and interactions with parents and friends—is merely preparation for the life to come, creating a set of skills and responses that will carry the person into adulthood and beyond. Others more adult live for the future because their lives so far have been wasted—especially if drugs and alcohol, bad relationships, or bad choices and actions have been involved—and they can only look to the future to make amends, make a better life, or create new meaning for that life.

My own life, I realize now, toward the end of it, has always been frontally focused. I have always been considering, planning, and dreaming about the next day’s work, the next job, the next book to read or write, the next experience. I do recall the past and have pleasant memories of most of it, but I do not live there. I live in tomorrow, next month, next year. My head is always somewhere six months out.

When I was at the university and was already focused on studying English literature and a future as a writer—novels were always my first choice, although not the most lucrative part of my eventual writing career—I took a course on Predicting the Future. This was part of that academically silly season in the late 1960s, when campus radicals were demanding courses with more “relevance”—by which they meant to steer away from the traditions of Western Civilization. But my mentor, the science fiction author who wrote under the pen name William Tenn, took advantage of the opportunity to inject a bit of his favorite subject into the teaching. We read the current crop of futurist authors and a bit of predictive science fiction, and we studied the ways people have tried to know what’s coming next. For example, I wrote a term paper on Tarot cards as a method of fortunetelling.2

But now, in my seventy-third year, I find that looking forward has disturbing possibilities. There aren’t that many new experiences, possibilities, or choices out there ahead of me. Sometimes the future seems like a narrow, gray space, like the last few pages under your thumb in a book that you are reading and enjoying, whose plot you are following, and whose climactic moment has not yet come, and you’re not sure there are pages enough, time enough, to make a suitable ending. Six months out used to be a long time for me. Now, on some days, it seems to be all the time that is left—even though I am still healthy, strong, eating right, exercising, healing well, and hopeful. But just … how much more can I expect from life?

It’s not a terrifying thought … yet. But when you can sense the Great Darkness somewhere beyond that gray space, it makes you pause and consider your past life choices.

1. And, as research into “false” or altered memories has suggested, every time we recall a memory, our brain does a little editing—maybe a bit of improvement, sometimes a bit of damage—that changes the memory for future recall. Nothing is fixed in our brains, like an engraving on a steel plate. Instead, everything is more malleable, like the silver nitrate in a film emulsion, which can be affected by later exposure to light, or the digital bits in a computer memory, which get translated out of storage and then translated back into storage, with changes and degradations going both ways. Our brains are more a fluid “chemical pot” than a hard-wired “electric box.”

2. Actually, the Tarot—particularly in the 22 cards of the Major Arcana—is a story of human struggle and conflicting values that stands in opposition to the Judeo-Christian tradition. It relates the life transition of every aware soul from the insouciant and careless Fool of Card 0 to the powerful and careful Magician of Card 1. It’s a story of personal development.

Sunday, June 6, 2021

On Personal Boundaries

Teasing with apple

We all have them, boundaries—or frontiers, if you prefer—the edges of our souls, the demarcations of our “comfort zone,” the limits to which we will go, where beyond lie trespass and possibly danger.

We build these boundaries over a lifetime. Sometimes it’s by choice: we have a bad experience and say, “Not going there, never again.” Sometimes the boundary is set by habit: “I’ve never been there, but it looks dangerous, or demeaning, or ‘just not my style.’ ” And sometimes the territory beyond the boundary, the frontier, is someplace that simply lies outside our imagination; it’s not part of the image of ourselves and the world that we have built up as “the right and proper me.” Or we view it from the vantage point of imagination and decide, “That doesn’t look right.”

We build these boundaries—establish these frontiers—like an engineer building a fortress wall. Or, if you prefer a softer metaphor, like a gardener defining the region of conscious cultivation as separate from the wild lands outside, and maybe there’s a wall, too. As I say, everyone does this, because it’s part of living and deciding who we are, what we will be, which conditions of living we will accept and which, based on that personal image and experience, we reject.

And everything goes well until life rises up and smacks us in the face. It may be a new job, where we are required to expand our skills or handle crises we’ve never encountered before. Or it could be a change in life direction, like going off to college or joining the army, suddenly becoming rich or just as suddenly becoming poor. But for each of us who have the capacity, that smack in the face is sure to come when we fall in love. Then we must, simultaneously, cross our own boundaries and enter into another person’s frontiers.

It’s all well and good to imagine your “perfect woman” or “ideal man,” or your “soul mate.” But those are creations of your own imagination. They want the same things you want, share your interests and dislikes, conform to your vision of yourself, and never question or make demands about the things—thoughts, activities, sacred beliefs—you hold dear. Your soul mate is a fiction: a boundless, smooth orb that is congruent with your idea of self, with appropriate gender alterations. You don’t have to deal with your perfect woman or ideal man. You don’t have to cross any boundaries.

And the reverse is true. You don’t have to let that ideal person cross your frontiers and invade your home territory, the center of yourself, because that fictitious person already lives there, in your imagination. And just as it’s scary to go and try new things you’ve never wanted to or even imagined experiencing, because the object of your growing affection loves them or demands them, so it’s scary to let a real person cross your frontiers and learn about—and try to deal with—the real you, your likes, fears, foibles, habits, and sacred beliefs.

This is when life comes up and smacks you in the face. If you can, you then lower your defenses, cross the frontier, and try to deal with a real person with real wants and needs. Maybe the differences are too great, the new territory too unknown, or too dangerous, or just too bizarre for you to enter and be comfortable. And then maybe the object of your affection, in her or his real self, becomes less desirable to you, less possible for you to be with. Not that they simply fail in being your ideal, or that they become something opposite, all sharp corners and bad angles, nothing like congruent at all. But just some of the ground you have to cover—the experiences you have to embrace and pursue in order to be with that person—is simply impossible for you. And then you learn from that encounter, adjust your definition of boundaries, and move on.

And perhaps your boundaries or frontiers are so wide, your walls so high, that no one but the ideal soul mate can get through, because they are already inside. Then, I am sorry to say, you will never have what you want. You will be alone in your garden, safe in your castle, and never know the terrors of opening up to someone unknown and the joys of finding that you can expand.

But if you can venture outside your comfort zone, take risks, cross boundaries, and in some cases redefine yourself, you will find the happiness of discovering that you are not alone in the universe, that someone else can share your joys and burdens, and you can walk the road of life together.

That’s the cold, sober reality, and you must make the best of it.

Sunday, May 23, 2021

Nothing on My Mind, Again

Black square

It appears that the concept of zero, and the negative numbers that precede it—follow from it?—came originally from India and were brought into the Western world by Arab traders. All of this happened in the 7th century AD, long after the fall of the Western Roman Empire and the earlier eclipse of the ancient Greek scientific culture. The Greeks, who apparently based their mathematics on geometry, never considered negative numbers or zero, as all their measurements in space were positive. The Romans, who were practical engineers and not theorists, drew straight lines and simple arches with positive numbers, which they identified with alphabetic notation. So, once again, I’m wondering where all this nothingness, this absence, came from.1

Modern life is permeated by numbers. This is part of the 17th-century scientific revolution—think of the countdown to zero in a rocket launch—and also part of the social revolution in personal literacy, which began with Gutenberg-style printing in the 15th century, and the economic revolution in finance and banking, which was started by the Italians and their letters of credit at about the same time. Today, almost everyone has a bank account, a checkbook, a credit or debit card, and a line of credit. We are adding and subtracting numbers all the time, and we all—or most of us—watch as that dreaded lower limit, zero balance, or even a potential overdraft, a negative number, approaches. We also watch clocks made of numbers and count the hours negatively until quitting time.

We think easily in terms of null and negative arithmetic. “How many supermodels did you date last year?” Zero. “How are you doing at the blackjack table?” Down by five hundred bucks.

Did the ancient Greeks or Romans not have these and similar, context-sensitive conversations? Well, probably. But not in a mathematical framework. “How many sheep do you have?” None—I don’t keep sheep. “What did you win betting on the chariot race?” Oh, nothing—I lost.

In a world not so conscious of modern mathematical concepts, a person focused on what was there, in existence, in front of their face. Yes, they could do subtraction: I had five apples and gave you two; now I have three apples. But the concept of zero was simply the concept of not having, not being, not knowing. It didn’t have a number. The idea of having fewer than zero apples, because you gave away more than you had, didn’t arise very often. And if you owed a debt, you didn’t think of it as a negative number in your bank balance but instead as a positive number that you eventually had to pay to someone else.

Are we better off for being more sophisticated about all this? Certainly, our kind of mathematics has enabled us to calculate with both positive and negative forces, compare tradeoffs, and create simulations of complex systems. It was modern mathematics that took us to the Moon and Mars, and some variant will take us to the stars. Zero is a real number, and negative numbers have real meaning, when you’re making these calculations.

For the ancients, the world was made up of substances. Their elements were earth, water, air, and fire. Even though the last two are gases, and air itself is invisible, anyone who has taken a deep breath and blown it out—or blown into a trumpet or a flute—would notice air’s liquid nature. Anyone who has watched fire curl around a candle wick or tremble in the wind could see that it is also semi-liquid.

The ancient Greeks and Romans never had mountains high enough that they could notice the air getting thinner the higher you went. They would not have come to the obvious conclusion, then, that at some point such thinness might lead to nothing at all, a vacuum. For them, the space above the Earth was a series of concentric spheres that held and propelled the orbits of the Sun, Moon, planets, and stars—and there might as well be air between those spheres as nothing at all. Only with modern aircraft and rocketry do we know that the air runs out about twenty miles above the Earth’s surface and all the rest is empty space.2

For the ancients, these substances were solids, not divided into tiny bits, and then even tinier bits, until you arrive at subatomic fragments too small to see or weigh. Democritus of Abdera did theorize about atoms and empty space, but he probably thought of those atoms as jostling around each other like marbles in a bag. The idea that atoms themselves are mostly empty space occupied by subatomic particles, and that everything we can see and touch is a lot more nothing than something, is a concept out of modern physics. On both the quantum and cosmological levels, we have to get our heads around the idea of nothing, non-being, emptiness on a mind-boggling scale.3

The poor human brain evolved in a world full of—and was adapted to deal with—real things. We survived by knowing about the tangible environment and manipulating objects and forces that could hurl a spear to bring down a deer or gather and carry roots and berries to a place of familial consumption. Our peripheral vision is cued to a trembling in the bushes—even if it’s only the wind, because it just might be a predator stalking us or a human enemy trying to ambush us. We are primed to experience and work with what’s there, and not what’s not.

But nothing is on our minds now. And its reach is growing, especially since the 20th century, when existentialist philosophers began to question why humans, the world, and everything in it even exist. Apparently, for them, nothing is the default state of the universe and the only thing that doesn’t have to be explained. So, at least for the French avant garde, our thinking on being and nothingness has come full circle.

Forgive me for occasionally having nothing on my mind. It’s been a slow week, and I’ve run out of ideas.

1. See About Nothing from June 25, 2017.

2. Except that modern physics insists on filling space with somethings. For example, Stephen Hawking solved the problem of the apparent evaporation of primordial black holes by positing the instantaneous creation and mutual destruction of particles and anti-particles in a vacuum, going on all the time, invisibly, everywhere. When one of those pairs happened to spontaneously erupt along the event horizon of a black hole, one of the paired particles fell into the hole and the other drew out a quantum of energy in response. And so micro black holes disappeared over time. … Oh, hell! Hawking might as well have said that pixies ate them.

3. How big is an atom? I’ve read somewhere that if the nucleus of a small atom like hydrogen or helium were the size of a fly inside Notre Dame cathedral, then the electron shells and their potential orbits would occupy the entire enclosed space. And all the rest would simply be empty.
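The fly-in-a-cathedral picture can be sanity-checked with rough numbers. A sketch, assuming textbook values for the Bohr radius and the proton charge radius (neither figure is given in the text):

```python
# Rough scale check for the fly-in-a-cathedral analogy.
# Assumed values (not from the text): Bohr radius of hydrogen and
# the proton's charge radius, both standard textbook figures.
BOHR_RADIUS_M = 5.29e-11    # typical radius of hydrogen's electron cloud
PROTON_RADIUS_M = 0.84e-15  # radius of the nucleus (a single proton)

ratio = BOHR_RADIUS_M / PROTON_RADIUS_M  # atom is ~60,000x wider than its nucleus
fly_m = 0.003                            # a roughly 3 mm fly
scaled_atom_m = fly_m * ratio            # the "cathedral" the atom becomes

print(f"atom/nucleus ratio ~ {ratio:,.0f}; scaled atom ~ {scaled_atom_m:.0f} m")
```

The scaled atom comes out at a bit under 200 meters across, which is indeed on the order of a large cathedral, so the analogy holds up to a first approximation.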

Sunday, May 16, 2021

Living with Ambiguity

Girl with magic box

The quantum mechanics conundrum of Schrödinger’s Cat1 is not an actual physics experiment but a famous thought experiment about the state of human knowledge and observation. Basically, it says that the universe goes on about its business and doesn’t reveal itself unless human beings, our active intelligence at work, actually stop and look. And sometimes, at least in the subatomic realm, the mere act of observing interferes with the outcome—as when the detection of a subatomic particle in flight by an instrument using a beam of photons interferes with either that particle’s position or its direction.2 So some things, at the most remote scales, are truly unknowable.

I would posit that the ambiguity of quantum mechanics has a lot more to do with everyday life than we normally admit. We are always faced with situations where what’s going on and what we know about it are separated. For example, did I get the promotion? Somewhere on one of the upper floors someone, or a group of someones, knows who got the nod for the job, but there’s no way—at least, no ethical way—of finding out until they announce it. And if you find out before it’s announced, that will likely change the decision. Does she love me? She knows, or maybe she doesn’t know yet, but there is no way for you to know until she declares herself by word or action. And your pestering her for an answer would change the relationship. Will the jury find me guilty or not? Again, the twelve members of the jury know, or soon will know, but you won’t find out until the verdict is read in court. And if you learned the verdict ahead of time, it would cause a mistrial.

In each of these instances, from the time the question arises until you open the lid and observe the cat, the question remains both “yes” and “no” at the same time. Both choices are in a state of superposition—at least as far as you are concerned—until you learn the answer and the two states are resolved into one. This is not a question of probability, although you can take odds or make bets with yourself about how you think the question will be resolved. But all of your weighing of factors and listing of pros and cons will not make a bit of difference when the question itself lies in the hands of others, of the management team, the girl, the jury … or the Geiger counter attached to the vial of poison.

This is the sort of ambiguity we have to live with all the time. In most cases, the superposition will resolve itself eventually. But sometimes the company’s fortunes change and the job is never awarded or announced. Sometimes the girl moves away or dies before she can accept or reject you. (And sometimes she says “yes” when what she means is “maybe” or “wait and see.”) Sometimes you get a hung jury, no verdict, or a mistrial. Some issues may never be resolved in your lifetime.

For example, I’ve always wondered about the true story of the Kennedy assassination. Did Oswald act alone out of disaffection, or was he a plant by the KGB after the embarrassments of the Cuban missile crisis and Bay of Pigs invasion? Did Jack Ruby kill Oswald out of patriotic sentiment, or was he sent in by the CIA to keep the lid on a foreign decapitation action that might have led to Congress declaring World War III? The entire Warren Commission report has been unsealed by now, years ahead of the actual date, due largely to the Freedom of Information Act. The commission’s findings suggest that Oswald and Ruby both acted alone, and supposedly there was no evidence of a coverup or international involvement. Still, I wonder. Since that’s as far as the investigation went, despite 552 witness depositions, 888 pages of documentation, and 3,100 exhibits, we will never know who outside of persons in the immediate U.S. might have been involved. So, in my mind, “Russian plot” and “angry gunman” remain in superposition, as do “CIA coverup” and “angry patriot.” At this point we will probably never know.

Another example of ambiguity is the mystery of the universe’s origin. When you rewind the expansion of the galaxies that we observe back over the 13 billion years of the universe’s calculated existence, you end up with a putative point, a tiny dense particle that exploded in the Big Bang. That is supposedly our cosmological creation story. But if you expand the observable universe from a single point to its current size, even allowing for everything to move at light speed, the calculated radius is smaller than the universe in which we actually find ourselves. This problem was supposedly corrected by the “inflationary period,” proposed by cosmologist Alan Guth in 1980, in which the whole shebang accelerated instantly a few microseconds after the Big Bang, so that it went from something with a radius of less than a subatomic particle to—and here various calculations give different answers—a cloud of matter somewhere between the size of a grain of sand and something on the order of nine meters in diameter. And then it all continued to expand normally from there.
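The mismatch described above can be sketched numerically. A minimal illustration, assuming the commonly quoted figures from standard cosmology (an age of about 13.8 billion years and an observable-universe radius of about 46.5 billion light-years; neither number appears in the text):

```python
# Why naive light-speed expansion falls short of the observed universe.
# Assumed figures (not from the text): age ~ 13.8 billion years;
# commonly quoted observable-universe radius ~ 46.5 billion light-years.
AGE_GYR = 13.8

naive_radius_gly = AGE_GYR * 1.0   # light travels 1 billion ly per billion years
observed_radius_gly = 46.5         # comoving radius from standard cosmology

shortfall = observed_radius_gly - naive_radius_gly
print(f"naive radius {naive_radius_gly} Gly vs observed {observed_radius_gly} Gly; "
      f"short by {shortfall:.1f} Gly")
```

The naive radius falls more than 30 billion light-years short, which is the gap that inflation (and the subsequent expansion of space itself, which is not bounded by light speed) is invoked to close.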

A third example of the currently unknowable—although not for lack of trying to detect it—is the relationship of matter and energy in the observable universe. From the way that the stars in spiral galaxies spin around their center—as if they were painted on a disk, rather than freely orbiting in the void—it would seem that these galaxies have more gravitationally bound material in them than the matter that shines brightly as stars. A lot more, as in several times as much. This is the “dark matter” that plagues cosmology. Either galaxies contain much more dust, gas, and both central and primordial black holes than our observations account for, or the universe is permeated by particles that affect gravity but are otherwise invisible and undetectable in every other way. And then, the universe itself is not only expanding, as if still impelled by that initial Big Bang explosion, but also its speed of expansion is accelerating at an alarming rate. So either the vacuum of space contains a mysterious force that increases with space and distance—a “dark energy” that is otherwise undetectable in our immediate neighborhood—or we don’t understand the basic structure of the universe and the real nature of the effects we observe as “space,” “time,” and “gravity.”

We can theorize about these things, but until we create better instruments and take better measurements, I think we have to live with the ambiguity of not actually understanding the universe. Many possibilities are in superposition, and not all of them can be true.

And finally, on the human scale, is the matter of human life, spirit, and what may lie on the other side of death. Is there a God or not? Do we vanish at death, like a candle flame when it’s blown out, or does some part of us—soul? ghost? brain wave? personality? memory?—exist for a time or perhaps for eternity? And there you can theorize, rationalize, believe, or doubt all you want, but only the actual experience of death will reveal the answer. And by then it may be too late to do anything about it.

Given all of this, and the example of Erwin Schrödinger’s cat to begin with, I must remain comfortable with ambiguity. I must accept that some things cannot be known until they are revealed, that others may not be revealed in my lifetime, and that some may never be revealed to any of us, no matter how long we live.

1. For those who do not know it, you imagine putting a cat into a box with a vial of cyanide and a striking mechanism that will break the vial and kill the cat when triggered by a random event, such as the decay of a radioactive element. Then you close the lid. You have no way of knowing whether the particle has decayed and the cat was killed until you actually open the lid again. So, from your perspective, the cat is simultaneously in two different states—called a “superposition”—of being both alive and dead. This composite state is not resolved until you open the lid, and then the cat is either alive or dead. But all of this pertains only to you, as the observer; for the cat, the effects are more immediate.

2. However, this question of observational interference is not part of the Schrödinger’s Cat thought experiment.

Sunday, May 9, 2021

Predicting the Future

Immortal dream

Last week I wrote about the rising curve of human technology since the 17th century and suggested what it might mean for the future. Now I’m going to dip a toe in the perilous waters of future gazing and see where, in the short term, such technologies might lead us.

But first, a few looks back, to see how quickly things have changed.

When the Washington Monument was completed in 1884, it was capped with a small pyramidal casting made of aluminum, chosen because it was likely to be a good lightning conductor. At the time, aluminum was fairly rare as a metal. It was not more valuable than platinum, as some have suggested, but it cost about $1 per ounce, which was the typical daily wage of the average construction worker on the monument.1 Although aluminum is a fairly common element in the Earth’s crust, it is bound there with oxygen as the oxide alumina, Al2O3, found in the red earth bauxite; it takes large amounts of electricity to drive off the oxygen—and electricity was in short supply in the late 19th century. Within fifty years, after the building of large hydroelectric dams in the U.S. West, aluminum became widely available—plentiful enough to dominate the skies with airframes made during World War II, and then cheap enough to make building siding, lawn furniture, and throwaway cans in the years after. Aluminum is the wonder metal of the 20th century, along with titanium and stainless steel.

And at the turn of that last century, according to the U.S. Department of Agriculture,2 41 percent of the American workforce was employed in farming. By 1930, that number had shrunk by about half to 21.5 percent; by 1945, to 16 percent; by 1970, to 4 percent, and by 2000, to less than 2 percent. Credit for the change goes to improved use of machinery and its overall efficiency; changes in land use, with larger “factory farms” and the loss of the picturesque but noncompetitive “family farm”; and the “green revolution” in the production and use of fertilizers and weed and pest controls, as well as the genetic modification of crops. All of those chemical and biological developments are still in play and will only increase in their use and effectiveness. So we can count on the next hundred years bringing us new hybrid crops and tougher, more robust, more nutritious food resources. The only limiting factors will be arable land and fresh water—and even the supply of those may change.

Okay, let’s start with water. Right now, humanity depends on two sources for the water it uses for drinking, bathing, toilet flushing, and irrigation: rain and snow falling from the sky and running off in the local river; and ancient rains trapped in subsurface groundwater and aquifers that are tapped by wells. The more use we make of those aquifers—which collect slowly over the ages and drain quickly under pressure and pumping—the less we have for the future. But the world’s surface is three-quarters water; the trouble is that most of it is laden with mineral salts and thus undrinkable and unsuitable for farming. The principle of osmosis has been known since the middle of the 18th century, and we have been filtering out the salts by reverse osmosis on an industrial scale since the middle of the 20th. We could live by this process today, except that it’s costly. But the cost is not so much in the plants themselves and their placement as in the energy they require. (And the cost has actually dropped for flash desalination, from $30 per cubic meter in the 1960s to about $1 in 2010.) If you have abundant electric energy to run the pumps, you can have all the desalinized sea water you can drink.
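To put a rough number on that energy dependence, here is a back-of-envelope sketch in Python. The specific-energy and electricity-price figures are my own illustrative assumptions (modern reverse-osmosis plants are commonly quoted at roughly 3 to 4 kilowatt-hours per cubic meter), not numbers from any particular plant.

```python
# Back-of-envelope desalination economics. Both figures below are
# illustrative assumptions, not data from any specific facility.
ENERGY_PER_M3_KWH = 3.5    # assumed reverse-osmosis energy use, kWh per cubic meter
ELECTRICITY_PRICE = 0.10   # assumed industrial electricity rate, $ per kWh

# The energy bill is simply energy use times the price of electricity.
energy_cost_per_m3 = ENERGY_PER_M3_KWH * ELECTRICITY_PRICE
print(f"Energy cost alone: ${energy_cost_per_m3:.2f} per cubic meter")
```

At these assumed figures, energy alone is about a third of the roughly $1-per-cubic-meter total cited for 2010, which is why abundant, cheap electricity is the key variable.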

So the key to the future of our water supply, our agricultural irrigation, and our population growth in general is going to be energy. Right now, like it or not, our most abundant energy resource is fossil fuels: coal, oil, and gas. Coal is abundant—we have about a thousand-year supply in North America at current consumption—but bulky to move and messy to clean up. Oil and gas can be piped to their users, and while we recently thought we were running out of easily tapped reserves—the specter of “peak oil”—technological advances in the form of horizontal drilling and hydrofracturing of oil and gas shales have extended our future. But—thinking in terms of centuries rather than years or decades—one day we will run out. And, in the meantime, hydrocarbons are much more precious as a chemical feedstock than as an energy source. Wind and solar power, being diffuse resources dependent on adequate siting, will not replace hydrocarbons in our energy future—not unless we change the landscape, harvesting solar power in orbit and beaming it down to huge diode fields on the planet’s surface, as in my novel Sunflowers, or planting windmills in mechanical forests along every ridgeline, as if they were trees.

But what I call the “enterprise of science” is well aware of the energy problem. Physicists and engineers all over the developed world are studying its production and storage from many angles. Although fusion power, electricity from the deuterium and tritium in sea water, always seems to be ten years off into the future, always receding from our grasp, one day we will figure out how to produce it, even if we have to invent artificial gravity to make it work. And once we have the design and formula worked out, we can adapt and scale it for efficiency. Biologists are at work, too, trying to use our newfound genetic ingenuity to manipulate algae into growing and secreting lipids, or hydrocarbon substitutes, from water and sunlight without adding to—but rather subtracting from—the atmosphere’s carbon-dioxide burden.

And about that carbon dioxide, greenhouse gases, climate change? All of that is this year’s daily fright, like Malthusian overpopulation or the collapse of “peak oil.” Listen: climate has always been changing, and people have always adapted to the new conditions. When winters came early and lasted longer at the beginning of the Dark Ages, they migrated south. When coastlines shifted, they moved inland. You can say the difference today is we have billions invested in shoreline real estate that is too valuable to lose. But in the Bay Area, where I live, a lot of that shoreline was saltwater marsh a hundred years ago; in another hundred years, it may be saltwater marsh again. Shrug. Changing climate, like most effects of weather, is an inconvenience, not a catastrophe. Consider the inconvenience of having to shovel tons of snow repeatedly every winter and what it does to the economy to dig out homes and plow the roads after every blizzard. And on the plus side, pushing the snow line farther north into Siberia and the Canadian tundra will open up new lands for agriculture.3

And speaking of the Reverend Thomas Malthus and his prediction that human population would rapidly outgrow agricultural resources, leading to worldwide starvation, that didn’t happen, did it? When I was growing up, we heard dire predictions about vastly overpopulated countries like China and India, where people were regularly starving. And Africa, where apparently people are still starving—although much of the famine appears to be genocide by political manipulation of the food supply. We have seen that, as countries develop economically and technologically, with a greater proportion of their population moving into the educated and skilled classes, the population and its growth rate tend to shrink. Much of Europe—at least among the demographic that populated Europe over the past thousand years—is now reproducing below its replacement level, figured at 2.1 children per couple. Japan has been below that rate for decades, and the U.S.—again for the population mix of the last century or so—is trending that way. When people become prosperous and educated, and their medicine saves most of their babies who would otherwise disappear into the infant mortality statistic, they have fewer children and generally treat them better, so they live longer, more productive, more satisfying lives. China, India, Africa, and South America will eventually catch up with this curve before the planet implodes.
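The arithmetic behind that demographic shrinkage compounds quickly. Here is a minimal sketch, deliberately ignoring mortality, migration, and generation length, and using an illustrative fertility rate of 1.4, roughly Japan’s in recent decades:

```python
# Relative size of each successive birth cohort when fertility runs
# below the 2.1-children-per-couple replacement level cited above.
# A deliberately simplified model: no mortality or migration effects.
REPLACEMENT = 2.1

def cohort_after(generations: int, tfr: float, start: float = 100.0) -> float:
    """Cohort size after N generations, as a share of the starting cohort."""
    return start * (tfr / REPLACEMENT) ** generations

# At an illustrative total fertility rate of 1.4, each generation is
# only two-thirds the size of the one before it.
print(round(cohort_after(1, 1.4), 1))
print(round(cohort_after(3, 1.4), 1))
```

After three generations at that rate, the cohort is down to less than a third of its starting size: sub-replacement fertility is a geometric decline, not a linear one.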

Add in the advancements that will come with the genetic revolution in biology and medicine, and most of the medical problems we see today will fade away. We will find ways to target and repair cancer cells. We will resurrect failing hearts and brains through tissue repair. Organ repair and replacement will become a matter of manipulating your own stem cells, as in my two-volume novel Coming of Age, rather than receiving an organ donation—willing or not—from another human being, with a lifetime of immune suppressants to follow. Issues of congenital and developmental conditions, susceptibility to the environmental causes of degeneration and disease, and the mystery of differentials in health and fitness among people will unravel as we analyze, predict, and eventually control all the biological processes of life. People will then live a lot longer, with even richer, more productive lives.

Normally, you would then expect the Ponzi scheme of Social Security and Medicare—where you need more and more young people working to pay for the care of ever more retired parents and grandparents—to collapse. But I don’t expect this to happen, and not because I believe the U.S. Congress will pay back the money it has drained from these funds for other uses. With more automation of materials extraction, manufacturing, and the supply chain and infrastructure that support them, the need for human hands to dig, make, and trade things is rapidly diminishing.4 That trend is only going to accelerate with artificial intelligence, 3D printing, and other physical amplifications of the computer age. The question is not whether we will have enough people to work in the economy, but how people will work to support themselves in the cornucopia of food, goods, services, and entertainments that is going to be showered down upon them.

I once thought that some form of Universal Basic Income—a global and permanent government dole—would be necessary to replace the “Protestant work ethic” with which my generation was raised (or, as my mother would say, “No work, no eat!”). But people are inventive and creative, and a life of easy handouts is not part of human nature. I think, instead, that people faced with an economy of predictable, unexciting, machine-made and -supplied goods and services will return to valuing human artistry and craftsmanship, at least in the areas that interest them. Yes, you can get a basic, particle-board-and-veneer desk at IKEA, but you’ll pay more for something hand carved with a flourish from a renowned local craftsman. And yes, you can watch computer-generated movies on the widescreen most evenings, but you will still hunger to go out and sit in a theater with other human beings to watch real actors speak from story lines reflecting varied human thought.

I am not a Pollyanna. The future won’t be all rosy. There will be dislocations, disruptions, and growing pains in the new world into which we are venturing. But we’ve had those difficulties along the way already and survived them. Our life today is unimaginable to someone living two centuries ago. If you had told a 19th-century farmer that in the 21st century one person would do the work of 100 of his kind, he would have despaired and wondered how his children would survive. For him, the change would be a huge, looming, unsolvable problem. But now we look back and can barely comprehend the drudgery and futility of his life, laboring sunup to sundown, hacking at the land with a horse-drawn plow, hauling buckets of water from the well and manure out to the field, just to feed his family and still have something left over to sell at market.

And come what may, people will still have imaginative and enticeable human spirits. They will still be able to look at a flower in the dawn light or the sea at sunset, breathe a sigh, and find a measure of contentment in the moment.

1. See The Point of a Monument: A History of the Aluminum Cap of the Washington Monument by George Binczewski, JOM, 1995.

2. See The 20th Century Transformation of U.S. Agriculture and Farm Policy by Carolyn Dimitri, Anne Effland, and Neilson Conklin, USDA Economic Research Service.

3. And climate change may not always be toward the warming side, regardless of the CO2 burden. Consider the change in sunspot cycles over the past twenty years at spaceweather.com. The last cycle, No. 24 since the Maunder Minimum of the 17th century, was much weaker than the previous cycle, which peaked around 2001. This and the apparently weak cycle No. 25 that we are now entering suggest we may be headed toward another solar minimum.

4. Watch any episode of the television series How It’s Made and count the number of human hands at work vs. the number of machines. We live in a mechanized age.

Sunday, May 2, 2021

The Rising Curve

Compound steam engine
Steam turbine blades

The rates of increase of a slope and a curve have different mathematical properties. A steady slope is generated arithmetically, by adding a constant amount at each step (1, 2, 3, 4 …), while a rising curve is generated geometrically or exponentially, by multiplying by a constant at each step (2, 4, 8, 16 …). A parabolic curve, at least the part that proceeds upward from its low point, is generated by the formula ax² + bx + c,1 and it can rise really fast. My contention here is that our technological advancement since about the 17th century has been on a parabolic curve rather than a slope.
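The difference between these growth patterns is easy to see numerically. A minimal sketch in Python, my own illustration rather than part of the original argument:

```python
# Arithmetic growth adds a constant; quadratic growth follows n squared
# (the rising arm of a parabola); exponential growth doubles each step.
steps = range(1, 9)
linear      = [n    for n in steps]   # 1, 2, 3, ...
quadratic   = [n**2 for n in steps]   # 1, 4, 9, ...
exponential = [2**n for n in steps]   # 2, 4, 8, ...

for n, l, q, e in zip(steps, linear, quadratic, exponential):
    print(f"step {n}: linear={l:3d}  quadratic={q:3d}  exponential={e:3d}")
```

By step 8 the straight line has reached 8, the parabola 64, and the doubling sequence 256: even a fast-rising parabola is eventually dwarfed by true exponential growth.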

In ancient times—think of Greece and Rome from about the 8th century BC, the first Olympiad for the Greeks, or the mythical founding of their city for the Romans—there was technological advancement, but not even a slope. More of a snail’s pace. The Greeks had their mathematicians and natural and political philosophers, like Pythagoras and Aristotle, but aside from writing down complex formulas and important books, which probably only a fraction of the populace bothered to read, their works did not materially improve everyday life. The Greeks never united their peninsula politically, for all their concept of democracy, remaining stuck at the tribal and city-state level of conflict. And from one century to the next, they drank from the local wells, shat in the nearby latrines, and traveled roads that washed out every year with the spring floods. They built the temples of their gods in marble, but otherwise the average people lived in houses of wood and mud brick not much different from those of their predecessors in Homeric times, five centuries earlier.

The Romans did somewhat better, being short of actual philosophers but abounding in practical engineers. They developed a democratically based political and military system that united their peninsula and went on to conquer most of their known world. They built huge aqueducts to bring fresh water into their cities from distant springs, underground sewers to take away human wastes, and roads dug many layers deep into the ground that could reliably move goods—and armies—from one end of the empire to the other. They built temples and palaces in marble laid over brick but also invented a synthetic stone, concrete, that their engineers originally made from a volcanic ash known as “pozzolana.” Common people in the city lived in apartment blocks called insulae, or “islands.” They bathed regularly and made a civic virtue of the practice. Life was better under the Romans, but technological advancement was still glacially slow.

Rome, at its fall in the Western Empire during the 5th century AD, was technologically not much different from the Rome of Julius Caesar, five centuries earlier. And that fall—due largely to climate change and the ensuing barbarian migrations—plunged Europe into a Dark Age that saw small advancement in any of the arts, although we did get some practical technologies like the wheeled plow and the stirrup. Those, along with gunpowder adopted from Chinese fireworks and movable type adapted from the Chinese by Gutenberg in the 15th century for printing bibles, carried us through to the 16th century, the time of the Tudor reign in England or the Medici in Italy.

After that, technologically speaking, all hell broke loose.

Some might credit René Descartes and his inventions of analytic geometry and a systematic method of doubt and deduction; or Francis Bacon and his program of observation and experiment; or Isaac Newton and his invention of the calculus (also developed independently by Gottfried Leibniz in Germany) and his studies of gravity and optics; or Galileo and his work in physics and astronomy. Intellectually, it was a fruitful century.

But from an exhibit that my late wife prepared at The Bancroft Library years ago, I learned that a more immediate change came about with the exploration of distant lands and when the European trading companies set up to exploit them began importing coffee and tea into the home market in the 17th century. Before then, people didn’t drink much water because of rampant contamination; so instead they drank fermented beverages—sweet wines, small beer, and ale—because alcohol helped kill the germs, although they didn’t think about it in those terms. So they would sip, sip, sip all day long, starting at breakfast, until everyone was half-plotzed all the time. But then along came coffee and tea, which were good for you because you had to boil the water to make them. Everyone brightened up and began thinking. The denizens of Lloyd’s Coffee House in London invented insurance companies to protect the sea trade, which required estimates of risk and probability, and that led to a whole new branch of mathematics and the spirit of investment banking.

Put together scientific investigation with the widespread availability of printed books and the clear minds to read them, and we’ve been on that rapidly rising parabolic curve ever since.

We are just over three hundred years from the first steam engine, patented in 1698 to draw water from flooded mines. In the time since then, the engine has gone from triple-expansion cylinders to turbine blades. And that is the least of our advances. This year, we are just two hundred years from the first primitive electric motor, built by Michael Faraday in 1821. And now we have motors both small and large driving everything from trains, elevators, and cars to vacuum cleaners and electric shavers.

In my lifetime, I have seen music go from analog grooves cut into vinyl disks and magnetic domains on paper tape to digital representations stored on a chip, and photography go from light-sensitive emulsions on film and paper to similar—but differently structured—digital sequences on chips. My electric typewriter—again driven by a small motor—has gone from impact-printing metal representations of the alphabet on a sheet of paper to storing different digital sequences on that same chip in my computer. All of this puts the stereo system, camera, and typewriter I lugged off to college fifty years ago into a single device that started out on my desktop, migrated to my laptop, then moved into my hand inside a smart phone, and now lives on my wrist instead of a watch. And the long-distance call I made every week from college to my parents at home was once a direct wire connection established by operators closing switches; it would now be a series of digitized packets sent out through the internet and assembled by computers at each end of the conversation. Gutenberg’s process for printing words on paper is now embodied in the photo-masking of electronic circuits on silicon chips. And we’re not done yet.

In 1943, British codebreakers built Colossus, the first programmable electronic computer, to crack German ciphers for the Allies; Alan Turing’s earlier electromechanical Bombe machines had attacked the Enigma code itself. In 2011—just 68 years later—that machine’s linear descendant, IBM’s Watson, was playing, though not always dominating, the trivia game Jeopardy!, based on history, culture, geography, and sports and dependent on linguistic puzzles and grammatical inversion. While that was a stunt, similar “artificially intelligent” systems based on the Watson design are now being sold to businesses to analyze and streamline operations like maintenance cycles and supply chain deliveries. They will take the human element, with its vulnerability to inattention, imagination, and corruption, out of processes like contracting and medical diagnosis. Any job that involves routine manipulation of repetitive data by well-understood formulas is vulnerable to the AI revolution.2

Add in separate but related advances in materials, such as 3D printing—especially when they learn how to make metal-resin composites as strong as steel—and you get disruption in much of manufacturing, along with the global supply chain.3

Any theory of economic value that depends on human brawn—I’m looking at you, Marxists—or now even human brains is going to be defunct in another half century. That’s going to be bad news for countries that rely on huge populations of relatively unskilled hands to make the world’s goods, like China and India.

Intelligent computers are also able to do things that human beings either cannot do or do poorly and slowly. For example, in November 2020, Nature magazine reported on an AI that can predict and analyze the 3D shapes of proteins—that is, how they fold up from their original, DNA-coded amino acid sequences—almost as well as the best efforts of humans using x-ray crystallography. And this was just 20 years after the first sequencing of the human genome using supercomputers, and only 66 years after the first glimpse of the DNA molecule itself using x-ray crystallography. Knowing the structure and thereby the function of a protein from its DNA sequence is a big deal in the life sciences. It will take us far ahead in our understanding of the chemistry of life.

Ever since the 17th century, our technology has been riding a curve that gets steeper every year. And the progress is not going to slow down but only get faster, as every government, academic institution, and industrial leader invests more and more in what I call this “enterprise of science.” Anyone who reads the magazines Science and Nature can see the process at work every week.4 We all stand on the shoulders of giants. We stand on each other’s shoulders. We build and build our understanding with each advance and article.

This rate of increase might be slowed, marginally, by a global depression. We might be set back entirely by a nuclear war, which might revert our technological level, temporarily, to that of, say, the telegraph and the steam engine. But it will only be stopped, in my estimation, by an extinction event like an unavoidable asteroid or comet strike, and then so much of life on this planet would die out that we humans might not be in a position to care.

As to where the curve will lead … I don’t think even the best science philosophers or science fiction writers really know. Certainly, I don’t—and I’m supposed to write this stuff for a living. The next fifty years will take us in perhaps predictable directions, but after that the effects on human economics, culture, and society will create an exotic land that no Asimov, Bradbury, or Heinlein ever imagined. Fasten your seat belts, folks, it’s going to be a bumpy ride!

1. That’s a quadratic equation. And no, I don’t really understand the formula’s properties myself, having nearly flunked Algebra II.

2. But no, the computer won’t be a “little man in a silicon hat,” capable of straying far outside its structural programming to ape human intellect and emotions—much as I like to imagine with my ME stories. And it won’t be a global defense computer “deciding our fate in a microsecond” and declaring war on humanity.

3. It’s become a commonplace that the U.S. lost its steelmaking industry first to the Japanese, then to the Chinese, because they were more advanced, more efficient, and cheaper. Not quite. This country no longer makes the world’s supply of bulk steel for things like pipe, sheets, beams, and such. But so what? We are still the leader in specialty steels, formulations for a particular grade of hardness, tensile strength, rust resistance, or some other quality. Steelmaking in our hands has become exquisite chemistry rather than the bulk reduction of iron ore.

4. For example, just this morning I read the abstract of an article about adapting the ancient art of origami to create inflatable, self-supporting structures that could be used for disaster relief. I read and I skim these magazines every week. And frankly, some of the articles, even their titles, are so full of references to exotic particles, or proteins, or niches of mathematics and physics that I can only guess as to their subject matter, let alone understand their importance or relation to everyday life.

Sunday, April 25, 2021

Understanding Alien Psychology

Borg Queen

I have been thinking and blogging about the potential for finding and understanding aliens a lot recently, ever since reading Avi Loeb’s book on the interstellar object ‘Oumuamua, Extraterrestrial: The First Sign of Intelligent Life Beyond Earth. Now I am heading into realms that are totally unknowable—except from the viewpoint of what we know on Earth. But bear with me …

First off, I am not too interested—well, very, but not for the purpose of this meditation—in simply finding signs of life. We’ve seen things that could be confused with fossilized cells in the surface geology of Mars, and we suspect we might have found gases that could only be created by life-as-we-know-it in the atmosphere of Venus. When we get to other planets, both in this solar system and around other stars, we may well find chemical reactions and physical structures that we, from inside the realms of earthly biology and human understanding, define as “life.” Some of it may be intelligent but a lot of it, like most of the living forms on Earth, will not be what we choose to call “intelligent” or even “sentient.” Slime molds, for instance—honking huge single cells with eukaryotic nuclei—can move toward food and away from irritants in a fashion that seems to be intelligent or at least resembles neural networking.1 But it’s not going to build a rocket and come visit us.

When I think about aliens, I imagine the kind that will leave their planet and come out among the stars, as so much of Western-civilized humanity apparently hopes to do one day. And until we go out there, we’ll just have to wait for them to come to us.

So, first question. Will they look like us? Or even come close—like the various humanoid species that populate a Star Trek episode? I don’t believe it. As Carl Sagan once said, we’d have more success mating with a petunia than with an extraterrestrial lifeform.2

Earth has a long history of large, active lifeforms that might have developed intelligence but, as far as we know, did not. The dinosaurs come to mind: the family Tyrannosauridae and their cousins were bipedal, oxygen-breathing hunters and perhaps also scavengers, and probably—maybe—at the top of their food chain. But we have no evidence that they exhibited any real intelligence greater than that of a lion or house cat, wolf or dog, or even a shark. And yet the dinosaurs’ distant progeny, the Corvidae family—crows and ravens—as well as many other species of birds have a kind of intelligence we cannot explain. Even octopi—what? a mollusk?—exhibit a high level of intelligence. So size, shape, and mammalian ancestry are not necessarily prerequisites for intelligence. Still, none of these animals from Earth and its history is going to build a radio or a rocket anytime soon.

But the examples from Earth also suggest that we cannot expect to find our own kind of intelligence out among the stars, even if it wears an unexpected shape or inhabits an environment—like the earthly ocean of octopi and whales, or perhaps the liquid water under Europa’s ice, or the methane seas of Titan—in which humans don’t particularly thrive. Life on Earth did not, after all, start out on the land, although that’s probably the best place to build a radio or launch a rocket above the atmosphere.

One particular axis we are likely to encounter in alien psychology is that between the individual and the group. So far, on Earth, the sort of intelligence that is likely to expand to encompass curiosity, technology, and eventually space travel is fixed in individual entities. We humans are separate and complete persons inside our own bodies. We are socialized into groups, certainly, in which we can function for enhanced performance. But we do not become lost, stricken, enfeebled, and die when separated from our group—or at least not right away. And we see this pattern not only in human tribes but also in monkey troops, wolf packs, whale pods, cattle herds, and other social groupings.

Each of us knows or tries to find our place in the group, establish a niche where our capabilities and levels of aggression or empathy best fit, and seek comfort and contentment—or at least a subdued level of rebelliousness—in that placement. We are social animals. And that is the basis of all culture and intergenerational achievement. The fabled lone wolf, the mad scientist, the antisocial genius who works alone and keeps his notes in a secret code—such beings are of interest to us as fiction but they are not the creators of lasting culture or enduring civilizations. They build no great cathedrals, establish no great cities, lead no great social or political movements—and they don’t send rockets to the Moon.

So, we think, the kinds of intelligence we will find out among the stars will be like us in that: socialized individuals, each with his, her, or its own personality, preferences, anxieties, and dreams.

But we have another example on Earth to draw on: the hive mind. Whether in the beehive, the anthill, or the termite colony, the individual entities—the minds inside the separate bodies—are not really individuals as we humans understand the term. They are physically adapted to their tasks and place in the hive structure, and their minds are shaped—one might say innately programmed—to perform those tasks and not question their role. Even the queen is not a ruler or leader but simply the pampered sexual progenitor, the mother of them all, that ensures the colony’s survival and renewal.

Something of this was captured in the movie Star Trek: First Contact, where we are introduced to the Borg Queen (pictured nearby with Alice Krige playing the part). But although the Borg are a collective of mechanized humanoid lifeforms whose brains are electronically networked, they are not really a hive and the queen is not their mother nor their first member. The queen speaks with a voice and persona that can call up the collective mind but can also examine it, see its options and possible choices in context, contrast their existence with what she knows of humanity, and evaluate the Borg from the outside. She is more like a leader or first speaker than the sexual progenitor of the collective.

When we look to more imaginative literature on the idea of the hive as a society, the offerings are few. To my thinking, Frank Herbert has done some of the best work on this. His novel Hellstrom’s Hive examined what it would take to change human beings into the sort of social insects that could function most efficiently in the politically denuded world. And his novel The Green Brain imagined a hive of Amazonian insects that functioned as a single conscious entity, in the same way that the cells in our human bodies work together to create the reality—or perhaps it’s just the illusion—of a single person with independent will and desire.

We might encounter some variation of this colony structure, this collective intelligence that is not separated by the strands of individual personhood, out in the universe.

The question in my mind is whether this kind of intelligence is creative or merely reactive. A colony of honeybees or ants can adapt to its environment, find flowers or other foodstuffs when the weather is right, make its nest or hive nearby, and deal effectively with changes in environment and temperature, or else swarm to find a new nesting site. But can they only think and react to present and immediate needs? Could they eventually engineer changes in that environment? Could they look beyond the immediate locale and imagine ways to make it different? Could they look out at the stars and dream of visiting them? Or are they bound to the world as they know it, in a way that human societies are not?

Every group of socialized individuals is built of both leaders supported by their pack of generally submissive followers and then the potential outsiders, the rebellious youth and the mad geniuses, who question the social order, its structure and purpose, and seek something new or just different. At least, that’s how the human tribe has functioned and flourished. That is how we broke the bonds of merely reacting to our hunter-gatherer environment, engineered a better life through agriculture, created written records to preserve intergenerational knowledge, adapted invention and technology to improve everyday life, and then looked outward to the stars.

In our experience, based on that group of socialized individuals, progress depends upon imperfections in communication, upon differences of opinion and individual dreams, upon disagreements and conflicts. These are the very things that the anthill or the beehive cannot survive. The resolution of these disruptions is never pretty and neat, and it’s never complete and finalized.

But without disruption and disquiet, you have the structured cooperation, the orderly processing and virtual stagnation, of the colony animal. That, or the brainless neural-net reactivity of the slime mold. And neither of them, I warrant, will be coming here anytime soon.

1. See, for example, Mycologist Explains How a Slime Mold Can Solve Mazes, from Wired.com in 2019.

2. But maybe we are connected after all, as in the panspermia hypothesis. See, for example, my meditation on The God Molecule from May 28, 2017.