Sunday, February 27, 2011

Automation, Work, and Personal Meaning

Many people these days are concerned about our “jobless recovery” from the Great Recession of 2007-10. Others are concerned because they see manufacturing jobs being sent overseas—usually to China and India, where people will do manual labor and rule-driven, narrowly focused work like writing computer code for a fraction of the prevailing wage in the United States. If only we could recover our economic momentum, they think, and take back those jobs now done overseas, prosperity would return.

It’s not going to happen. We at the turn of the 21st century are in the middle of an economic sea change in manufacturing as momentous as the one that was transforming farming at the turn of the 20th century. In 1860, at the start of the Civil War, farmers accounted for 58% of the labor force: more than half the jobs in this country were on the land. By 1900 that had dropped to 38%—still over a third of the jobs. By 1930, farm jobs were down to 21% of the labor force. By 1960, only 8.3% of Americans worked on the farm, less than a tenth of the work force. And today, it’s less than 3%.

If you were laid off on a farm in 1911, what do you think your prospects of finding another job in agriculture would be? What would your children’s prospects be? Needless to say, during this dwindling of farm jobs, actual farm output did not decline but instead grew dramatically. Mechanized farming, pesticides, new technologies in food storage and processing—they all created more food than we could eat while employing fewer people. At the same time that farm jobs were disappearing, the number of factory jobs was rising, and they absorbed the laid-off workers. So, for the last century at least, the problem of employment was solved.

It’s common to say that U.S. manufacturing is in a slump these days, as more and more goods appear to be made in China and India. Don’t believe it. Chinese industrial output hasn’t even caught up to that of the U.S. We still make about 40 percent more goods than the Chinese. China only recently surpassed Japan—whose economy has been stagnating for two decades—to become the world’s second-largest economy.

So it’s not that the United States doesn’t make things. We make lots of things—often larger and more complicated things than the Chinese. Sure, most of the hand tools like hammers, consumer goods like pots and pans, toasters and toys, and personal electronics like iPhones and iPads are stamped “made in China.” But in the high-stakes, high-profit-margin, cutting-edge industries like jet airliners, earthmoving machines, and scientific equipment, the U.S. still dominates.

The difference today is that automation is turning most of this manufacturing over to machines—usually machines designed and made in this country.1

It’s a truism to say that we live in the “information age.” Most of the new jobs are in a field that’s collectively called “symbol manipulation.” You analyze the information needs in one industry or another and design the software that will find, develop, and display that information.2 You interpret computer simulations and design the cuts that machines will make to create the prototypes of faucet handles or jet engine vanes or fuel pump housings. You read marketing trends gathered from computer surveys and put together a marketing plan. You monitor the displays on automated machines and call out the fix-it teams when they break. You interface between the computer screens showing procurement or inventory or logistics status and the people who want to sell you products, make your products, or buy them. You write songs or video games or books or screenplays for people who hunger for entertainment.

The actual work of cutting metal, molding and casting, and moving product will be done by machines, assisted by an ever dwindling number of human hands. The power of the machine amplifies human power manyfold. The backhoe replaced the ten humans who once wielded picks and shovels to dig a ditch with one person who drives the machine, plus a tenth of a person who maintains it (while simultaneously tending nine other machines). Similarly, one driver of an eighteen-wheel semi hauls about 22 tons of merchandise, or twenty times the load of a wagon driver. One faucet designer using a computer-aided design and manufacturing (CAD/CAM) program can replace ten human craftsmen shaping and fitting brass parts to make faucets by hand.3

Jobs lost to automation and mechanization are simply not coming back. You would not want to pay the price of a lawn mower assembled in America by the hands of a factory worker making a living American wage. You wouldn’t even want to pay for a handcrafted lawn mower made in China by someone making Chinese wages.

The good news is that greater productivity, making more goods available more cheaply, creates greater wealth. The economy is like an ecology: the more economic activity there is, the more economic opportunity there will be.4 Greater wealth and leisure mean more money and time for people to consume songs, video games, books, and movies—not to mention those handcrafted coffee mugs and clothes and furniture that artisans want to make and sell.

But greater productivity also turns the economic equation on its head. Consider that for a hundred thousand years of hunter-gatherer culture, good times were defined by having more food on hand than you could eat—a feast at the table, bulging granaries. That equation turned over sometime around the start of the 20th century, when a bumper crop came to mean an oversupply of grain (or corn, or pigs, or cattle) and falling prices for those still in the farming business. In our economy, the problem is not too little production of cars or television sets or cell phones or new housing starts. The trouble starts when demand falls and inventories back up, driving prices down and leading to the layoff of the workers, and the idling of the productive assets, that remain in manufacturing (and in marketing, logistics, finance, and human resources).

We’ve solved the production part pretty handily. It’s the rest of the equation that is going out of whack.

The economic question of the 21st century—the only economic question of the 21st century—is what do the rest of the people do? Blog and write their memoirs? Make handcrafts that collectors can buy? Write marketing brochures for artisan shops? Get advanced degrees in folk dancing? Play golf? Smoke dope? We are coming upon a time when everyone's basic physical needs will be taken care of by machines with the active, employed participation of perhaps ten or twenty percent of the population. So what do the rest of us do? This is not just a question of making money so people can buy this stuff—supplying buying power is the easy part—but of how we make their lives meaningful.

For most people, the meaning in their lives comes from work and family. You are what you do. You are who you love. We’ve pretty well exploded the family component through our culture’s narcissistic focus on personal happiness, with a corresponding rise in divorce rates and out-of-wedlock children. We’re now losing the work component through rising automation. This also suggests why young people are rioting in Tunisia, Egypt, Libya, and Britain: they don’t see a lot of personal future in the way things are going.

The human race has faced this problem before. When hunter-gatherers sat down in river valleys to grow grains and tend livestock instead of walking over the landscape picking up what food was available, the sudden productivity of the peasant classes meant that large numbers of people in urban concentrations were suddenly without useful employment. They were fed, but they weren’t busy. The response of the growing managerial and priestly classes was civic works: pyramids in Egypt, the Nazca lines in Peru, cathedral and castle building in Europe. Another response was weapons training and near-constant war.

Once again we have large numbers of people who will have enough to eat but not enough to do. For those not content to write poetry and smoke dope, what is an acceptable purpose for a human life? Answer this question robustly—without any genie-in-the-bottle solutions like “smash all the machines”—and you can be the next great Nobel laureate in economics.

1. See “Gutenberg and Automation” from February 20, 2011 in My Blog History.

2. Note that you will, in most cases, only design the software, which is the creative work. The code itself will be assembled by an automated program or written line by line by programmers in India or Russia who closely follow the grammatical rules of the programming language to implement your design: you plan, they type.

3. Here’s another example of how technology is amplifying the input of human labor and so eliminating jobs. While working at Pacific Gas & Electric Company in the 1980s, I learned that the basic work unit for tending the electric distribution system is the “line truck,” the crew that drives out to a downed power line and fixes it. Long before I got there, in the 1930s, the truck crew was five people who climbed ladders to make repairs, up and down until the job was done. By the 1950s and through the 1980s, the crew had been reduced to two “linemen.” They climbed the wooden utility poles with belts and leg spikes, like lumberjacks, to make repairs. They were supported by a third person, the “groundman,” who passed tools up to them with a pulley system (so they didn’t have to climb up and down repeatedly) and who could assist if one lineman belted to the pole had an accident (in which case his partner, similarly belted and spiked to the pole, would have trouble helping him down). By the end of the 1980s, the basic line crew was reduced to two people operating out of a bucket truck, which hoisted them to the top of the pole on the arm of a cherrypicker. The groundman had disappeared from the equation. The line crew went from five people to two through successive waves of technology adoption.

4. See “It Isn’t a Pie” from October 3, 2010 in Politics and Economics.

Sunday, February 20, 2011

Gutenberg and Automation

It’s hard to believe that, up until the late 1700s, every item you could buy—with the one exception described below—was made individually by hand. From a piece of pottery or textile to a complex mechanical device like a firearm, every item was planned, shaped, and assembled by the hands of a craftsman. Each was unique. Even the screws in the gunlock were cut by the hands of a mechanic. This was the pre-automation age.

The exception, of course, was books, pamphlets, and newspapers. Beginning with Johannes Gutenberg in the mid-1400s, printers were turning out paper documents with an entirely new system. Rather than hand-copy each document letter by letter on parchment or vellum, a printer set up a master image of the page in lead type, placed it in a press, and ran off whole pages and folios one after the other. The printer was essentially creating a mold or a prototype and then making faithful copies of it as needed.

In 1778, at the suggestion of a French general, Honore Blanc began producing firearms from interchangeable parts. This required parts made to exacting specifications, so that they could be assembled into a workable musket without artisan-style filing and fitting. That was the beginning of an industrial mindset.

In 1803, Joseph Marie Jacquard saw that the patterns of woven cloth, which tend to be repetitive, could be described by a number of steps that manipulated the threads of warp and weft. He set up an attachment to a loom that read those steps from a series of punched cards and so automated the weaving process. Like Gutenberg, Jacquard used a complicated setup procedure to turn manufacturing into a simple, repetitive process. He also foreshadowed the concept of the executable “program,” which was later adopted by Charles Babbage.

Gutenberg-type thinking—complicated setup, easy execution—came to dominate the world of manufacturing through reusable molds and injectable substances like plaster and various resins (also known as “plastics”), or through the dies that shape hot metal in the forging process. A master craftsman designs and makes the prototype to be cast into a mold or cut into a die, and machines do the rest.1

More than that, molding led from Edison’s first phonograph in 1877—in which the sound waves were mechanically inscribed for immediate playback on tinfoil sheets wrapped on cylinders—to master disks on which those waves were captured only so that they could be stamped into wax or vinyl copies and sold cheaply. And so reproduced sounds came into the home and the broadcast studio.2

With this running head start, the trend toward reproducibility and automation has grown to the point that, today, virtually anything you buy is machine processed or machine made. The only exception would be high-status items that you might buy precisely because they are hand made and reflect the craftsmanship or artistry of an individual. For everyday consumables, it’s machine made all the way.3

Automation implies many things for the industrial process in addition to machine manufacture. Usually, with the standardization of parts, it means the standardization of products—that is, more sameness. But with a few programming changes and a few extra steps in the processing line, remarkably customized products are also possible.

Automation also tends toward fewer and more modularized parts, as engineers rework the manufacturing process again and again. When I worked at Northern California’s major energy utility, Pacific Gas & Electric, in the 1980s, I had the chance to see this in action when I toured the plant of one of our newest customers, New United Motor Manufacturing, Inc. (NUMMI). This manufacturer had just set up in the old General Motors assembly plant in Fremont, California, to make Chevrolets and Toyotas.

The first thing I noticed was that the parking lot, big enough for a thousand employees’ cars, was only a third full. I asked the manager if it was a holiday or a reduced shift, and he said no, they were at full capacity. In the body assembly area, workers pushed pallets of frame parts, fenders, and doors into position. Robot arms reached out to place them on a jig, and welding heads dipped down to make the hundreds of spot-welds that hold the car together. In the paint shop, whole car bodies were dipped in primer and paint and dried in thermal tunnels. In final assembly, workers added the dashboard instruments by clicking a module into place and connecting a multiplug. (In earlier days, a car’s instruments were separate gauges and dials, which required someone to drill holes in the dashboard, insert and secure each instrument, and connect it into separate wiring harnesses for information and power.) Partly, this modularization was due to digital advances—the new instrument cluster was essentially a computer display with various readouts from the car’s engine management system—but partly it was due to engineers redesigning the product to fit a streamlined assembly process. All of this automobile assembly work employed complex machines in place of human workers, hence the much smaller parking lot.

Automation also enables the manufacture of parts that human hands simply cannot make. When I reported for Read-Rite, maker of the read-write heads for computer disk drives, in the 1990s, I learned that people could assemble the heads when the transducers were fairly large, about the size of a grain of rice. But when the transducers became too small, about the size of a grain of sand, machines had to take over. More recently, a maker of the memory chips used in flash drives and cards, Lexar, released a video describing its plant in Utah as employing thousands of people. But most of them are engaged in testing, final case assembly, and retail packaging. Making the chips themselves—described as requiring 800 different process steps and taking a month to complete—is basically a Gutenberg-type printing and etching operation performed by machines in clean rooms. Humans are simply too imprecise, not to mention dirty, to be allowed to do this detailed work.

As we move into the 21st century, which has been billed as the century of the life sciences, Gutenberg raises his head again. Many of the manufacturing processes of the future will involve the creation of complex chemicals—new pharmaceuticals, refinable oil from the lipids in algae, exotic fibers like spider silk, vulnerable commodities like raw latex4—from animal, plant, and microbial cells with modified genomes. Once again, scientists will put their creative energy into the up-front work of programming these genomes; then they will simply turn the cells loose in vats of nutrient broth, or in ponds exposed to sunlight, so that they can multiply and make product.

Automation is the way of the future—not because managers and factory owners are greedy and would rather employ tireless machines than people who constantly demand better wages, benefits, and perks—but because humans simply can't make these things. For anything more complicated than a hammer, you need a precision machine. For something as complicated as a jet engine, you need vanes, shafts, and bearings all manufactured to micron tolerances. A blacksmith or even a master tool and die maker can’t work this reliably, over and over again, to make the thousands of parts needed for one engine. You need the craftsman to shape the first vane, perhaps creating the design by following a computer simulation of the airflow. After that, computer-controlled machines will forge, cut, and finish the thousands of actual working vanes in the engine.

We in the United States feel bad because we let the Chinese make our hammers and our pots and pans, and do the final case assembly on our iPhones and iPads. We let them do this work because China has lots of hands willing to work for a fraction of U.S. prevailing wages. But the guts of the iPhone or iPad, the chips, displays, and other high-tech goods—whether made in Utah or Guangzhou—are still made by machines. And when Chinese hands become too expensive for the assembly work, it will move to Bangladesh or the Sudan. Or, more likely, to a really fast machine located anywhere in the world, even back in the U.S. if the tax and infrastructure conditions are right.

That is the way of the future. More on what this means for the economy next week.

1. It’s ironic that in the early 2000s the world of documents—books, pamphlets, and newspapers—is now moving away from the printed word and toward direct display of the electronic word on screens. Digital electronic bits and bytes, being more stable and reproducible than analog waves, lend themselves better to copying and transmission.

2. See “Coming at You” from October 24, 2010 in Science and Religion.

3. If you don’t believe this, watch any episode of How It’s Made on the Science Channel. What you see is machines making things and dropping them in bins. Human hands are generally visible only loading the raw materials and transferring finished products.

4. The industrial world’s dirty little secret is that natural rubber, or latex, has advantages over the best synthetics but is susceptible to a leaf blight common to its native South America. That’s why densely planted acres of rubber trees at Fordlandia in Brazil failed. The world’s rubber comes from plantations in Southeast Asia, where the blight has not yet arrived. A future infestation—all too possible, given the travel and transport opportunities of the modern world—could wipe out our supply of this valuable commodity. Production with modified cells in a vat would change all that.

Sunday, February 13, 2011

Athens and Rome

At the end of World War II, the United States found itself the last superpower standing. We were so strong, with an economy built up by wartime production, that we could afford to rescue our enemies, the Germans and Japanese, from economic collapse and rebuild their economies while simultaneously offering aid and support to our former allies. But the U.S. was quickly confronted by the rising power and ambition of the Soviet Union, whose ordering principles based on collectivism and personal repression were antithetical to American ideals of a market economy and personal liberty.

The U.S. was compelled to expand its sphere of influence worldwide and attempt to shape local politics and economics in its own image in order to prevent a rising tide of darkness. The result was 45 years of Cold War chess games: stalemate in Eastern Europe, check in the Caribbean and South America, check in Southeast Asia, checkmate in Russia.

In the past 20 years, the U.S. has once again become the last superpower standing. We are, indeed, being challenged economically by the Asians: first Japan and the Four Tigers, then China and India. And we—that is, the entire western democratic world—have been challenged on issues of religion, politics, and social organization by a rising fundamentalist interpretation of Islam fueled by our own petrodollars.

With the impending collapse of the secular states that U.S. and western policy1 created and have long supported in Egypt, Jordan, Syria, Iraq, Yemen, Turkey, and Afghanistan—a collapse that many fear will follow the revolutionary pattern established by the mullahs of Iran—the basic question of intent now confronts us: How far will we go to promote our view of the world?

It was one thing to oppose a totalitarian philosophy of militaristic socialism having world-dominating ambitions, first in Germany and Japan, then in the Soviet Union and the People’s Republic of China. Their brand of economic and personal repression was relatively new—generally less than a century old, found in the writings of a few radicals like Marx and Lenin—and won the hearts and minds of most converts only at the point of gun and bayonet. But what does it mean for the U.S. now to oppose a nearly 1,400-year-old religion with a rich and consistent approach to religious, political, and economic questions through submission to God and tradition, one that wins hearts and minds through teaching and charity?

I am suddenly reminded of two ancient examples for ordering the world to your liking: Athens and Rome.

Athens was a city-state, first and last. Yes, she joined with other Greek states to oppose the Persian invasion, because her citizens would rather be free and Athenian than wealthy and protected as vassals of a distant King of Kings. Yes, she formed her own tiny empire in the Delian League, consisting of a circle of Aegean city-states, but that was a protective measure and a way to pick up some tribute. Yes, she founded colonies in southern Italy and Sicily, but they were not closely held, being as distant as moon bases would be today. Yes, she fought a long and complicated war with Sparta over issues—similar to the U.S. opposition to the Soviet Union—of totalitarian control versus personal liberty. Yes, she was technically a democracy, though one with a long history of military despots and tyrants.

Through all of these ups and downs, Athenians mostly cared about what happened in their own garden. The rest of the Greek-speaking world was sometimes ally, sometimes enemy, but not really important as any kind of equal. And the world beyond Greece was the realm of slaves and the barbaroi, people whose language sounded like “bah-bah-bah” and could safely be ignored. The important people and issues were found in Athens. For all their wars and alliances, the Athenians were isolationists.2

Rome started as a city-state, too. But rather than an enlightened community of scholars and philosophers, the earliest Romans were a camp of brigands who had to steal women from their neighbors the Sabines in order to start families. In time—actually about three or four hundred years—Rome acquired the tribes of central Italy as vassals, then allies, then as members of something approaching a nation. She fought a trade war with the Phoenician colonists of Carthage, suffered invasion, and hit back so strongly that Rome ended up controlling North Africa. From that point on, Gaul and Spain, Greece and Syria, Egypt and Judea, Germany and Britain were just steps along the path to empire.

Unlike Athens, Rome was out to civilize the world. Conquered peoples became colonies, then allies, then part of the state. Romans looked down on the newly conquered barbarians, to be sure, but they also offered them a path to citizenship and equality: build up administrative centers with aqueducts and a municipal water system so you can bathe properly; learn Latin so you can speak and read intelligently; build roads across your land so we can move our armies to defend you against the barbarians on your far borders; learn our administration; supply men for our legions; become one of us. The offer was genuine. The old Roman families might snigger at Gauls in togas taking their seats in the Senate, but they didn’t kick them out.

To the Athenians, the threat of Persian domination and Spartan competition was a reason to resist and turn inward, to build long walls and Aegean alliances for defense. To the Romans, the threat of Phoenician competition was a speed bump on the path to ordering the entire Mediterranean world and most of Europe to their liking.

The United States of America spent two hundred years as, first, the various colonies of England and Spain, then another hundred years establishing a republic and taming the plains, the great northwest, and the Pacific Coast, and a further hundred taking our place on the world stage as the cleanup hitter in two world wars and winner of the Cold War. We’ve entered the period that Athens and the Greeks were facing in the fourth century BC and the Romans were facing in the second century BC. We have pretty much gained control of the local environment and are reasonably free from neighborhood squabbles and foreign invasion. The big world outside now confronts us: Asia represents a competitive threat based on cheap labor, loose laws, and political corruption; fundamentalist Islam represents a return to medieval religiosity and cultural darkness; Europe still struggles with the residues of war and socialism.

What’s it going to be for the United States of America? Do we turn inward, like Athens: tend our garden, dismiss the “bah-bah-bah” people as unworthy of our time, and cultivate the arts of civilization? Or do we turn outward, like Rome: remake the world in our image, teach the barbarians to bathe and speak clearly, and cultivate the arts of war and administration?

I don’t know the answer. But I think I can recognize the question.

1. In this case, “the west” must include the Soviet Union and its Russian inheritors, as a legacy of Cold War meddling.

2. Until, that is, they and the rest of Greece were taken over by the Macedonians, whose young prince went adventuring and created a Greek-speaking empire from Egypt to India, overrunning and smothering the once-feared Persians. But this was never the Athenian way.

Sunday, February 6, 2011

The Value of a Liberal Arts Education

These days the liberal arts curriculum and especially a degree in English literature are treated as objects of fun—“And the English major asks, ‘Do you want fries with that?’ ” They are considered a kind of intellectual thumb-twiddling and waste of four years. But when I went to the university in the latter half of the 1960s, a degree in the “humanities” still had some respect.

We were following a tradition, laid down three or four generations earlier, that a university career spent reading history and literature, with an overview of philosophy and the sciences, gave a person an understanding of human nature and civilization. Such a “well rounded” person was prepared to administer the higher functions of business or government: seeing and interpreting complex issues, relating them to historical cases, applying insights about human goals and emotions, responding with intellectual integrity. The liberal arts degree was not the sum of a person’s education, but instead prepared the mind for a lifetime of further development through reading, thinking, and intelligent conversation. Not an end point but a beginning.

The curriculum was already slipping when I inherited it. On acceptance to the College of Liberal Arts, I was mailed a list of books to be mastered between my high school graduation and matriculation as a freshman. Included were such light reading as The Bible, The Iliad and The Odyssey, Das Kapital, The Decline and Fall of the Roman Empire, and so on.1 About a year’s worth of deep study to cram into three months that were supposed to be fun. Needless to say, I didn’t read a tenth of the books and my classmates probably not even that.

The texts we used in class, when they cited examples from Greek philosophy and Roman history, used English translations from the classics.2 I noted that older books, prevalent in the generation before mine, offered the original Greek or Latin in the text with the translation only in a footnote. And in my grandfather’s generation, at the end of the 19th century, the author would give only the untranslated original: if you didn’t read Greek or Latin, that was your fault not the author’s.

But the last years of the 1960s were also the flowering of the antiwar and civil rights movements and the rise of campus radicalism, which demanded “relevance” in the curriculum. Why teach all that ancient, dead, white history when the important history is being made right outside the door? We were also seeing the rise of a new “scientific” approach to the humanities through curricula like sociology and psychology, which tried to reduce the study of human nature and civilization to a set of rules rather than let the student form his or her own understanding from wide reading of literature and history.

At the same time, university education was taking on a distinctly vocational tone. Literature and history were for teachers.3 Those who wanted to write were sent to the school of journalism. Modern business managers went to the school of business administration. Modern government workers went to study political science, which soon got its own school of public policy. These prepared you for a profession in the same way that pre-law, pre-med, or engineering prepared you for a technical career. Skill at reading and writing—let alone thinking—was relegated to a few elective courses.

In my own course of study, English literature, the “scientific” approach developed into a form of textual criticism known as deconstruction. Although I never studied deconstruction—which flowered a decade or more after I graduated—and never hope to, I have some understanding of its principles. In my generation, we worked under the “New Criticism,” which treats the book as a “found object” and focuses on the text itself. What the author has to say about the work as the creative force behind it (“Now what I really meant to do here …”), and explanations of how the book might apply to the wider world, were ruled out of court. The key point is what the author has actually put on the page and how the reader responds to it.4

Deconstruction, as I understand it—and it should be noted that I’m hostile to the whole notion, and so may state it wrongly—goes well beyond the New Criticism’s hands-off approach to meaning. The work and the meaning of the very words it uses are the creation of a mind fixed by the socially determined values and perceptions dominant at the time of creation. Similarly, the work was received and interpreted in its own time by minds fixed by those same values and perceptions. So the work, any work, is an “historical object,” fixed like a fly in amber.

According to deconstruction, there are no “universal truths” and no “great books,” like the ones on my college reading list, because every book is dependent for meaning on the society that created it. Shakespeare does not rise above the Elizabethan court’s politics and manners. Homer does not rise above the Bronze Age warrior ethos. We might learn a bit about those societies from their works, but even then, because of these hanging veils of perception and value that cloud the meaning of words, we won’t really understand them.5 And we certainly won’t learn anything universal about human nature and civilized action. All books are flawed at their birth.

The trend away from universal human understanding goes further. If you believe that the “scientific” methods of sociology and psychology establish a modern and untainted view of human nature and social forces, then you don’t bother to go back to history and literature to learn and know about people. Since all books are flawed, you think you will obtain the greatest insights from those created in the time and place of your own experience: current literature and politics.6

I believe this premise is flawed. Human minds are not isolated, locked away in the air-tight bubbles of their society’s limited views and values. (Of course, someone who wants to manage other members of his own society like cows on a feedlot might find that isolationist view very attractive: people are a lot easier to manipulate if you limit the options for thought and action.) I believe humans and their developing natures form a continuum, from prehistoric hunter-gatherer, through agrarian village to hydraulic empire, with episodes of raiding and revolution along the way. Human minds and feelings haven’t changed so much in the last 5,000 years that life in the past is unknowable to us moderns.7

There are enduring thoughts. Not the only thoughts, to be sure: history and literature allow for amazing diversity, and the intelligent reader is invited to choose and so make up his or her own world view. But among the books that have been valued down through the ages, each has something to teach us. Oedipus is mankind struggling against fate.8 Antigone is the individual struggling against the state.9 It is easier to understand and remember these lessons if you know the story than by reading some dry, theoretical description of mental states in a psychology text.10

The premise of the old liberal arts degree was intellectual freedom. Read widely, think deeply, and make up your own mind about what it means to be human and live in a society. Proponents of the liberal arts saw themselves as the inheritors of and contributors to an enduring tradition. Proponents of the modern view see the end of tradition and creation of a new, universal, and everlasting “scientific” truth—one that they think they will control. But when you knock all the walls and building blocks flat, you become subject to strange and uncontrollable winds.

1. Notice all those “the’s” in the titles? These were books distinguished by their status as singular monuments to important thought.

2. I had two years of Latin in my public high school—and mine is probably the last generation to get this kind of education—along with four years of French and two years of Russian. In my last year at the university I took two terms of Greek as an elective and dropped it halfway through the second term: translating the opening chapters of Xenophon’s Anabasis (“And then we marched out …”) felt too much like the drudgery of Caesar’s Gallic Wars.

3. Actually, future teachers were sent to the college of education, where they studied pedagogy first and subject matter like literature and history second.

4. This fit well with my own experience of writing. At age 16 I had written a science fiction novel—475 typewritten pages—and was already familiar with the creative process. The author does not always intend what he or she creates. You write a passage that seems good and appropriate at the moment of creation, and then only later do you realize, “Oh! This action ties back to what happened in Chapter 2!” or “So the loss of the sword is actually a symbol of his failing confidence as a man!” You don’t always plan this stuff; it just happens in your subconscious. And so the book stands there, and the reader either gets it or not. The author is not the final authority about what there might be to see in the complexities of the story.

5. Consider the recent proposal for a revised edition of Huckleberry Finn that would replace the ubiquitous “n-word” with “slave.” The n-word has in our day become such an expression of scorn and hatred that no reasonable, fair-minded person can utter it. In Huck’s day, which was the childhood of Mark Twain, it was a simple descriptive. (Consider also that the n-word identified Jim by his skin color, which was a fact of life, while “slave” would imply the permanence of a condition that Jim wanted desperately to change.) Huck would have been amazed at the emotions the n-word has accumulated by the early 21st century. Language does indeed change. Words do slip around. My point is that sensitive and thoughtful readers can pick up on these changes with relative ease and proceed to an understanding of the work.

6. Or, as one irate audience member said at a panel discussion I once attended: “What other basis for a story is there than class war, race war, and gender war?” What indeed—if you happen to believe that human beings are only little wind-up dolls trundling around the racetrack of current political thought?

7. The current intellectual climate is even less certain about what is and is not knowable. Many people believe there is no unified society at all, but only affiliations based on gender, race, ethnicity, and class. They believe only women can discuss and write about feminist issues, and that even when these matters are explained by the best female writers, no man can really understand them. Similarly, no one can speak to or understand the black experience but another person of African-American heritage. Such willful blindness creates a rich field for antagonism.

8. If you don’t know the story of Sophocles’ play and know only of the Freudian analogy, you might think Oedipus hated his father and was sexually attracted to his mother. Nothing could be more wrong: when Oedipus found out how ignorance of his parentage had led him to kill his father at a crossroads and marry his mother upon coming into town, he gouged out his eyes in horror and despair.

9. I once had a coworker whose college education came about six years after mine. I made a passing reference to “Antigone,” and got only a blank stare from him. “Sophocles?” I prompted. “The Theban plays?” And he just smirked. Reading these works—or at least reading about them in a survey course—was part of bonehead freshman English when I was in college. A dozen years later, they had been whisked away as part of the despised “dead white” tradition.

10. See “Hungry for Stories,” from December 19, 2010, in Various Art Forms.