Sunday, January 29, 2017

The Insurrection of 2017

I’ve written about the changing political winds before.1 And the events of the past couple of months have shown that this is definitely an historically interesting time.

Since the election on November 8, we’ve had sporadic street protests in various cities, sometimes attended by breaking glass and burning cars. The same thing happened on January 20 while the new president was inaugurated several blocks away. And the following day, January 21, we had what many sources are calling the largest demonstration in U.S. history. The multiple events in cities around the country were called for and primarily attended by women, but many sympathetic men were also there. Estimates include 500,000 marchers in Washington, DC; 750,000 in Los Angeles; 250,000 in New York City; 200,000 in Chicago; 145,000 in Boston; 130,000 in Seattle; and comparable numbers in smaller cities. If these estimates are correct, we may have had as many as three million people on the streets of this country protesting the projected policies of the new administration. That approaches one percent of the U.S. population. And sympathy marches of nearly equal size took place elsewhere in the developed world.

Even if these numbers are wishfully overestimated, even if the protests were joined by many people who just went out for a sunny day among friends, that is still a huge effort in communication, coordination, and logistics. The images, the numbers, and the statement of public discontent are sobering. I am reminded that the Tsar of Russia fell after fewer people—but perhaps including more discontented workers and distraught soldiers—appeared on the streets of Moscow and St. Petersburg.

I am also reminded that the last election was close—hardly a mandate for the new president. The number of people appearing on the streets on January 21 roughly equals the edge in the popular vote won by his opponent—although it would be fatuous to say that these people represent, one-for-one, the number of Americans who now feel disenfranchised. The new president won because of the way our governing documents and our representative democracy are structured.2 His election was the result of some brilliant campaigning, sharp calculation of electoral votes, and a raw emotional message. Add to that a relatively weak message from his opponent—other than “I deserve this”—and embarrassing communications that were leaked from her campaign.

As a country, then, we seem to be sitting on a knife edge in 2017.

For half the country, the last eight years of a Democratic administration have been about moving the nation towards the sort of regulatory state and social democracy practiced in Europe, Asia, and much of the rest of the developed world. The promised “fundamental transformation” has been toward a larger role for government in the economy, expressing direct concern for people who cannot or will not compete against their fellow citizens for their share of the “American dream.” The focus has been on providing safeguards for poor people, various disadvantaged and marginalized groups, and the environment itself and protecting them from the uncaring and undisciplined practice of free markets and a capital-funded industrial sector. For this half of the country, generous welfare benefits, redistribution of incomes, and centralized control of the economy are all positive goals. And what has been achieved so far is a start, but it must continue to move forward before everyone is safe.

For the other half of the country, the last eight years have been a rolling back of what they see as the strengths of America. Regulatory control by centralized bureaucracies increases the costs of doing business and throttles the initiative and creative spirit of the entrepreneurs who have built the strongest, most innovative, freest economy in the world. The focus on welfare rather than opportunity creates a downward spiral: as the average person becomes unable to participate—or indifferent to participating—in the economy, fewer goods and services are created and provided, and fewer people can afford to purchase and consume them. The centralized controls engaged by the executive branch represent an unearned place in the productive process, where politicians get to exercise their animus against certain industries or else create skewed incentives to reward their political donors. For this half of the country, greater economic activity, popular skill building and labor flexibility, and the expansion of product lines and choices are all positive goals. And this “natural” state of things has been under attack in the recent Democratic administration.

In international affairs, half the country wants the United States to accept a reduced role and diminished influence, becoming no different from or better than any other country. In this role, we have no need for a large naval fleet, a readily deployed military force, or nuclear capability. The other half sees the United States as the paragon of strength and fairness, inheritor of the fundamentally free and benign Western Tradition, and needing a strong military to project power in its role as “the world’s policeman.”

In cultural matters, half the country wants to erase the old definitions of and relationships between male and female, parents and children, artists and their viewing/reading/listening public, the rich and the poor, the weak and the strong. They reject the “melting pot” of America in favor of a racial and ethnic “mosaic.” They want to redefine and thereby control human nature itself, in order to create a fairer, more just, more equal society. The other half views these attempts at redefinition and revision as both frivolous and dangerous, and considers them a weakening of our national character.

In scientific matters, half the country wants the other half to become more scientifically literate, to base its policies and programs on good science and hard evidence, and to beware of science “denial” and perpetrated “hoaxes.” The other half wants the same thing. It’s just that no one can agree on where truth ends and where biased interpretation and unsubstantiated reporting begin.

For the last dozen years or so, I have heard, both in the alternative news media and in social media—less so in the mainstream media, but now growing there, too—the terms “culture war” and “soft civil war.” These are code words for all of the above tendencies. One half of the country wants progressive, collectivist, and socialistic adjustments made to the regulatory state that has been a long time building here, at least since the Great Depression if not before. The other half wants a return to more individualistic, unfettered, self-reliant, and self-determined approaches to our perceived problems—and this half also perceives far fewer problems with our society and economy in the first place.

In the last three months, I have begun to hear, in all the different media, the term “second civil war”—without the qualifying adjectives “soft” or “cultural.” In California, I also hear the word “secession” bandied about, and not by kooks.

In my own writing, I am no stranger to the idea of another civil war in America. In my early novel First Citizen, from 1987, I showed a possible breakup paralleling the civil wars of the Roman Republic, with factions based on competing economic and cultural spheres à la Joel Garreau’s The Nine Nations of North America, which was published in 1981. And in my two-volume novel Coming of Age, from 2014, I show the country falling apart along the lines of coastal states vs. central states when the international holders of the U.S. national debt call for an accounting.3

What I’m sensing in the often-violent protests that have periodically broken out since the election, and now in the peaceful but determined march that took place on January 21, is a further hardening of these viewpoints. Half the country wanted the changes that took place in the most recent Democratic administration, and the other half voted just as strongly in the recent election to slow, halt, or abolish those changes. Maybe we will have another eight years of rollback and consolidation of the old economic and cultural conditions. And maybe after that will come a renewed push toward more social and economic change. Maybe we will teeter on the knife edge indefinitely.

Or maybe things will come to a head much sooner. The protest at the Democratic National Convention in Chicago in 1968 drew perhaps 10,000 participants. They were focused on two issues: ending the war in Vietnam and voicing the discontents of the new “counterculture.” The keynote of that protest was “The Whole World is Watching.” Now we have two to three million people protesting in the streets—peacefully on January 21, thank goodness—and the whole world seems to be joining them.

I don’t know how this can be reconciled peaceably. Maybe half the country will just get tired and quietly accept a new status quo—in one direction or the other. Maybe these January 21 protesters did just come out for a sunny day among friends, and maybe other protesters at other times have come out simply for the animal joy of screaming their lungs out, breaking windows, and torching parked cars. Maybe one side or the other will finally admit that they’ve been foolish, wrongheaded, and stubborn all along … but I doubt it.

I begin to entertain the notion that the insurrection has already started, and that it will end in blood.

1. See Something Happening Here from May 8, 2016.

2. Yes, the Electoral College keeps us from having a simple democracy and appears to block the “will of the people.” Every party that wins the popular vote and loses the electoral vote makes this claim. But the time to pass a constitutional amendment changing the rules would be well before the next presidential campaign season starts. Party organizers and campaign managers on both sides tailor their planning and efforts towards the realities of the Electoral College. If the winner in 2016 had been decided by a straight popular vote, both candidates would have campaigned very differently. It’s not too much to say that neither one would ever have left the hot spots in California and New York. And people in places like New Hampshire and Iowa might as well have stayed home and not voted at all. No one really wants that.

3. And that scenario still haunts me. A debt of $20 trillion will have to be paid back somehow, and I’m betting that at least half the country will balk at scrimping and sacrificing in order to repay it. Even if the government decides to make the debt disappear through a roaring, Weimar-style inflation, half the country may still find its voice to object.

Sunday, January 22, 2017

A Money-Making Enterprise

This is another rant, inspired by a fellow novelist’s observation that good editors in traditional publishing—the sort who can help you take your book apart and put it back together again, let alone catch typos and correct the grammar—seem to be in short supply these days. And that got me thinking about the current state of the arts in popular culture.1

I recently saw the 2016 sequel to the 1996 movie Independence Day, this one subtitled Resurgence. I really liked the first movie, have watched it many times, and still enjoy the visuals, the characterizations, and the snappy dialogue. But, after sitting through the sequel, I was stunned when the credits showed three people involved in the “story” and “screenplay.” The movie had almost no story—or at least no new story. It was, in sum, an uninspired gloss of the first film, with cameos and throw-away lines from the original actors reprising their characters, as well as dull portrayals by new young actors playing their supposedly grown-up children. The new alien ships were so much bigger and badder, and their actions so haphazard, ludicrous, and almost unexplained, that it was clear the director, Roland Emmerich, told the CGI department to go have fun and not bother adhering to any script. The entire movie was just a blitz of imagery and walk-on acting without any focus on telling a succinct and involving story.

Why is this relevant to books that don’t get the editorial love they deserve? Because I know that the people responsible for the Independence Day sequel knew they had a bankable property and they didn’t have to care much about engaging the audience’s full attention or respect. They weren’t out to tell an interesting story. They weren’t intending to make any kind of art. They were intent on making ninety minutes of passable scenery and recognizable characters that would draw boobs who had liked the first movie into theaters and then not actively disgust and disappoint them—as, say, an hour and a half of blank screen or a play performed with finger puppets might have done. The filmmakers had nothing new to say, show, or share, but that didn’t matter, because the fame of the first movie was going to sell it for them.

The J. J. Abrams treatment of the recent Star Trek movies works on the same principle. And I think a lot of editors handling the manuscript of a famous and bankable author are working from the same mindset. “It doesn’t have to be good. There’s a built-in audience for this stuff. They’re fools anyway. So this book or movie just has to not be terrible.” In other words, this enterprise isn’t about art or imagination of any kind, it’s about packaging a two-hour film clip or a wad of paper filled with black marks that will be “good enough” for commercial purposes. It’s a money machine, not an artistic endeavor. Get the butts into the theater seats. Get the boobs to pick up the book or DVD and take it to the register.

It may not always look that way, but in my own writing I will often spend a good ten minutes—sometimes much longer—working on and worrying over one verbal image, sentence, or paragraph. I am trying to get the meaning, the tone, and the flow just right. Sometimes these things simply come out of my fingertips and onto the screen as I type. Sometimes I have to sweat for them. But I’m not satisfied with a book and won’t let it go out to my readers until every scene fits—at least according to my sense of the story—and every image and line of dialogue strikes the right gong note—at least to my particular ear.

When I worked at Howell-North Books, which was self-consciously a money-making operation, we still spent time and effort trying to create good books that would satisfy our readership, who were variously interested in railroad histories, steam technology, California history, and Western Americana. We were choosy about selecting our manuscripts. And I was given all the time I needed to edit and polish them, sometimes taking apart the work of non-professional writers and putting it together again to make an easily readable and intelligible story. Mrs. North—the company’s president, who was also our expert at page layout—would spend days over layout sheets with her pica rule and sizing wheel, creating the finished pages with an eye to flow and fit between text and photos. We all read galley proofs twice, went over page proofs line by line, and inspected every cut and mark on the blueline proofs2 to make the books as flawless as possible. We respected the readers who would buy our books and wanted to make each volume meet their expectations, even when we were publishing the second or third or later book by a successful author. We knew that if we produced anything half-hearted, or started cynically playing on a big author’s following, we would lose customers.

These days, I think, the publishing and moviemaking empires have become much more dollar-driven, and more cynical about the taste and expectations of their buyers. We still have the occasional gem. But most of what gets produced is a slick wrapper around a neglected product. Their motto isn’t “Let them eat cake,” but “Let them eat stale Ding-Dongs.”

But then, crass commercialism has been the order of things among lesser lights in New York and Hollywood over the past century. For every Edgar Rice Burroughs and Louis L’Amour who came up with something new and exciting in popular fiction, there have been thousands of volumes, millions of pages, of “dime novels” and “pulp fiction” that were published with no other purpose than to coax those dimes and dollars out of readers’ pockets. Wads of paper filled with black marks.

For every big-budget movie—or “tent pole” in the current marketspeak—with name stars which might become a classic, there have been thousands of “B movies” set in noir New York or Los Angeles, or in the Old West, or in outer space on Planet Mongo, where actors who would never be stars spoke forgettable—or laughably embarrassing—lines while dressed in cheap costumes in front of papier-mâché sets as the cameras rolled. Millions of feet of celluloid dedicated only to getting butts into theater seats.

Whenever I start to think this way, however, I remember and invoke Sturgeon’s Law: “Ninety percent of science fiction is crap. But then, ninety percent of everything is crap.” And I add Thomas’s corollary: “By the crap shall you know the good.”

1. For further thoughts on the writing process, see the email exchange between myself and a former colleague who also writes novels in Between the Sheets: An Intimate Exchange about Writing, Editing, and Publishing.

2. The blueline is a photo proof of the stripping process, which puts together the bits of negative film representing text, screened images, hairline rules, page numbers, and everything else that will appear on the finished plate for printing. These days, the blueline has been replaced by a PDF of the final layout from a software package like Adobe’s InDesign.

Sunday, January 15, 2017

True Leadership

For a while when I worked in employee communications at the public utility, I edited—which really meant researching and writing—a newsletter for managers and supervisors. The basic theme of the publication, at least in my mind, was the art of leadership. I believed then and still believe now that leadership is one of the highest of human callings. Its basic function is to perform work through the good will and participation of other people to achieve goals that could not otherwise be attained.

This definition immediately rules out the person in a position of authority who views his or her subordinates as merely helpers, hangers-on, or dependents. Such a person usually believes he or she has all the skills and knowledge necessary to achieve the goals, just not the time or energy to do so. During critical phases of the effort or at crunch times, such a person swats aside the subordinates’ hands and initiative and takes on the task him- or herself. This is not leadership; this is solo mastery.

Teamwork is a major part of leadership. But in its common usage these days, the word “teamwork” focuses on the responsibilities and attitudes of the team members. Teamwork is considered to be a communal quality, arising from the actions of participants who subordinate their own interests, ideas, and energies for the good of the group. Teamwork is usually characterized as a kind of sacrifice, where highly competent people stop working for themselves so that others may prosper equally. Thus conceived, teamwork is supposed to be an antidote to competition among members of the group. For example, in a sales department, competition would have each sales rep trying to contact the most customers and ring up the most orders, so that he or she could win the most commissions. In this environment, stealing customers and failing to transfer calls would be a winning strategy. The commonest form of teamwork, on the other hand, would have the sales reps sharing their leads, passing off calls to each other, and going out of their way to satisfy each customer, even if someone else on the team got the credit and the commission.

Teamwork may or may not be a better spur to good effort than competition, depending on how the teams are structured, how incentives are distributed, and how the group’s values are stated and enforced. Still, the usual notions of teamwork are that it somehow arises on its own, out of the good will and creativity of the group members. But that structure and those incentives and values do not simply float around in the air, waiting to be applied. Someone must take a hand in creating, proposing, and enacting them. That person is usually the unidentified and unrecognized member of any team’s story, its de facto leader.

The leader may be someone in a position of authority over the team. Or it may be an individual on the team who senses the existing group dynamic; sees opportunities for improvement; voices a new structure, relationships, and values; and then advocates for them with the rest of the group. In this non-authoritarian position, the leader can do the necessary structuring and value creation, but he or she still cannot revise the incentive program—at least not in a business setting—without recourse to and buy-in from upper management.

The leader who is also in a position of authority might simply order the new structure and announce the new values—but he or she would be a fool to do so. Perhaps forty or fifty years ago, the industrial and commercial culture of this country favored top-down, command-and-control leadership. This was probably a hangover from the previous forty years in the 20th century, which endured two world wars separated by, first, a decade of wild economic success and, then, a decade of economic collapse, precipitating a more robust and authoritarian form of leadership.

This top-down leadership style could work in an organization which, like the U.S. military, had a mostly captive workforce.1 The expectation in business and industry through the late 1940s, ’50s, and ’60s was that an employee joined the company or the union for life, looked to the organization to provide not only work and pay but also health benefits, scheduled vacations, regular advancement, moving allowances, and a pension upon retirement. In return, the employee performed whatever job he or she was told to do, did not moonlight or freelance, relocated to another part of the country or overseas when asked to, and offered the organization his or her unfailing emotional support and allegiance.2

But along about the 1970s—and certainly in full swing by the ’80s—a new style of employee was created, mostly from the pages of bestsellers by strategy gurus and management consultants. The new employee was not supposed to simply take orders but to anticipate them, foresee opportunities and directions that would benefit the company, and pursue them with the blessings of management. The new word was “entrepreneurial,” and in that guise the average employee in the average position within the company was expected to exercise the eagerness and foresight of an Andrew Carnegie, a Hewlett or a Packard, a Wozniak or a Jobs. But, where the true entrepreneurs of industry were usually following a hunch or a dream, operating on a shoestring of finance, and working without guidance on a venture that would all too likely fail, the corporate entrepreneur was still working within a defined product or service area, on an annual budget, and with plentiful if not mandatory guidance on a venture that had better not fail.

This situation was, of course, unstable. So along about the ’90s—and growing through the aughts and teens3—a newer style of employment was created, characterized by the paradigm “Me, Inc.” This employee was usually not actually hired by the company but worked as a contractor or temporary staff supplied by an agency. This employee had no expectations of the company which actually needed the work to be done—not continuing employment, not advancement, benefits, or retirement. And those people who were still formally employed by the company were understood to be working “at will”—which meant they could be laid off or fired immediately and without cause. These formal hires also received from their employer a “defined contribution” to a personally managed retirement account, rather than the “defined benefit” of guaranteed retirement at a certain age with a certain residual income.

Leadership in the era of Me, Inc. is a different proposition from that in the top-down era. In this new work environment, the leader becomes less of an authority figure and more like the individual team member who sees opportunities, proposes solutions, and enlists the participation of others in trying them out and making them work. This kind of leader does not give orders except in unusual situations or from dire necessity.4 Instead, he or she points out necessities and opportunities in the organization’s current situation or the economic environment. And rather than propose solutions directly—as if he or she possessed all the answers—the leader invites others on the team to come up with the ideas. The delicate step, then, is for the leader to guide the discussion of options and force the group into realistic appraisals, so that appealing but harebrained notions don’t capture the group’s imagination and let people run away into foolish or reckless actions. The leader stays fixed on the hard and indisputable realities of the situation, rather than making appeals to authority—which always, in the end, come down to “because I said so.”5

Letting others devise and implement solutions is a form of delegation. The good leader delegates where appropriate—meaning once the subordinate has been prepared with the organization’s and the leader’s values (“What’s important around here”) and standards (“How we do things” and “What’s acceptable around here”). Setting values and standards is probably the biggest part of the leader’s job. A “natural” leader, if there is such a thing, has both a feeling for group sentiment and group dynamics and the capability to appeal to—and direct the group toward—a higher vision. That vision might be one involving morality, fairness, efficiency, personal honor, or some other good. The vision is almost always positive (“Things work better if we do it this way”) rather than negative (“You’ll get in trouble if you do it that way”).

With a positive vision, the leader aligns him- or herself with the belief that most reasonable people want to do the right thing, and most employees want to create a satisfactory product or service experience. Every job and every market sector or political function has its own canon, whether written or unwritten, of acceptable practices and work product. People who have chosen a career or a position on their own—rather than being dragooned into or enslaved by the organization—already have notions about what is the right and proper way to act and to do the job. The leader works within those canons and notions, rather than against them, and builds on or shapes them to fit the particular task at hand.

Leadership, like much else in this life, is an art form. It is a blending of personal force with perceptive deference to the ideas and opinions of others. It enlists the motives, creative potential, and dreams of the team members. And it works best when the leader is positive, relaxed, and confident—even when he or she might not actually feel that way. True leadership is the highest expression of personal strength and capability.

1. But at the highest levels of military command, the good leader is not always a top-down order giver with his or her immediate staff. Soldiers on the battle line are expected to follow orders implicitly and without question, but the headquarters personnel who originate those orders and the colonels and majors—or, at sea, the captains and commanders—who must execute them should always be given the freedom to offer suggestions and then to exercise initiative in acting upon them. A good general or admiral invites comment and criticism, within bounds, to elicit trust and participation.

2. Many employees also met their romantic interests, significant others, and future spouses in this environment. They would even, at the company’s prompting but without irony, consider themselves to be part of “the XYZ Corporation family.” Work represented a cultural as well as an economic proposition.

3. And the trend was further exacerbated by the employment conditions spelled out in the Patient Protection and Affordable Care Act of 2010, which put economic pressure on employers either to provide more comprehensive medical benefits or to limit the scale of their employment.

4. To quote from Frank Herbert’s Dune: “Give as few orders as possible. Once you’ve given orders on a subject, you must always give orders on that subject.”

5. No one liked hearing that line of reasoning when Mother or Father used it with them as a child. No adult really likes to hear it now.

Sunday, January 8, 2017

Between Perception and Reaction

We have a small dog, a terrier-mix rescue named Sally, who has separation anxieties. If we leave the apartment for even a few minutes, she will be up on her hind legs, waggling her whole body, and smiling1—not to mention pawing and licking—when we return. If we leave for a couple of hours, the greeting process is longer and more energetic.

Since this is California and it never gets really cold—not by East Coast standards—and because my feet often get hot, I usually wear sandals2 without socks when we go out. After years of wear, my sandals are a bit loose and tend to slap against my heels as I walk down the hallway to our apartment door. But even before I’m halfway there, I can hear Sally dancing and whining on the other side of the door.

All this got me thinking. She hears the sound of the sandals slapping. She knows from experience that this sound heralds the joyous experience of her “big guy” returning home and ending her loneliness. So … familiar aural stimulus equals predictable emotional response. At some level, a human being might have a similar reaction. You hear the jingle of keys in the hallway, you know your wife is home.

But a human being—at least during the first or second time of receiving this stimulus—would interpose words between perception and reaction. The human brain would automatically ask, “What’s that sound?” The mind would then sort through comparisons in memory and come up with not only a mental image of jingling keys but also a word, “Keys.” And from that follows the thought, in words or perhaps just in images and sense memory, “My wife.” We humans are such verbal creatures—made so by an environment that showers us with spoken and written words; with captioned images in our books, magazines, and even our advertising;3 with vital information spelled out on warning signs and labels;4 and with demands that we respond aloud or in writing to specific questions—that supplementing our thoughts with words is second nature to anyone over the age of six.5

I know Sally understands some spoken words. At the appropriate time in the evening I might say casually to my wife, “Do you want me to take the dog?”—meaning but not bothering to add, “out for a walk?” Sally will immediately lift her head and begin dancing. She knows “take” and “dog” are associated with the worship-words “out” and “walk.” Our previous dogs could even understand what we meant when we spelled, “T-A-K-E,” and I’m sure Sally will graduate to interpreting spelled-out words one day soon.

But spoken words and spellings are still just learned stimuli in the dog’s brain, like the sound of flopping sandals and jingling keys. Or rather, I’m almost sure of that. The dog may associate them with memories of the humans coming home or taking it outside, and these memories may be connected with visual imagery and, probably, scent cues for the imminent and enjoyable experience of sniffing the bushes. But I don’t think that the dog, when it wants to go and relieve itself, supplies the word “out” or “walk” from its own recalled memory, as a human would. When a human feels a full bladder, he or she will often think and even say, “Gotta find a bathroom”—even if no one is nearby to receive this timely information.

Supplying words as an intermediary step between stimulus and reaction enriches and modifies the human experience. For a dog, it may be enough to hear [jingle] and think [returning-human-happy-happy]. For a human, the mental insertion of the word “keys” can lead to other thoughts. A husband may remember that his wife had left her keys on the counter that morning, and so someone jingling keys in the hallway must be the occupant of the apartment across the way returning home, not the wife—or it could be a stranger trying the lock on the door. When confronted with visual, aural, or tactile cues for which the brain has no learned referent, the dog will either ignore the stimulus or become confused. The human will sample and compare past cues and fit names as well as images to them. The process will draw on knowledge acquired from past training, through reading as well as from direct experience, to identify the cue and decide whether it is a cause for reassurance or a threat.

This verbal dimension of human thought allows us to categorize and compress information. The word “key” encompasses many meanings: the toothed metal probe used for aligning the tumblers in a lock; the coded list of references on a map; the text used as a starting point for solving a cipher; the charm or plaque used to identify a fraternity or sorority; as well as visual images of my household keys, my car key, my wife’s keys, the huge iron keys used in medieval locks, and the diamond-studded charms sold at Tiffany & Company. With all these meanings associated with one word, the human brain becomes a field of rich connections. We are not limited to simple, singular mental connections like [familiar-jingle] equals [return-happy].

These word associations give power to particularly human activities like storytelling and poetry. A word captures a number of visual—or aural, tactile, and other sense—images that cascade through the mind of the listener. The storyteller uses these images to put listeners or readers inside the scene and make them part of the action. And the wonder of it—from my point of view as a novelist—is that the associations I make with a particular word can be trusted—most of the time, for most of the population—to arise in the minds of those who read my stories. Of course, there are differences. The word “clown” for most people has happy, funny, or outlandish associations, calling to mind red bulb noses, orange string wigs, squirting boutonnieres, and long, floppy red shoes. But for people with a morbid fear of clowns, the word gives rise to images of creepy things with leers and teeth.

I try to imagine a human being, a true Homo sapiens in mind and body, who lived in a time—which would be the majority of our line’s history on Earth, sixty or seventy thousand years or more—before the invention of writing and our hyper-literate civilization. Words, their meanings, and the grammar and syntax of language would then have been a private thing within the tribe or even isolated within the extended family: rock, path, pot, stick, and a dozen inflections for words describing weather, game, edible roots and berries, and the ways to hunt and gather them. The tenses to describe action in the past or future would have been simple, with little need for the pluperfect or the subjunctive. I try to imagine a hunter-gatherer expressing “By this time tomorrow, if it doesn’t happen to rain, I will have tracked and shot the deer I saw yesterday.” The Greek and Sanskrit aorist indicative, denoting simple action without reference to completeness, incompleteness, duration, repetition, or any particular position in time past or present—“I hunt. I fish. I pick berries.”—would reign supreme.

And yet, within a few hundred years after learning to cut cuneiform wedges into wet clay, or scratch angular letters on potsherds, the Sumerians were inventing and reciting the epic struggles of Gilgamesh, and the Greeks were telling a convoluted story of old wounds and grudges as the gods and mortals vied for supremacy at Troy. And today we read translations of these stories into modern English and marvel at the power and beauty of each word’s imagery and its associations.

In the human mind, the word itself has become the stimulus to a reaction. We do not need visual, aural, or other sense cues and perceptions from the outside world to spark an intellectual or emotional reaction. We draw the images, ideas, and emotions from inside our own heads, reacting to nothing more than black squiggles arranged on a white page or screen. We all live inside our heads. Our brains and their pathways have no direct contact with the outside world except through chemical nutrients, drugs, and poisons. So we each make up the world inside our minds from sensations fed from our eyes and ears, the taste and smell receptors in our mouths and noses, and sensors all over our skin. For the human of ten or twenty thousand years ago, that world entered the mind directly from all these senses. For a modern, literate human, the world can also enter from a single source: the eye and its trick of interpreting those squiggles inside the visual cortex.

And for me, that trick is a continuing source of wonder and mystery.

1. I never noticed this with our other dogs, but Sally smiles by lifting her upper lip over her front teeth. I always thought this was a dog’s warning, prelude to growling and snapping. But from the way her eyes squint and her body gyrates, she is clearly happy. I think this is something she learned from watching humans smile.

2. The Keen sandals have good toe protection, unlike Birkenstocks or flip-flops. Because of the way the sides wrap up and connect over the instep, a wargaming friend who is deep into Roman history calls them “caligae,” or boots, the Latin name for the legionary’s hobnailed sandals. And like the caliga, Keens even have sturdy, gripping soles with deep lugs.

3. I learned in the book-publishing business that, while a picture may be worth a thousand words, modern readers often have trouble understanding or giving full value to a picture without a caption to read alongside it. In a book about mountain climbing, for example, if you reproduce a photo of a beautiful, snow-covered peak, the reader will look around for a caption that tells the name of the mountain, elevation at the summit, and whether or when the author has scaled it. Even an advertisement showing a beautiful woman holding a perfume bottle, with the maker’s name clearly visible on the label, will repeat that name in bold type under the image.

4. In California, we have warning signs in English and Spanish. And just in case the viewer speaks only Cantonese or Vietnamese, they will include a stick figure demonstrating the danger. (The polyglot Europeans long ago did away with words on their traffic and warning signs in favor of imagery and figures, but in California as in the rest of America we persist with words.) My favorite stick figure, in a warning about overhead high-voltage lines, shows a person sticking a length of irrigation pipe up into the wires and dancing like crazy.

5. This poses special problems for people who are either deaf or dyslexic. But although they may not be fully capable of either hearing spoken commands or reading complex information with easy comprehension, they are not relieved of the human association between thoughts and words. By now, in the modern form of H. sapiens, it’s hardwired into our brains.