Sunday, December 27, 2015

My Problem with Math

All right, I was never very good at math. But I do have a pretty good sense of timing and proportion. When I set my alarm clock at night, I tend to wake up one minute before it goes off in the morning. I also have a good sense of physical spacing and, when I was working with layouts and typography, I could spot a misalignment on the order of one point—about a third of a millimeter.1 In high school, I was good at geometry, which involves a mixture of spatial sense and deductive logic. But algebra and its equations were mostly a puzzle for me, and quadratic equations were my downfall.

Still, I can appreciate mathematics. And I understand that the difference between our lives today and the world and its technologies in ancient and medieval times has a lot to do with applications of trigonometry, analytical geometry, and calculus. But sometimes I think our popular notions of logical proof and certainty go overboard when it comes to mathematics.2

First, mathematics is a human language. But it is unlike other human languages, which can be innately fuzzy. Spoken language is always changing: evolving its pronunciations, such as the Great Vowel Shift in English; contracting long words into shorter ones by dropping vowels, consonants, and whole syllables; trying to group word forms together to capture new ideas—the Germans are good at this—before chopping them down again; and tinkering with a word’s basic meaning and sense.3 By comparison, the unspoken and more academically based language of mathematics is precise and relatively unchanging.

But math is still just a language, useful for describing things and stating ideas. It helps to state an object’s velocity as so many miles per hour or feet per second, rather than just saying “pretty fast” or “very fast.” Mathematical equations are used to describe motions, transformations, and the actions of large groups, and so they are suited to express thoughts in physics and chemistry. More recently, mathematics has come to dominate certain biological studies such as genetics, cytology, and epidemiology. Most recently, mathematics in the form of numerical models—which are simply a series of equations joined together and set in motion over some arbitrary time scale—has come to describe and predict the action of complex processes such as macroeconomics and weather.
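That description of a numerical model can be made concrete in a few lines of code. The sketch below is a toy example of my own, the logistic growth equation stepped forward with Euler's method; it is not any real weather or economic model, but it shows the mechanism: take an equation, pick a time step, and set the whole thing in motion.

```python
# A toy numerical model: a single equation (logistic growth) set in
# motion over an arbitrary time scale with Euler's method. Real weather
# and economic models chain thousands of coupled equations this way.

def simulate_logistic(p0, rate, capacity, dt, steps):
    """Step dP/dt = rate * P * (1 - P / capacity) forward in time."""
    p = p0
    history = [p]
    for _ in range(steps):
        p += dt * rate * p * (1 - p / capacity)
        history.append(p)
    return history

trajectory = simulate_logistic(p0=10.0, rate=0.5, capacity=1000.0,
                               dt=0.1, steps=200)
# The modeled quantity climbs from 10 toward the carrying capacity.
```

The choice of `dt` and `steps` here is exactly the "arbitrary time scale" mentioned above: shrink the step and the model tracks the underlying equation more closely, at the cost of more computation.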

As a language, mathematics is merely the expression of an underlying human thought. The language of mathematics may accurately describe a situation that would be hard to express accurately and unequivocally in English or any other spoken language. It might be harder still to hold that thought immobile in any spoken language over a span of several generations, given the tendency of languages to evolve. Anyone who has grappled with the word choices and shades of meaning in Shakespeare or the King James Version knows this.

But expressing a thought or a relationship or a transformation in the language of mathematics does not prove the truth of the underlying idea. Languages—including mathematics—can express and describe, they can invoke and use logic. But by themselves description and logic are not a proof of anything.

Second, mathematics and its underlying logic may be misapplied. It is possible to make mathematical statements that may be arithmetically and perhaps even logically true, but that do not describe anything in the real world.

I can write a series of logical syllogisms or propositions about black swans and white swans that may be rational and lucid and have nothing to do with real birds. I can describe my trip to Los Angeles, not in so many miles, but in gas stops and bathroom breaks, or in the number of Starbucks® Frappuccinos® consumed between them. I can then perform complex transformations with these coffee drinks, but the computations still won’t get you anywhere. Lewis Carroll had exquisite fun with these kinds of cogitations in Alice’s Adventures in Wonderland and Through the Looking-Glass.

A great deal of modern physics—from the calculations underlying fusion experiments to the mathematical models underlying much of quantum mechanics, string theory, and cosmology—smacks of this sort of dealing. Schrödinger can describe the state of his cat in a box with a wave function showing the cat to be both alive and dead until the box is opened and the wave resolved. It’s a perfect metaphor—a language equivalent—for the unresolved and unknown state of a particle traveling through space, which can only be resolved and known by physical observation. But for anything so large and complex as a cat in a box with a bottle of cyanide, applying equations is ludicrous. The cat is either alive or dead—and in the former case, it knows itself to be so. The underlying idea of a cat actually suspended between life and death, although elegantly expressed in mathematics, is nonsense.

Similarly, in string theory, the unification of the four basic forces of physics requires that reality in its smallest measures—too small to be noticed in everyday life or detected directly by observation—be composed of multiple dimensions wrapped around themselves. It’s an elegant proposition and mathematically impeccable, but it’s also unprovable by physical experiment. The underlying idea, expressed in mathematics, leads nowhere.

Third, the application of a mathematical idea may not in itself be complete. A system of equations that tries to express very complex systems, like weather or currency movements or animal migrations, must make choices involving the number of variables to include and how to interpret and weight their actions and effects. The main choice is always between completeness—and so an increasing complexity that may approach an infinity of variables—and manageability and comprehensibility.

Here I am reminded of another type of modeling: the choice that a war game designer must make between playability and producing realistic feeling and results. Simple games with few exceptions and die-roll modifiers are the most playable, but they can become so abstract that one loses the feeling of battle and gets all-too-predictable results. Complex games that include many calculations for actual battle conditions—unit morale, lines of communications, visibility and cover, state of supply, weapons calibers and penetration values, windage, time of day, etc., etc.—are elegant and yet unplayable. A three-minute battle turn can take half an hour or more to play. But, in the other direction, consider chess, with its simple rule set, as a model of diplomacy and warfare: the play becomes so abstract that it no longer feels like either diplomatic maneuvering or war.

Fourth and finally, math itself is still a human creation. Some of its principles—the order of the integers, the proportions of the fractions, and oddities like pi and the golden ratio—would seem to be ordained by a non-human hand and descended from nature. But other principles, used in everyday equations, are the invention of human minds working against human-imposed limitations. The invention of imaginary numbers—for example, the square root of a negative number, where arithmetic defines multiplication of two negatives to always have a positive result—is an attempt to climb over this limitation. Or to identify a proportion that cannot be expressed as a ratio of two whole numbers, like pi or the square root of two, as irrational is just a species of fussy bookkeeping. The human mind wants the world to be flat and all fractions to express themselves simply—but it isn’t, and they don’t.
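For what it's worth, these human inventions are now baked right into our programming languages. A short Python illustration (the fraction 355/113 below is simply a famous rational stand-in for pi that I have chosen for the example):

```python
import cmath
from fractions import Fraction

# Ordinary arithmetic has no square root of -1; the complex numbers
# bolt on the invented unit i (spelled 1j in Python) to get past that.
root = cmath.sqrt(-1)
assert root == 1j
assert root * root == -1   # i squared recovers -1, as designed

# "Irrational" just means no ratio of whole numbers is ever exact:
# 355/113 is famously close to pi, but only close.
approx = Fraction(355, 113)
print(float(approx))       # 3.1415929..., near pi but never equal
```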

In the same way, the application of mathematics—especially when it comes to creating equations that map and study problems in economics and meteorology—is still a human activity. In that way, math is an expression of human ideas—not much different from creating a story or myth where all the parts come together neatly to satisfy the human psyche. But how many economic models have accurately predicted the state of the stock market or the national money supply at the end of the next quarter? How many weather models—absent satellite observations that can be measured and tracked on a map—can tell you exactly when the rain will start falling?

While mathematics may follow fixed, discoverable, and rational rules, its applications do not. Human beings can still fool themselves and tell half-truths and whole lies in any language.

“My equations show”—
be they ever so precise—
does not make it so.

1. And I’m a demon—practically OCD—about straightening pictures, rugs, placemats, and other things that can go crooked. I believe this comes of having a landscape architect for a mother and a mechanical engineer for a father. They ran a ruler-straight home with everything spaced just so.

2. This is a theme carried forward from past blogs. See, for example, Fun With Numbers (I) and (II) from 2010, as well as Fun with (Negative) Numbers from November 3, 2013.

3. By “sense” I mean the basic nature of the word. A word can go from ameliorative (tending to praise or find the good in something) to pejorative (tending to fault or demean) over time. An example is our English word “nice,” which used to mean “foolish” or “simple” and now means “pleasing” or “charming.” It can also mean “precise” or “perfect,” as in “That was a nice golf shot.” When my family moved to New England, I learned that “wicked,” rather than meaning “evil,” could be used to mean “extremely” or “cool,” as in “That was a wicked fast ball.” And so, of course, a wicked fast ball can also be a nice fast ball.

Sunday, December 20, 2015

Needs vs. Wants II1

I love what I call “cleavage questions”: those that split the issue down the center, like a diamond cutter, and expose its heart. And I think I’ve found one. What does each side in our political spectrum aim to achieve? People on the left side of the aisle aim to provide everyone with what he or she needs. People on the right side aim for everyone to have the opportunity to get what he or she wants.

And that leads immediately to the question of methods.

To provide everyone with what he or she needs, you must first determine what those needs are. Unfortunately, any mass of individuals so large, diverse, and unwieldy as a city, state, or nation cannot express any simple hierarchy of needs that will apply to every member.2 Even if you could query each member individually and create a huge database of individual needs, then fulfilling them in a timely fashion would be next to impossible. We can barely keep up—either administratively or financially—with providing the barest necessities to people in need under existing blanket programs like Social Security, Medicaid, food stamps, or general welfare. To go into those systems and reorganize their allocations based on individual preferences would be a bureaucratic nightmare.

So the definition of need is left to public advocates. These may be the administrators of existing fulfillment programs like Social Security, or the congressional representatives in charge of mandates and funding for these programs, or self-appointed public advocates who attempt to speak for the recipients of the programs. Given that all of these advocates are also human beings—and not artificially intelligent, imagination-neutral robots—they will tend to temper their recommendations with their own ideas of human nature, adequate sustenance, sustainability, reciprocity, and fairness. They are in a position to decide what other people need—and so they will decide. And self-appointed public advocates, whose pronouncements focus solely on the beneficiaries, are even free to ignore constraints like financial resources, program sustainability, and moral or economic parity.

It’s too easy, also, for advocates to attribute to the individual men and women in their charge the kind of brute stupidity and errant foolishness that only shows up in the movements of large, mindless groups. And so the public policy mavens will tend to override a simple statement of wants, such as “I like peanut butter and jelly on white bread,” with their own views about the allergy potential of peanuts, the sugar content of jelly, and the benefits of whole wheat bread. When you speak for others, it is easy to replace living, breathing people like yourself and those you hold near and dear with a kind of fiction: gray, faceless, nominal “people,” not too bright, with poor insight, limited education, bad habits, and no self-respect. This vision comes with many names: the “booboisie,” “hoi polloi,” “Joe Sixpack,” and “the people.” When your business is taking a statistical average among people you don’t know—and without doing any rigorous statistical work to begin with—it’s almost impossible not to aim low.

And so your recommendations as a public advocate tend to be colorless, drab, unexciting, and devoid of spirit. In this I am reminded of why I loathe the work of Consumers Union® and the evaluations and recommendations in their publications. All of their choices are for basic, no-frills, least-cost options that put a premium on safety, reliability, and sustainability and care not a fig for innovation, style, or charm. They would have every car be as exciting and intriguing as a refrigerator—if you didn’t particularly like your refrigerator and didn’t care for optional functions like automatic defrosting and ice making. Consumer Reports is a guide for people who buy things they need but don’t want and will use but not enjoy. These are products for people who’ve had the nerves connected to excitement, adventure, love, and occasionally coping with the quirky defects of innovative products burned right out of them.3

Further, when you have this low opinion of human nature based on an imagined “Joe Sixpack” or “the booboisie,” the temptation exists to try to fit all the square people into your rounded hole. You tend to teach to the lowest common denominator, inform and entertain at the lowest acceptable levels, and expect the people you serve to exhibit passive conformity instead of active individuality.4

On the other side of the aisle, when your aim is to provide people with the opportunities to get what they want, your methods immediately change. You are no longer in the business of polling everyone about their desires, because it is not your responsibility—from the standpoint of a single entity, organization, or government body—to provide for that galaxy of divergent wants.

You also assume that other people are not unlike yourself, in that they may have different interests, requirements of taste and color sense, thresholds of boredom and excitement, appreciative abilities, and imaginations. As you prefer pistachio ice cream over plain vanilla or chocolate, and would rather drive a fast and expensive car over a staid and practical one—all without thinking of yourself as either a weirdo or a jerk—you can imagine that other people will have desires and ambitions that lie outside the mainstream as well as outside your personal preferences. And you can accept their differences of taste and perception without pausing to consider these others as weirdos and jerks.

From this viewpoint, it doesn’t matter if people are innately smart or stupid. They are as they have been brought up to be, and their life situation is not your immediate concern. You are free to view people as varied and diverse: some smart, some stupid, some foolhardy and likely to lead short lives, some prudent and likely to live long, some elegant, and some slovenly. You are also free to pick and choose your friends and associates, and to hope for the best for your children. You will take people as you find them and not as you wish them to be.

This is because you do not adopt the mantle of satisfying either their needs or their wants. When you trust that other people may be about as smart as you are, then you trust that they will know the difference between a basic necessity and a needless frill. You trust that they will see to their own needs and those of their families first, and then spend their resources either on novelties and excitements or on saving for the future. Which they choose is their lookout and not your immediate business.

From this viewpoint, you know that people are also ambitious and creative—or at least a large fraction of them are. With those innate drives, and with a random mixture of skills and talents in the basic population, you can trust that people and organizations will arise to provide for needs and desires. Some will cater to the bland and boring basics, like whole wheat bread and bran muffins, while others will serve niche markets in artisanal jellies and Viennese pastries. Who arises to provide which need or desire is still not your business or responsibility.

Your business is to provide opportunity to access the goods and services available in a free market—not to provide the good or service itself. Your responsibility as an entity, organization, or government is then to guard against things that fall under the rubric of “restraint of trade.” In this role, you will certainly want to guard against monopoly power, because monopolists tend to ignore market signals and promote decreasing quality while increasing price. You guard against confidence tricksters and scam artists who sell promises not backed by actual goods and services. But it is not your business to sample and approve all goods or license all levels of service, because you know that the market will eliminate those who do not provide quality commensurate with price. And believing in the innate sense of your fellow human beings, you trust them to be a little bit suspicious and to subscribe to “buyer beware” as an economic principle.

When your aim is to provide for needs, you tend to believe in conformity and sameness. When your aim is merely to guarantee opportunity, you tend to believe in freedom and possibility. As someone with no faith in Utopia and with a basic belief that the world will spin as it spins in good times and bad, I cheerfully opt for the latter.

1. For parallel thoughts on this same topic, see Needs vs. Wants from April 5, 2015.

2. Some attempt at this was made in the 1940s with the Second Bill of Rights proposed by President Franklin D. Roosevelt. His “economic bill of rights” included employment and the right to work; food, clothing, and leisure; a farmer’s right to a fair income; freedom from unfair competition and monopolies; housing; medical care; social security; and education. Many of these “rights” have become the basis for later federal programs and departments of the executive branch.
       As others have already pointed out, the original Bill of Rights attached to the U.S. Constitution was about people being left alone to live as they wanted—free to worship, free to speak, to defend themselves, to be protected against unreasonable searches and seizures and against self-incrimination, and so on—which are rights that cost nothing to provide, except in terms of placing limitations on government prerogative. Roosevelt’s Second Bill of Rights, however, is about providing tangible assets, goods, and services—paid employment, food, clothing, a house, doctor services, insurance, teacher services—which all require someone else to provide them at either personal or government expense.
       And still these needs may not satisfy everyone. For example, my definition of employment might include a level of interest, achievement, and compensation—for example, I’d really like to be paid a million dollars to write an epic poem about the Apollo space program—that others are not prepared to provide under any conditions. And don’t even start asking about my needs for food, clothing, and leisure activities!

3. I am reminded here about the comment from Democratic presidential candidate and U.S. Senator from Vermont Bernie Sanders, that we don’t need twenty-three different brands of deodorant. Of course, we don’t “need” them if we all don’t care how we smell or if we all have the same skin conditions. The fact that twenty-three brands are viable in the marketplace should tell you something about human nature.

4. So the Soviet Union thought to engender a new race of Homo sovieticus, a conforming human devoid of initiative, ambition, desire, inquiry, or family feeling. A drab people to drive cars not even as exciting as refrigerators.

Sunday, December 13, 2015

After the Wheel

A Facebook friend recently posted a short video showing a team of horses pulling a stalled tanker truck out of a snowbank. The four horses were harnessed in parallel to two doubletrees, an elaborate amount of tack to have on hand for an emergency procedure. And the animals pulled well together, suggesting they were a team sent to the accident site from a nearby farm. The video was shot in Central Pennsylvania, which is small-farm country, but I would also have guessed somewhere in Eastern Europe—anywhere people might still work with trained horse teams. Certainly, our factory farms out West use tractors, and fairly large, high-tech, GPS-guided ones at that.

But all of this is reflection after the fact. What struck me in the moment of watching those four horses pull the front of the truck back around on the snow- and ice-glazed road was how surefooted they were. The horses didn’t need snow tires or chains, or salted roads, or even a smooth road surface, as our wheeled vehicles require. And that gave me a vision of the future—or one possible future.

The origin of the wheel as a transportation device dates back to Mesopotamia in about 3,500 B.C. This makes it a fairly late invention, compared to the domestication of crops and farm animals, and about contemporaneous with smelting of copper, one of humanity’s first metals. As suggested at the link above, inventing the wheel required a good set of metal tools to work wood, because you’re not going to make useful wheels out of quarried stone you’ve hammered into shape with some harder kind of rock. You need chisels and planes to fit and join sections of plank into a stable, round form, because you can’t just saw off the end of a log and get four wheels of almost identical dimensions, or not dependably. And you need drills to bore a hole for the axle, and a draw knife to shape the end of the axle to fit inside the hole. You also need to invent and affix a locking nut or cross peg to the outer end of the axle, in order to keep the wheel from working off and rolling away. That’s a lot more sophisticated than just shaping some round logs for rolling one of Pharaoh’s stone building blocks across a pathway of smoothed, compacted sand.

But the wheel you make also needs a fairly level, smooth surface to roll on—a flat riverbed or graded road. As any Jeep owner will tell you, driving across open country littered with chuck holes, downed logs, and medium-sized boulders is hell on wheel rims and tires. In desert conditions, you can probably stick to level ground and the occasional dry wadi. In the more hilly and wooded parts of the Mediterranean and Europe, the Romans built elaborate, paved roads so they could move marching men and supply wagons quickly to access conquered countries. In America, we pour millions of cubic yards of concrete and roll out tons of asphalt to maintain our roads, which lace together every household, town, city, and state. We nail or bolt millions of miles of steel ribbons onto wooden or concrete crossties embedded in gravel to guide the wheels of our railroads. We put a lot of work into all this infrastructure to keep those wheels of all dimensions and materials turning.

Interestingly, the indigenous peoples of North and South America—those who walked across the Bering land bridge during the last Ice Age—never invented the wheel and so had no need of roads. The Incas created a fairly sophisticated empire based on footpaths through the mountains. Other cultures used dogs for carrying and hauling their burdens—that is, until the Spanish brought horses into the Western Hemisphere and let enough of them escape to form wild herds that the natives could catch and tame.

The wheel is considered one of the six “simple machines,” along with the lever and the wedge.1 It has also forced our modern technology into a very specific pattern. The earliest source of non-animal, non-muscle mechanical power was the water wheel—which was the thinking man’s adaptation of the wagon wheel. The thinking must have been something like: “If the wheel turns when it travels along the road, what would happen if the road were to run under the wheel? Like that stream there, where the water runs down the valley?” That and the windmill—another adaptation of the wheel—meant that our earliest machines imparted a circular motion to all our tasks. The first grain mills not powered by human muscles rubbing the kernels between one stone and another used water or wind power to drive large stone grinding wheels. When steam was first applied to pumping out mines and then to driving riverboats, the up-and-down, in-and-out action of the cylinder and its piston had to be converted to the more familiar circular motion through a crankshaft and a series of gears.

The first electric motors mounted magnets in a circle around a shaft and spun them inside an armature of other magnets. But, as demonstrated by magneto-hydrodynamic (MHD) generators for making electric current, and linear induction motors (LIM) and solenoids for applying it, spinning shafts are not the only way to use electricity. However, the concept of an “engine” was so firmly fixed on the idea of circular motion and rotating shafts that almost all our inventive effort went into making these things go round. And certainly, a spinning crankshaft or rotor mated well with turning the wheels of our “horseless carriages.”

Perhaps things would have come out differently if those indigenous American cultures had advanced far enough, without early influence from the Old World, to discover and make use of electricity. Then, inspired by human and animal muscles, the leverage in elbows and knees, and their experience of grinding corn and cocoa beans under oblong stones going back and forth on the flat-rock metate, the first electricity use might have been linear instead of circular.

So the vision I had, watching those horses drag the truck out of the snowbank, was that we may one day return to that linear motive force—but not by reverting to the use of animal muscles. With our advances in computer technology and sophisticated control systems, people are now working on robotics and other automata—things that move by themselves. And right now, the science seems to be at a fork in the road.

Many robots roll around on wheels, but they all require smooth surfaces, flat floors, and inclined ramps or elevators instead of stairs to go from one floor to the next. True, there are neat ways to make a trio of wheels rotating around a common axle climb a set of stairs—but that’s a one-time application, designed for a stair step with particular dimensions of elevation and depth. That triple wheel totally breaks down if the gadget encounters a step that’s too tall or has to move across a boulder field. The Mars rovers use cleated wheels, but they must stick to the wind-swept plains and the fortuitous ancient streambed.

Various laboratories are now working on walking robots, which use linear motors based on pistons, pulleys, tension cables, and other straight-line drives to work the joints in hypothetical hips, knees, and ankles. They require a sense of balance—usually through gyroscopes and strain gauges—to remain upright. And they must be coupled with sophisticated control systems, usually combining visual and tactile sensors, to place their feet accurately and in proper order. From what I can see in the video clips that occasionally get out to the public, these robots can walk or run on level surfaces and up modest inclines—about the same achievement as a wheeled vehicle.

One of my articles of faith is that we are not at the end of our discoveries in any field. In many applications, linear motors, computer controls, and sensor systems to emulate animal models are lagging behind rotary motors driving shafts and wheels. But we are still in the early stages of discovery and development in robotics. I can envision a future—not perhaps in the next hundred years, but certainly within the next thousand—where cyber-controlled devices will be so ubiquitous and the use of electro-mechanically adaptive materials that emulate muscles will be so common, that the most useful vehicle for human and cargo transport will not be wheeled. Instead, when we must travel over the ground, instead of flying, we will use multi-legged transports. Being as surefooted as a horse or mule, they won’t need a prior investment in the infrastructure of flattened surfaces, graded roadbeds, or steel rails. Riding on or inside one of these cyber-creatures would mean that your route and destination are not limited to a layout of prepared pathways. You could travel cross-country, or even climb the side of a mountain, if you chose.

Of course, by then we humans will probably have paved the surface of the Earth and even extended our roads a good ways out beyond the low-tide mark. But on the other planets we likely will not have—or take the time to make—such conveniences. A mechanical horse would go a long way on Mars or Titan. And its only disadvantage would be that, unlike a real horse, you can’t eat it if your supply of food packs happens to run out.

1. Classically, these six machines are the lever, wheel and axle, pulley, inclined plane, wedge, and screw. The definition of a simple machine is a device that changes the direction or magnitude of an applied force. But if you want to get mystical about it, the wheel and pulley are just different applications of the same invention, as are the wedge and the inclined plane. So definitions of the earliest inventions will vary.

Sunday, December 6, 2015

The First Great Revolution

One of the great dividing lines in early evolution on this planet—the first great revolution, if you will—was the move from single-celled organisms like bacteria to multi-celled organisms like plants, animals, and the rest of us. That is, in the terms that biologists use, to go from prokaryotes to eukaryotes.1

The main difference in cellular structure between prokaryotes and eukaryotes is the presence of a nucleus. Bacteria have none. Their DNA is loosely scattered throughout the cellular protoplasm. This means their DNA gets randomly and continuously transcribed and translated as part of their basic lifestyle. All of their DNA, all the time, is templating the proteins needed to make up and maintain the bacterium. When times are good and the bacterium can absorb nutrients, it feeds and grows. And when the amount of proteinaceous material made by the translated DNA becomes too great for the cellular membrane to contain, the bacterium copies those loose strands of DNA, divides the membrane down the middle, and becomes two new cells.2

What the multi-celled eukaryotes had to learn through the random mutations of evolution—in order to solve a problem that the prokaryotes never encountered—was how to regulate all that transcribing and translating of internal DNA as their various cells acquired different functions and became different types of cells. The first question was how to keep order among the cells in a multi-celled organism, so that each one doesn’t keep swelling and dividing, swelling and dividing, until the whole organism becomes too big and ungainly to feed itself, manage its environment, and maintain its basic functions—and then swells up, chokes, and dies. The second question was how to use the encyclopedic DNA that comes down from the organism’s parents, as contributed by both egg and sperm and combined in the zygote, to create different types of cells in different types of tissues. Loose strands of DNA getting endlessly translated into all the potentially available proteins just doesn’t work in a multi-cellular context.3

But, almost from the beginning, it appears that geneticists never stopped to think about these problems. The discipline early adopted the “Central Dogma” of genetics: that DNA transcribes to RNA, which codes for proteins. The code written in deoxyribonucleic acid (DNA) inside the cell nucleus gets copied—almost at random, it would seem—into a similar type of coding in ribonucleic acid (RNA), which goes out into the cell body as messenger RNA (mRNA) to be captured and read by the ribosome—a structure made out of proteins and strands of RNA itself—which thumbs through the messenger RNA code and assembles proteins out of amino acids, the small carbon-based molecules free-floating in the cell. How the transcribing of all this nuclear DNA was controlled, and how segments of it were chosen to produce only those proteins that the cell type needed—those were questions that the Central Dogma either failed to consider or put aside for later.

Now it’s later, and we have the answer. It’s an answer we probably could not have discovered without sequencing the entire human genome, a working draft of which was completed in the year 2000. And it would have taken longer if the Human Genome Project had stuck with its initial approach of fishing out genes by searching through the genome for the start codon that is common to every protein-coding gene, represented by the bases adenine, thymine, and guanine, in that order, ATG, which also codes for the amino acid methionine. The approach was simple: look for an ATG codon anywhere in the genome and sequence from there until you reach one of the “stop codons,” which are TAA, TGA, or TAG in the DNA sequence (UAA, UGA, or UAG in the messenger RNA).
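The gene-fishing approach described above can be sketched in a few lines: scan for an ATG, then read codon by codon until a stop. This is a toy illustration only (using the DNA-strand stop codons), not the Human Genome Project’s actual software:

```python
# Toy open-reading-frame scan: start at any ATG and read three
# bases at a time until a stop codon appears in the same frame.

STOP_CODONS = {"TAA", "TGA", "TAG"}

def find_orfs(genome: str, min_codons: int = 2):
    """Return (start, end) spans that begin with ATG and end at a stop codon."""
    orfs = []
    for start in range(len(genome) - 2):
        if genome[start:start + 3] != "ATG":
            continue
        for pos in range(start + 3, len(genome) - 2, 3):
            if genome[pos:pos + 3] in STOP_CODONS:
                if (pos - start) // 3 >= min_codons:
                    orfs.append((start, pos + 3))
                break
    return orfs

# A short made-up sequence: ATG, two more codons, then a TAA stop.
print(find_orfs("CCATGAAACCCTAAGG"))  # [(2, 14)]
```

The real pipeline was vastly more elaborate, but the core bet was the same: every protein-coding gene announces itself with ATG.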

That method would have been fruitful, except it would have run out of gene candidates early on. A few years after the Human Genome Project started, however, genetic scientist Craig Venter proposed another approach, which he called “shotgun sequencing.” His idea was to chop the entire human genome up into random strings of a couple hundred bases each, sequence them all, then toss the newly discovered A’s, C’s, G’s, and T’s into a supercomputer and let it sort them out. The computer did this by finding identical sequences, considering them to be possible overlaps, linking the overlapping strings together, and putting the whole thing into one long, coherent order. Venter’s rationale was, why fish for the genes individually when you could just drain the lake and pick up all the fish? The head of my former company, Applied Biosystems, maker of the dominant form of gene-sequencing equipment, backed Venter in this approach, creating a company called Celera—from the Latin for “swift”—to make its own run at the human genome. They sped up the process so much that the other labs in the Human Genome Project soon adopted shotgun sequencing and finished the program years ahead of their originally projected end date.
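Venter’s drain-the-lake idea rests on overlap detection. Here is a deliberately naive greedy sketch of the principle; real assemblers handle sequencing errors, repeats, and billions of reads, and the fragments below are invented:

```python
# Greedy overlap assembly: repeatedly find the pair of reads with the
# longest suffix-prefix overlap and merge them into one longer string.

def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break  # no overlaps left; remaining contigs stay separate
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

# Three overlapping fragments of the sequence ATGGCGTACGTTAG.
print(assemble(["ATGGCGTA", "GCGTACGT", "ACGTTAG"]))  # ['ATGGCGTACGTTAG']
```

The supercomputing challenge was exactly this matching step, scaled up to tens of millions of fragments.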

What they discovered then was that, of the three billion base pairs in the genome—all those sequenced A’s, C’s, G’s, and T’s—only a small fraction, on the order of one or two percent, coded for proteins according to the Central Dogma. And the use that these protein-coding genes made of each sequence was highly complex, involving patches of expressed sequences (called “exons”) and patches of intervening sequences (called “introns”) that allowed each gene to be spliced together and interpreted in alternate ways to make different but related proteins. That was a really clever trick of evolution. But the rest of the genome, the vast majority of it, seemed kind of dumb: just nonsense coding. Scientists started calling it “junk DNA,” and imagined it represented genes that we humans no longer used—from our early heritage first as fish, then amphibians, then reptiles, and finally mammals. And now those discarded gene sequences were slowly mutating themselves into nonsense.
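The exon-splicing trick mentioned above, one gene yielding several related proteins, can be illustrated with a toy enumeration. Real splicing is directed by sequence signals in the introns, not by brute-force combination; the exon names are invented:

```python
# Toy alternative splicing: from one gene's exons, different ordered
# subsets can be joined into different mRNAs, hence related proteins.
from itertools import combinations

def isoforms(exons):
    """All splice variants that keep the first and last exon, in order."""
    first, *middle, last = exons
    variants = []
    for r in range(len(middle) + 1):
        for keep in combinations(middle, r):
            variants.append([first, *keep, last])
    return variants

for v in isoforms(["E1", "E2", "E3", "E4"]):
    print("-".join(v))
```

Four exons here yield four variants; a gene with a dozen optional exons yields over a thousand, which is part of why humans get by with so few protein-coding genes.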

One day in 2002 or 2003, however, I was crossing the Applied Biosystems campus with one of our scientists and she told me, “I don’t believe in junk DNA.” Her reasoning was that copying all that excess DNA each time a cell divides consumes a lot of energy. The backbone of the DNA molecule is, after all, a series of phosphate bonds. These bonds are also part of the cell’s energy molecule, adenosine triphosphate. Since phosphorus is a relatively rare element in the human body, it makes no sense that we would store a lot of it in DNA sequences that have no meaning.

Along about this time, also, genetic scientists began isolating and studying short strands of RNA, only about twenty-two bases long, called “microRNAs.” These were found in the cell nucleus, never seemed to go out into the cell body for protein coding or any other work, and appeared to be associated with “gene silencing” in a process called “RNA interference.” At first, these microRNAs looked like some kind of accident that could occasionally turn off a gene. One of the earliest incidents studied was microRNA interference with a petunia’s ability to produce purple pigment, so that the flowers turned out white.

And finally, about 2004, I heard a presentation at Applied Biosystems by Eric Davidson from the California Institute of Technology.4 He and his colleagues had been studying sea urchin embryos—because, as he explained, they could start 10,000 embryos in the morning and sacrifice them in the afternoon, and nobody was likely to complain. You can’t do that with puppies or human babies.

What Davidson and his group were learning was that shortly after the fertilized egg begins dividing and the cells form a spherical structure called a blastula, the cells start differentiating based on their position within that sphere. Some become a bony spine that goes on to form the urchin’s skeleton. Others become skin or gut or nerve tissue. By sacrificing those 10,000 embryos in tightly spaced time frames—on the order of ten to fifteen minutes—and examining their nuclear DNA, Davidson and his colleagues discovered the mechanism whereby one patch of DNA begins to transcribe a bit of microRNA that stays inside the nucleus, finds a complementary promoter sequence somewhere else in the urchin’s genome, triggers the transcription of another bit of microRNA, which settles in another place, and this continues on and on, starting a cascade of differentiation effects. From the first transcription, the course of this cascade will alter depending on what quadrant of the blastula the cell originally occupied and the elapsed time since the fusion of egg and sperm.
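The cascade Davidson described can be caricatured as a lookup table: which regulator fires next depends on the current regulator and the cell’s position in the blastula. Everything in this sketch, including the gene names, positions, and programs, is invented for illustration:

```python
# Toy gene-regulatory cascade: each (current regulator, cell position)
# pair determines which regulator gets transcribed next, so cells in
# different positions follow different differentiation paths.

CASCADE = {
    ("start", "animal-pole"):   "regA",
    ("start", "vegetal-pole"):  "regB",
    ("regA",  "animal-pole"):   "skin-program",
    ("regB",  "vegetal-pole"):  "skeleton-program",
}

def differentiate(position: str, steps: int = 2):
    state, path = "start", []
    for _ in range(steps):
        state = CASCADE.get((state, position), state)
        path.append(state)
    return path

print(differentiate("animal-pole"))   # ['regA', 'skin-program']
print(differentiate("vegetal-pole"))  # ['regB', 'skeleton-program']
```

The same starting genome, run through different branches of the table, ends up expressing different programs, which is the essence of a gene regulatory network.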

By studying a similar process in other animals that haven’t shared a common ancestor with the sea urchin in hundreds of millions of years, the Davidson group determined that this DNA to microRNA to DNA to microRNA action is highly conserved, which suggests that something very similar happens in other animals and in humans as well. In short, the Caltech lab uncovered a gene regulatory network that controls the development of each cell and its differentiation from other cells in the organism. It is why some cells become liver cells and produce only the proteins needed for liver cell functions, while others become skin, brain, or bone cells and produce only the proteins needed for their own functions. And, in the case of stem cells, the process goes only so far and lets the partially differentiated cell remain in an immature state until needed to repair damaged tissues—at which time the cell completes its differentiation.5

While the small fraction of the genome that codes for proteins is the body’s parts list, the rest—once thought to be “junk”—is actually the instruction manual for the body’s self-assembly. And that’s an even more clever trick of evolution.

This great revolution came in several parts, all of which more or less depended on each other. First, the DNA in the cell body had to be gathered inside a new structure called the nucleus. That way, the creation and deployment of these regulatory microRNAs could be contained and focused. Second, a whole new regimen of non-protein-coding sequences had to develop to govern gene promotion and regulation. These non-coding sequences had to become important to, and transmitted along with, the DNA load in each cell. And third, a new type of cell had to develop: the gamete, which carries only half the organism’s full DNA complement (the normal cell carries two copies of each chromosome, as homologous pairs), multiplies by a new cell-division process called meiosis in order to halve the chromosome content, and is then released as egg and sperm to form the next generation of offspring.6

That’s a lot of systems and structures to ask a one-celled creature to develop in short order. I would almost think the origination of all this complexity would need a divine hand or intelligent guidance—except that I can appreciate how evolution works. Mutations are always happening. Some are bad and kill their subject either immediately or soon after inception or maturation. And those mutations that aren’t lethal might actually be beneficial in forming new proteins that can take advantage of a changing environment or other molecular opportunities. But, more likely, any one mutation just hangs around in the organism’s genome until it’s either eliminated by accident or becomes combined with some other mutation, either in the same protein string or in another protein. Eventually, the modified protein may find a purpose. And if that happens often enough, the whole organism changes slowly over generations.

They say “rust never sleeps,” and neither does evolution. Its actions and effects are constantly shuffling genes and proteins in millions of individuals, in every species, and from one generation to the next. And remember that one-celled organisms go through hundreds of generations in the time that human babies are just forming in the womb, getting born, and starting to grow. The process pushes forward in all tissues, in all sorts of configurations, during every moment of time over a scale that goes back three and a half billion years, almost to the beginnings of this planet’s formation in a bombardment of hot rocks.

Seen that way, without regard for individual fortunes or for the thousands, nay, millions of embryos that Mother Nature herself hatches and sacrifices in every generation, you can imagine some pretty remarkable things occurring. Like a bacterium with its loosely organized DNA strands growing up to become all sorts of highly organized and differentiated creatures—including us humans with the organized brains to discover and start to think about this process.

1. Of course, there are single-celled eukaryotes, too. They are generally in the kingdom Protista and include many forms of algae and single-celled protozoa like the paramecium.

2. And when times aren’t good, when the environment becomes too cold or dry, or the food source goes away, then the bacterium stops processing its DNA, loses its internal water content, shrinks down to a hard little spore, and prepares for a long winter.

3. What follows is a story I’ve told before—see, for example, Gene Deserts and the Future of Medicine from December 5, 2010; The Flowering of Life from August 25, 2013; The Chemistry of Control from May 11, 2014; and Continuing Mysteries of the Genome from October 12, 2014. But the story bears repeating in this context.

4. Eric Davidson died this past September from a heart attack. We lost a great mind there.

5. And you’d better believe that genetics labs all over the world are searching for and studying those regulatory networks, trying to figure out how they can induce stem cells to grow outside the body and become complete new organs that we can use for implantation.

6. Of course, some eukaryotes like the paramecium reproduce asexually, without sharing genetic material with another organism. Most cases of this kind are found in one-celled protists, although in higher animals the process is called “parthenogenesis,” or virgin birth.

Sunday, November 29, 2015

Phlogiston, Aether, and Dark Matter

Having no formal training in much of anything except English literature and the martial arts, I tend to be an outsider in disciplines relating to science, religion, politics, and other art forms. That means I am not wedded to any particular doctrines or viewpoints. But I am interested enough, and read constantly—if haphazardly—enough that I tend to have opinions based on some knowledge if not deep, formal study. This means that, in the eyes of the conventionally trained, I am a dilettante and a meddling fool, while to the lay person and the general public, I am more of a voice lost in the wilderness. For myself, I am a contrarian who has not drunk the Kool-Aid of the formalists.1

As a science fiction writer, this can be awkward if not treacherous. I have to know the general principles of science, understand the nature of scientific inquiry, and be able to identify faulty thinking. But unlike any formally trained scientist, I must also be able to color outside the lines and engage my imagination in speculations beyond the limits of the known. My stories have to take us from what is already proven to what is possible without veering into what has already been shown to be dead wrong.2

A rich field for this sort of speculation lies in astronomy and its co-discipline of cosmology. We humans have been able to learn a remarkable amount about the universe, its origins, and its destiny just by looking outward from the surface of this planet with our eyes, our optical instruments, and now with technologies that can look beyond the spectrum of visible light into all the frequencies of photon radiation and measure a number of flying particles as well. We can analyze these energies, both on the Earth’s surface and from probes in intra-solar space. But the more we look, the more we find questions and contradictions. It’s a place rife with speculation and theory.

One of the theories has to do with gravity. We understand it pretty well at the scales of human beings, planets, and stars—or at least we think we do. But at the smallest scale, that of the elementary particles, gravity is so weak that it seems to have no effect. And at the largest scale, that of galaxies and galactic clusters, gravity seems far stronger than we can account for with the visible matter that makes up these objects. If we judge by the brightly shining stars we can see in the average galaxy, plus the unbright stuff we can infer from our own solar system—planets, moons, asteroids, comets, and dust, all of which are just a fraction of most any star’s mass—then our best calculations cannot account for the motion of the stars gravitationally bound in the galaxy.

With our current understanding of gravity and mass, the visible stars in a galaxy like our Milky Way or our sister Andromeda should move at varying speeds. Those in close to the center of galactic mass should trace their orbits rapidly around the dense galactic center, while those farther out should appear to move more slowly, taking more time to complete their longer orbits. This is the pattern in our solar system, where the inner planets orbit the Sun’s mass in a hurry of days or the Earth’s own year, while the outer planets saunter along in decades and even centuries. For visual effect, imagine chips of wood circling a maelstrom: those closer to the center move faster than those out on the edge.

But this is not what we see in most galaxies that have a pronounced spin. Instead of moving independently, the stars appear to move together in a flat rotation, as if they were painted in a fixed pattern on a spinning disk. It’s almost as if the mass of the galaxy increases with a star’s distance from the center. None of our calculations of the probable amounts of planets, dust, and cold icy bodies can account for this extra mass. No amount of mass attributed to a central black hole—which we now believe lies at the heart of nearly every large galaxy—accounts for this extra mass, either.
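The expected Keplerian falloff is easy to quantify: for orbits around a central mass M, the circular speed is v = √(GM/r), so a star four times farther out should move half as fast. A rough sketch with illustrative round numbers (the mass figure is a stand-in, not a measured value):

```python
# Keplerian expectation for orbital speed vs distance from the center.
# Observed galactic rotation curves stay roughly flat instead of
# falling off like this, which is the dark-matter puzzle.
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 2e41        # kg, a round stand-in for a galaxy's visible mass

def keplerian_speed(r_m: float) -> float:
    """Circular orbital speed around a central point mass, in m/s."""
    return math.sqrt(G * M / r_m)

kpc = 3.086e19  # meters per kiloparsec
for r in (5, 10, 20, 40):  # kiloparsecs from the galactic center
    print(f"r = {r:2d} kpc  v ~ {keplerian_speed(r * kpc) / 1000:.0f} km/s")
```

The printed speeds halve with each quadrupling of radius; measured curves for spiral galaxies instead hold nearly constant far out into the disk.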

Another observation suggests this extra mass as well. All massive bodies bend spacetime—that is, the effect of bodies existing and moving in space over time—and cause light rays to bend around them, either a little or a lot, depending on the amount of mass. Einstein’s prediction of this warping effect was confirmed a few years later, during the total solar eclipse of 1919: stars whose light passed close to the Sun’s edge appeared shifted from their known positions. Our Sun’s great mass and deep gravity well was bending the light rays from those more distant stars on their way to Earth. In the same way, galaxies that lie between observers on Earth and more distant galactic objects bend the light from those objects, so that they may appear offset from where we know they actually lie. In some cases, their shapes are distorted into a halo image surrounding the intervening body. This is called “gravitational lensing.” In all galactic cases, the amount of gravity required to achieve the effects we see is more than we can measure in just the stars of the lensing body.

To account for this flat rotation around a galactic core and its increased lensing power, many cosmologists propose the existence of “dark matter.” They envision this as some kind of invisible mass, perhaps a particle, perhaps not, that sheds no light and does not interact—except gravitationally—with the kind of matter that we know as atoms and their constituent particles.3

In some early theories, dark matter was proposed as an excessive number of normal interstellar bodies like brown dwarfs, neutron stars, and black holes that just don’t emit much light. The common name for these objects supposedly scattered in a galaxy’s farther reaches was massive compact halo objects, or MACHOs. Sometimes they are called a robust association of massive baryonic objects, or RAMBOs. Although we haven’t seen evidence for enough of these dark bodies, when you add in a measure of diffuse gas in a galaxy, you … still don’t come up with enough mass to satisfy observations.

The most common theory is that dark matter is composed of WIMPs, weakly interacting massive particles, which lie outside the Standard Model of quantum mechanics. They fall into the category of non-baryonic matter—that is, not built from the protons and neutrons that make up ordinary atoms.4 These non-baryonic particles are also classified as “cold,” “warm,” or “hot,” based on how fast they are moving and their effect on the early universe. So, if you follow the WIMP particle theory of dark matter, rather than the MACHO brown dwarf theory, you are opting for massive particles that we cannot see or detect but that pass through every square inch of our bodies, the Earth, and everything else every second. They have no effect on us, except for their mass and inherent gravity—which would add to the gravity effects we can already account for on the local level pretty well from the masses of the Sun, the Earth, the Moon, and other nearby bodies. Uh-huh.

The more I think about this, the more I am reminded of past theories about phantom substances created to account for unexpected experimental results.

Early naturalists, before the development of commonly accepted principles of chemistry and molecular structure, observed fire consuming a log of wood and reducing it to a pile of fluffy ashes, which have not a tenth of the wood’s weight and content. To account for the missing mass, they proposed that burning released an invisible substance called “phlogiston,” which if recaptured from the air could be reconstituted with the ashes into new, hard wood. Now, of course, we understand that plants and other organic matter are rich in hydrocarbon molecules, which with proper heating in the presence of oxygen can be transformed into carbon dioxide gas and water vapor, leaving only the mineral residue from the wood as ash. A universal fire-like substance like phlogiston is not needed in this process.

Early physicists, before the development of quantum mechanics and special relativity, believed that light traveled in waves. Light couldn’t be a particle, because particles traveled in straight lines, like hurled rocks and fired bullets. Since ocean waves represent regular motion in water molecules, they proposed that light waves were regular motion in an invisible substance—not air, because light also traveled in the vacuum of space—that permeated the cosmos called “luminiferous aether.” It took an experiment by Albert Michelson and Edward Morley in 1887, at what is now Case Western Reserve University in Cleveland, to show that aether could not exist. If this substance were some kind of universal fluid, then the Earth must pass through it like a fish through water. In the experiment, Michelson and Morley split a beam of light at right angles and measured its travel in two different directions. Clearly, if light was a wave in aether, then the beam traveling in the direction of Earth’s passage, and so encountering a “headwind” in the aether, would be slower than the beam crossing the Earth’s passage and so encountering no headwind. They conducted their experiment on a rotating table, to allow for all possible angular effects. When light showed no preferred direction, they dismissed aether as a substance. This opened the possibility that light was a quantized packet of energy—equivalent to a lightweight particle, the photon—that travels at a constant velocity, with an oscillation frequency that corresponds to its inherent energy.
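The classical expectation Michelson and Morley tested can be worked out directly: with an aether wind of speed v, a round trip along the wind takes (2L/c)/(1 − v²/c²), while one across it takes (2L/c)/√(1 − v²/c²). A sketch with round numbers for Earth’s orbital speed and an arm length of roughly their effective path:

```python
# Classical (aether-theory) timing difference between the two arms
# of a Michelson interferometer. All values are round illustrative
# figures, not the experiment's exact parameters.
import math

c = 2.998e8   # speed of light, m/s
v = 3.0e4     # Earth's orbital speed, m/s
L = 11.0      # effective interferometer arm length, m

beta2 = (v / c) ** 2
t_parallel = (2 * L / c) / (1 - beta2)                # with and against the "wind"
t_perpendicular = (2 * L / c) / math.sqrt(1 - beta2)  # across the "wind"
dt = t_parallel - t_perpendicular

print(f"expected timing difference ~ {dt:.2e} s")
print(f"as a path difference       ~ {dt * c:.2e} m")
```

The path difference comes out to roughly a fifth of a wavelength of visible light, well within the instrument’s sensitivity, which is why the null result was so damning for aether.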

Now, I may be indulging in faulty reasoning here. But if the phantom substances phlogiston and aether—materials created to explain our imperfect understanding of then-current observations—were later proven wrong, then might not the phantom WIMPs of dark matter also be an easy first approximation of a much more complex situation? In this I am reminded of the scene in one of my favorite movies, Aliens, where the slacker soldier Hudson is tracking the predatory aliens with a motion detector. When the device shows them actually inside the room where the group has barricaded itself, Hudson bawls, “Hey, this thing ain’t reading right!” And the more dependable Corporal Hicks replies, “Maybe you aren’t reading it right.”

Maybe we aren’t reading the nature of gravity right. Other approaches have been proposed to account for the missing mass on a galactic scale. One is that the mass might be found in extra dimensions, but this is hardly better than massive, non-interacting particles. “Other dimensions” is becoming the broom closet into which physicists sweep any inconvenient observation that does not jibe with current theory. A possibly better approach is that the birth of the universe may have distorted and deformed the shape of quantum fields as we understand them. This is getting closer to an admission that the universe is strange and we haven’t worked out all the strangeness yet. A third approach is to admit that, as Einstein’s theories about time, space, and gravity enlarged upon those of Newton, perhaps at the galactic scale we are waiting for a new physicist to propose and prove further modifications to, and a refinement of, our understanding.5

I, for one, am more comfortable with that—with knowing our knowledge is incomplete and waiting to be furthered—than with accepting the existence of an invisible particle we can neither detect nor measure. I don’t begrudge the astronomic community spending money on the search for this invisible particle, just on the off-chance it might exist. But I’m not about to become wedded to the idea as some kind of orthodoxy.

Our understanding of the universe is still in its infancy. As a sapient species, we have a lot of growing up to do. I can accept that.

1. My first publisher, Jim Baen, who was a contrarian himself, advocated this position. “Contrarians always win,” he once told me. My experience has been that, at least, they don’t get trampled by the crowd.

2. When I was first starting out as a writer, I submitted a short story—one of my very few, for I am by trade and inclination a novelist—about a harmless-looking glass vacuum bottle, such as you might use for hot coffee or soup. The point was that the cap was stuck on tight and, when the thoughtless protagonist finally succeeded in opening it, the vacuum inside began sucking first the air, then all the matter out of first the room and then the universe. When the magazine returned this story, the kindly editor told me that science fiction about dead theories like luminiferous aether just doesn’t work. Being adaptable and easily trained, I have since been careful—or at least I hope I have—not to roam into the dead lands of bogus science.

3. Astronomers also speak of “dark energy,” which has to do with the fact that the universe seems to be expanding far more rapidly than you would expect from the impetus given by the Big Bang. Dark energy outstrips the tendency of the universe’s mass—even accounting for that dark matter—to pull everything back together. But let’s save dark energy for another time.

4. Ironically, the term “baryon” has its roots in the Greek word for “heavy.” So while normal baryonic matter is “heavy” stuff, this non-baryonic matter—which supposedly accounts for most of the mass in the universe—is the “non-heavy” stuff. Weird.

5. See my complementary blogs “Three Things We Don’t Know About Physics” from December 30, 2012 and January 6, 2013.

Sunday, November 22, 2015

A Spectrum of Self-Awareness

A recent article from Science Alert1 suggests that animals must have some level of self-awareness in order to project different choices into a future state and make decisions. For example, rats in a maze will pause and consider different possible pathways. Without some sense of self, the animal would not be able to distinguish a projected choice from the memory of an actual experience. This suggests that awareness begins with a sense of the difference between the knower and the known. One might also consider this sense the starting point, the far end of the spectrum, of self-awareness.

Finding that point is useful to writers and thinkers, like me, who ponder the human condition and the issue of creating a mechanical intelligence that has human-like capability.2 Certainly, artificial intelligence is more than simply a test of raw computing power or problem solving. Awareness and reflection—and the self-doubt and second-guessing to which they can lead—would only get in the way of straight-line computational speed and accuracy. But when we look for a mimic of the human mind, we look for that elusive quality of self-awareness.

This is more than the ability of a machine to create verbal responses that a human being cannot distinguish from those of another living intelligence—the Turing test. We already have programs that can do that, and they are not particularly intelligent and certainly not self-aware.3 For a program to become intelligent on the terms we mean by “artificial intelligence,” it would have to be able to distinguish among past experiences, current operations, and future projections. It would have to be anchored in time, as human beings are.

We recently learned about a friend’s cat who had lost the use of its eye in a fight and would soon lose the eyeball as well. My immediate reaction was that most animals adapt to these injuries rather well because they are not burdened by the existential angst of considering their maimed and reduced state: “Oh, no! I’ll never see properly again! And, oh, I look so hideous!” If a cat that loses its depth perception because it has lost an eye even thinks of its handicap at all, it would be with a sense of befuddlement that arises only when the loss becomes immediately apparent: “Gee, it used to be easier to gauge the height of that cabinet. But this time I fell short. What gives?” This is akin to the puzzlement of an animal that’s used to going in and out of a “doggie door” that suddenly becomes blocked: “Gee, there used to be an opening here!”

Animals may be able—like the rats in the Science Alert article—to distinguish between present decisions and past experiences, but they don’t live in a conscious web of time. They don’t live in much of a past, other than to know that the cabinet used to be of a more accurate height or the doggie door used to swing open. Perhaps a dog, seeing its owner start to fill the tub, can recall the unpleasant experience of bath time, or seeing the owner put on his or her coat and reach for the leash, can anticipate the joy of a walk. But they don’t delve into and reflect on past circumstances and actions, and so they can have few regrets … although a dog can experience a limited sense of shame when the owner catches it in known bad behavior and will live with that sense of self-doubt or self-loathing for a few minutes—or until something distracts the dog, like a good shake. Similarly, the animal does not have much, if any, sense of the future. The anxiety the dog displays when its owner leaves the house without taking the leash and the dog is the generalized anxiety of separation from the adopted pack leader, not an internal estimation of hours or days left alone by the window or the possibility of absolute abandonment.

This opens a new thought, however. How could our dogs, cats, horses, and other pets love us—which I’m sure they do—without a primal sense of self-awareness? The affection they feel is more than just the instinctive following and responding to a known food source upon which the animal has come to depend. The relationship between pet and human includes trust, play routines, demonstrated affection, and emotional involvement—all of which require some sense of self. The animal can distinguish between its situation and that of another independent being. It forms a bond that reflects its own internal emotional state, an awareness of the other’s emotional state, and a sense of the difference between lover and loved one. This is analogous to understanding the difference between the knower and the known.

The IBM computer program “Watson” could compete at the television game Jeopardy! because it could explore the nuances of language, word meaning, historical relationships, causality, and human concepts such as puns and word-play. It had command lines that would drive it forward through its comparative processes. And it had weighting factors that would help it decide when it was detecting a relationship based on logic, historical connection, or word association. It had incredible skill at answering the riddles on the TV show, and—if one thinks about the current company product offering called “Watson Analytics®”—these same techniques are now being used in mining complex data and answering human-inspired questions without specific programming or resort to a structured query language.

But is the Watson machine aware? Does it know that it’s a Jeopardy! champion? And if someone were to tell the program of this fact, would it distinguish between its new status and any other fact in its capacious database? That is, does Watson know what Watson is? Can it know this in the same way that a dog knows it’s different from a chair or another dog or a new human being, and so can place itself in a mentally or emotionally projected relationship of trust or fear, pack dominance or dependence, fight or play, in relation to other animate beings? … Right now, I don’t think so, but at what point might Watson cross over to thinking about itself as a unique identity?

We humans live in a web of time. We have a past, present, and future—and invest a lot of our brain power and emotional stability in examining ourselves in all three temporal domains. We experience exquisite existential questions involving complex tenses represented by the words “might have,” “must have,” “could have,” and “should have” in dealing with the past; “may,” “must,” “can,” and “shall” in the present; and “might,” “must,” “could,” and “should” in the future. We can see ourselves in the past in relation to the deeper past when we employ the pluperfect tense (as in “I had been tried for murder”), and we can look back on a completed action from an imagined future point with the future perfect (as in “I will have been tried for murder”). We swim in time as a fish swims in water, and all of this relates to a being we know, understand, and study as ourselves, our own condition, our cherished relationships and advantages, our perceived qualities and shortcomings, our known failings and our expected about-to-fails. We can also extend this awareness to situations and people outside ourselves and imaginatively outside our experience: other people’s strengths and weaknesses, what they did in the past, what they will think and do in the future, and how we did and will relate to them.

Can a computer think like this? Most programs and most processors exist in the now. Ask them to solve for x, and they will respond “x equals three.” Ask again two minutes later, and you get the same result, taking all the steps to arrive at the solution as before. Only if some human programmer has added the capability to assemble a database of previous problems and their solutions, plus a loop in real time that asks if the problem has been encountered in the past, will the program consider its own history. I don’t know if Watson has such a database and loop, but it might save a lot of time—particularly if the database preserved not just identical problems but parsed them into patterns of similar problems and their possibly similar solutions. “Oh, I’ve seen this one before! I know how to do it!”
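The “database plus loop” the paragraph describes is essentially what programmers call memoization: check a store of past problems before solving from scratch. A minimal sketch, assuming a toy problem encoding and solver of my own invention (nothing here reflects how Watson actually works):

```python
# A toy illustration of the "database plus loop" idea: before solving,
# check whether an identical problem has been encountered; if so, reuse
# the remembered answer instead of taking all the steps again.
# The problem encoding (a, b, c) meaning "solve a*x + b = c" is hypothetical.

solved = {}  # the database: past problems mapped to their solutions

def solve(problem):
    """Solve a*x + b = c for x, consulting past solutions first."""
    if problem in solved:       # "Oh, I've seen this one before!"
        return solved[problem]
    a, b, c = problem
    x = (c - b) / a             # otherwise, work it out as before
    solved[problem] = x         # and remember it for next time
    return x

print(solve((2, 1, 7)))  # first encounter: computed, prints 3.0
print(solve((2, 1, 7)))  # second encounter: recalled from the database
```

Recognizing merely *similar* problems, as the paragraph goes on to suggest, is the harder step: it requires parsing problems into patterns rather than matching them verbatim.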

The next step would be to program the computer to use its excess processing capacity for posing its own problems, possibly leveraging past experience, solving them in advance, and entering the patterns into another database to be consulted in potential future encounters.4 All of this could be done at about as much cost in processing power as operating an elaborate graphical user interface. But would the computer then have a sense of time approaching human-scale awareness? Probably not. We would still be dealing with a Turing-type simulation of awareness.

So, are we humans at the other end of the spectrum of self-awareness? The rat, the cat, and the dog are just beginning to perceive their own states as knower separated from the known. The more advanced species like dolphins can begin to identify themselves in a mirror; apes can recognize words as relational objects, appreciate social relationships and obligations, and communicate new ideas to other members of their troop; and elephants can draw pictures in two dimensions of objects in the three-dimensional world. So are we humans—the only creatures with which we can communicate in complete sentences for the sheer pleasure of this complex intellectual play—the end state of awareness?

I try to think of a more advanced intelligence—and that’s another part of a science fiction writer’s job. It wouldn’t have just better technology or a faster or more complex recall of data. It might become godlike in terms of how human beings imagine their gods: able to perceive past, present, and future as one continuous and reversible flow, because it stands outside of time; able to know all the answers to all the questions, because it invented both knowledge and questions; able to command space and wield infinite power, because it can apply all the possible mathematical formulas and manipulate their consequences in terms of mass and energy. But is this scope of capability actually better than human awareness? Wouldn’t standing outside of time imply an awareness caught in one permanent, eternal now? Wouldn’t absolute knowledge foreclose all possibility of wonder, desire, and choice? Doesn’t complete control of space, mass, and energy suggest an explosion of energy becoming mass reminiscent of the Big Bang? Gods are not superior to human beings because they are fixed in their potential. If they are at an end point, it is an eternal and unchanging stasis. And what fun is there in that?

No, if the rat in the maze is at the delicious beginning of knowing the difference between “I’ve seen this corner before” and “I wonder what’s around that corner,” then we humans are at the beginning—but nowhere near the end—of discerning, defining, deciding, and determining the shape of the maze for ourselves. And that’s a powerful place to be.

1. See Fiona MacDonald, “Humans aren’t the only animals that are self-aware, new study suggests,” Science Alert, June 18, 2015.

2. See, for example, the story line of ME: A Novel of Self-Discovery and certain motifs of the near future in Coming of Age.

3. See Intelligence or Consciousness? from February 8, 2015.

4. Something like this is part of the SIPRE approach to defensive driving in anticipation of the React step. See SIPRE as a Way of Life from March 13, 2011.

Sunday, November 15, 2015

Vegetable Vampires

A recent article in Science magazine1 describes the mechanism by which plants of the genus Striga, including the pretty, lavender-flowered witchweed, live parasitically off other plants.

The host plants, including crops like cotton, corn, and sorghum, release tiny amounts of chemicals called strigolactones into the soil as they grow. For the host, these chemicals are both a growth hormone stimulating the root system and a lure to attract and stimulate the fungi, mycorrhizae, that provide the plant with soil nutrients like nitrogen and phosphorus. Any Striga seeds that lie dormant nearby also sense these chemicals, usually at lower concentrations than the host plant and fungi can detect. The strigolactones stimulate the witchweed seeds to germinate and then send tendrils to penetrate the host’s root system, sucking out nutrients before they reach the host itself and so blighting the crop. Each witchweed plant then produces up to 100,000 tiny seeds that disperse into the soil and wait, sometimes for decades, until another host is planted.

If that mechanism is not the perfect definition of botanical vampirism, I don’t know of one that would fit better.

Striga is a problem in Africa, where the plant apparently evolved in relation to sorghum, but it also affects crops in Europe and the United States, although to a lesser extent. The article in Science describes how researchers are studying the genes for strigolactone receptors in an effort to control infestations of witchweed and similar parasites.

As I read the article, it occurred to me that a 19th-century naturalist might have learned of such a relationship between host and parasite and then meditated on the grandeur and intricacy of God’s creation. My own reaction was similar but deflected. I meditate on the power of evolution represented by the DNA molecule and how persistent, random mutations can lead one plant to detect the growth hormones of another and turn them to its own uses. The difference, the deflection, is that I cannot think of a god or any intelligently motivated designer who would think up such a horror as witchweed, digger wasps—which lay their eggs in the larvae of other insects—and similar cases of parasitism and still consider them part of a benevolent world. A sensible god wouldn’t make witchweed, because it adds no glory or benefit or abundance to the world. A random force like evolution would make witchweed for the simple reason that it can.

This isn’t symbiosis or commensalism or any of those beneficial relationships we learned about in high-school biology. There, for example, the bacteria that live in my gut consume undigested carbohydrates in the food I eat and benefit from the warm, moist environment I provide, while I benefit from their chemical processes that manufacture some of the vitamins I need, support my immune system, and protect me from more harmful bacteria. In contrast, the witchweed does nothing for the corn or sorghum plant but instead robs it of the nutrients its roots work to obtain from the soil and diverts them to the witchweed’s own use.

This is like the parasitism of the bark beetles, corn borers, and other pests that burrow into a plant and feed on its substance. We humans can understand this kind of attack, however, because it’s not too different from a logger cutting down a living tree for lumber or a farmer stripping the ears off a corn plant for food and then baling the stalks for animal forage or burning them for fuel. In the wilds of nature, every animal feeds on either plants or another animal species, and the plants feed on the soil. This seems right and natural. The only time we humans get upset about this arrangement is when the bark beetle kills trees we plan to take for lumber ourselves or ones we cherish for their beauty and their shade, or when the corn borer robs us of a crop we plan to eat.

But I find the parasitism of the witchweed chilling. It’s not a member of the animal kingdom, like a hungry beetle or a caterpillar that will one day grow into a pretty little moth. This is one plant feeding on another, and not like a fungus growing on the underside of an already dead log. This is Dracula sucking the blood of a sleeping maiden.

In most cases of parasitism, we can eventually hope to achieve a state of balance. If the corn borer worm kills off all the corn in this field and the next, then the moth has no place to lay her eggs and the local population of the species Ostrinia nubilalis dies out. If the Ebola virus, which hijacks a cell’s genetic machinery to make copies of itself, is so aggressive that it bursts the cell membrane and then destroys so many cells that the human host dies, that particular strain of the virus dies out unless it can infect another host within hours or a few days, depending upon conditions. Most successful parasites either throttle back their aggression so that the host lives—think of the common cold, which manages to infect us again and again with new strains—or the parasite maintains a life cycle slightly less prodigious in terms of physical size or population than the host’s, so that new victims are always available.

Witchweed avoids this limitation by sowing seeds that can lie dormant for decades. If it kills off all of its hosts in the field, it can wait patiently until a new host arrives tens of generations later. This is the vampire that can sleep in its crypt for, comparatively, centuries.

I challenge any incipient gods out there to conceive of a set of tricks like this: mimicking the chemical receptors used by another plant, converting those chemical growth signals into both a trigger for germination and a tracking device to guide the parasite’s root structure sideways toward the host, and finally adapting its needs so that it can feed on nutrients manufactured by another species. Now couple all that with the ability to produce tiny seeds that can lie dormant for decades. It’s a lot to accomplish through random mutation and genetic drift.

Considering the complexity of the witchweed’s development, one might almost be tempted to think that an inventively inclined god must have had a hand in it. But as a convinced evolutionist, I know that for every Striga alive and functioning in the world today, millions of generations existed before it that had none, or one, or only some of these characteristics. The genus Striga had all of history since the first seed-bearing plants developed, about 390 million years ago, to learn its tricks. Some of the characteristics may have been immediately useful, as soon as the genetic mutation allowed them to occur. Others may have lain dormant inside the plant’s genome, like the seed waiting in the soil, until this mutated protein became useful in combination with those others to create one of the witchweed’s survival tricks.

Evolution is a long, slow process. It is not an act of creation but rather of accretion, of putting together genetic puzzle pieces, and adaptation, of leveraging the resulting proteins when accident finds a fit with the natural surroundings. If that fit works and contributes to the organism’s survival, then the genetic configuration and its protein products are preserved and become the stem for more random revision and development. The only rule concerns what can or cannot survive, given the current environment and its available opportunities.2

Working on evolutionary principles, the universe presents us with more possibilities than any sentient intelligence, no matter how powerful, can ever imagine. The universe is full of surprises. And that gives me hope, even if some of the surprises turn out to be nasty ones, like the vampiric witchweed.

1. See “How crop-killing witchweed senses its victims” by Elizabeth Pennisi, Science, October 9, 2015.

2. For more along these lines, see Evolution and Intelligent Design from February 24, 2015.

Sunday, November 8, 2015

Pounding the Slab

A couple of times a year, I take one of the motorcycles1 up to the Sierra to ride the foothills and high country with my son’s father-in-law. From the Bay Area, this means crossing the Central Valley, which for the most part means riding the freeway in the early morning or late afternoon. And on my pleasure outings and errand rides around the Bay Area, too, I spend a large fraction of my time on the highway. I call this “pounding the slab.”

Riding back roads that twist and turn through the hills is a test of skill and judgment. Your mind is occupied with holding a good line through each curve, applying the right amount of throttle and brake, paying attention to oncoming cars and straying wildlife, and simply enjoying the kinetic sweep, roll, and dive of a powerful machine negotiating random angles and trajectories.

But pounding the slab just takes a measure of watchfulness—of surrounding cars, cracks and potholes, and trash on the roadway. You can do it all with a six-second sweep of the path ahead and the mirrors on each side. And sometimes that sweep extends to fifteen and thirty seconds or more if the highway is relatively empty.2 For most of the time, my brain is on autopilot, exercising basic balance and control, employing subconscious SIPRE awareness,3 and … noodling.

Noodling may mean nothing at all. I’m enjoying the sunshine, the motion of the bike, the clouds in the sky, the feel of the wind. Sometimes, songs play in my head. I will pick up the throb of the motorcycle’s exhaust and the hum of the tires as low organ notes, usually with a subtly embedded rhythm, and compose wordless arias to match. It isn’t music you can reproduce, but it sings inside me.

I used to try to listen to music through speakers mounted inside my helmet. This isn’t illegal, the way ear buds are, because you can still—theoretically—hear traffic sounds, horns, and the bullhorn instructions of approaching police cars. But the problem is that I can’t hear the music. My favorites are usually classical symphonies,4 early rock,5 ballads,6 or movie soundtracks and show tunes—and all of them, like most sensible music, include both loud passages and soft. With wind noises coming off the helmet’s geometry, plus the bike’s exhaust and the rush of traffic, the soft parts completely disappear unless you crank the volume until the loud parts blast beyond recognition. Better to ride unfettered and enjoy the song inside my head.

More often these days, I take a problem that I need to think about and let it run through my mind while cruising at 70 mph. Usually, this is something to do with the next part of the novel I have in hand: working through a plot problem, or groping for the beginning image, effect, or line of dialogue that will kick off—I call it “the downbeat”—the next scene I am going to write.

You might think those Shakespearean coincidences, mistaken identities, cross purposes, and avowed intentions that you find in any interesting play or novel just came to the Bard while he held a quill in his hand, or that they flow naturally while I’m sitting at the keyboard doing what I call “production writing.” Sometimes they do, because the mind is a surprising mechanism when it gets going in the flow.7 But more often these plot points and twists arrive out of the dark, or from your blind side, while you’re driving, taking a shower, falling asleep or just waking up, making dinner, or doing something else completely unrelated to writing. If you’re lucky, you have paper and pen at hand to write them down. If not, I’ve learned to think the idea through in a comprehensive way, compress it as if I were making notes on paper, and assign it a key word that I will remember later. Then, with pen in hand or at the computer, I recall the word and the idea unfolds like a flower. It’s a neat trick for when you’re out on the slab doing 70 mph.

Sometimes, also, I can “prime the pump” by putting a question to my subconscious mind—such as “Well, how will the evil mastermind lure the fair damsels into his lair?”—right before I get in bed, step into the shower, or set out on my ride. Then, distracted by the business of falling asleep, lathering up, or navigating my route, my mind will sometimes churn the problem and toss back its own solution. Maybe other writers can sit down and brainstorm these issues with themselves, or open the Big Book of Plot Complications8 and find something suitable in there. But for me, it’s noodling the problem while conducting the business of life.

And for the rest, my time on the road is spent just pounding along over the pavement, avoiding cracks and potholes, and dodging trucks. Oh, and enjoying the clouds and the wind.

1. See The Iron Stable for the latest inventory.

2. And yes, traveling at my accustomed speed—which is usually a bit over the limit, so that I don’t get run down by a Volvo or a Prius on the California highways—I have sometimes been surprised by a member of the CHP motorcycle patrol, who usually sweeps by me going a good deal faster. I swear, everyone out there is doing 75 to 80 mph in a 65 mph zone.

3. SIPRE is a defensive driving technique that breaks the driver’s mindfulness down into five independent actions: Seeing, Interpreting, Predicting, Reacting—by which I mean choosing from a set of preplanned and practiced responses—and Executing. For more on this, see SIPRE as a Way of Life from March 13, 2011.

4. Beethoven, Brahms, Dvořák, Grieg, Rachmaninoff, Sibelius, Vaughan Williams—all the Romantics.

5. The Beatles, Fleetwood Mac, Journey, REO Speedwagon, Starship—back before popular music became just disorganized noise.

6. Judy Collins, Clannad, Neil Diamond, Enya, Loreena McKennitt—voices that can invoke a sense of time and mystery.

7. See Working with the Subconscious from September 30, 2012.

8. And if you know of a copy, please send it to me.

Sunday, November 1, 2015

If I Had to Live Abroad

If I had to live somewhere other than the United States—and I really have no plans to leave—then I would choose Italy as a place of refuge and exile. Of all the countries of my admittedly limited travel experience, I found Italy to be the most livable.

When we were traveling in the late 1980s to mid-’90s, my wife and I spent several weeks each on different occasions in England, France, the Netherlands (with a side trip into Germany), and Italy. And of the four, the friendliest, most spontaneous people—despite the language difference1—the easiest travel and accommodation arrangements, and the least day-to-day hassle were to be found in Italy. The art, architecture, history, and the food weren’t bad, either.

I know, the Italian government is in a perpetual state of chaos, the economy is in virtual collapse, and it takes forever—plus, it is rumored, a certain amount of discreet emolument—to obtain any kind of licensing or official action. But the Italians always seem to survive. And they do it with a cheerfulness, a flair, a sense of taste and style, that we earnest, plodding Americans can only admire.

Somewhere near the end2 of his massive history, The Rise and Fall of the Third Reich, William L. Shirer compares the German people’s attitudes toward Hitler and the Nazis with those of the Italians toward Mussolini and the Fascists. The Germans, Shirer suggests, belonged to a relatively young culture that only started to come together in the middle of the last millennium, and they believed implicitly in Hitler and his promises of national glory. But the Italians, having survived in a culture dating back two thousand years and more, which had gone from empire and world domination, through abandonment and barbarian invasion, the Renaissance and the Inquisition, territorial partition and foreign occupation, then revolution and reunion, were a relatively older and more sophisticated people. They tolerated Mussolini and his thugs so long as they provided jobs and made the trains run on time, but the majority of Italians never believed in him.

When your folk wisdom goes back two millennia and comprises the worldview of Roman legionaries, Gothic nomads, Byzantine clerics, Papal intriguers, Medici connivers, Carabinieri bravos, and Garibaldi patriots—not to mention the earlier influence of Greek colonizers and Phoenician traders—your personal attitudes become warm and rounded, like the stones in a fast-flowing river.

Although Italy is a constitutional republic with multiple parties, I’m pretty sure most of the people in office are some kind of socialist, and not a few are probably doctrinaire communists. But despite whatever private beliefs a man or woman in modern Italian politics might hold, I can’t see anyone seriously trying to move the country toward the sort of top-down, command-and-control economic system, with state control of all industry and commercial outlets, that the Soviets and the Cubans have tried and failed at, the Venezuelans are trying and failing at, and the Chinese once tried and have now abandoned in all but name. For one thing, the Italians seem to be pretty bad at paying and collecting taxes. For another, they aren’t much better at respecting and obeying government edicts and regulations. The country seems to have two kinds of law: the one we honor, and the one we follow. Everything else is negotiable.

It’s a system that works, more or less. And it’s one that no clique, or party, or national movement will ever co-opt in order to lead the average Italian into some kind of far-off utopia.

Consider the food supply. In the U.S., our food production and distribution system is a wonder of biology, chemistry, market reach, and logistics. For example, when was the last time you or anyone in your family seriously considered the growing season? You want apples? Oranges? Avocados? Do you wait for the local harvest? No, you go to the grocery store. If you’re not buying apples that were fresh picked from the harvest in Washington State in the autumn, you’re getting apples that were cleverly stored for months in the right atmosphere at the right temperature—or were fresh picked from the harvest in Chile, where the seasons are reversed. Our growing, packing, and preserving techniques mean there’s not a flavor or a taste you have to wait a minute or a month to enjoy.

The Italians can buy into this same bounty, of course. But when we visited the country, I also saw something uniquely Old World. From the train window as we traveled down the Po Valley, I could see small, obviously family-owned farms with one kind of crop growing in the fields, but also herb and vegetable gardens in the dooryard and fruit trees splayed out against every south-facing wall.

In Florence, we ate at a fine restaurant called Il Latini, where the dining room had hams, cheeses, and salamis hanging from the ceiling and shelves filled with jeroboams of wine. When I asked the waiter about these foods on display, he said they were from the farm which was owned by the family who ran the restaurant and that everything on the menu came from there. When your land has been overrun through the ages by the Vandals and Goths, the Austrians, the Spanish, the French, the Germans, and finally the Americans, you like to know—on a personal, familial, obligational basis—someone who has a wheat field and knows how to mill grain, a vineyard and knows how to press wine, and a pig sty and knows how to butcher and cure a hog. It’s just a matter of survival.

When I came to Berkeley in the 1970s, I became aware of “riot architecture.” That is, banks and stores along Shattuck and Telegraph avenues would have brick and cement fronts, and any windows would be narrow and high up. The free-floating protests of the 1960s had changed the way landlords and renters thought. But this style has nothing on the architecture we saw in Italy. In the oldest buildings, dating from the late Middle Ages, you don’t see any windows on the ground floor, and all those on upper stories have real, working shutters in solid wood.

In Rome, in the neighborhoods around the Piazza Navona, you see medieval remnants of the old Roman insulae: whole blocks turned into a single four- or five-story building. The outside, at street level, is all small shops—actually niches cut into the exterior wall—with no access to the building’s interior and protected at night with a roll-up door of steel slats.3 You enter the building itself through just one carriage-wide entry protected by a massive gate. The entire life of the building—which is usually cut up into several apartments, sometimes including a small hostel, a bed-and-breakfast, or an old convent occupying a number of rooms—centers on the interior courtyard and balconies. It’s a huge lifeboat where, at night or in times of social unrest, you can close that gate and shutters, roll down the shop doors, and survive for days or weeks at a time on your own well water and what’s in the larder.

In Florence, in the early ’90s, I got to talking with the old man who ran the cambio, the currency exchange, underneath the steps at the Uffizi Gallery. A proposal was in the air at the time for dividing Italy into separate countries north and south, and I asked how he felt about that. The man shrugged. “I’m a Tuscan,” he explained. Then he pointed up and down the narrow, cobblestone street. “In fact, this is my street. I would put a chain up at either end.”

You might chide the Italians for being backward, almost tribal, in this way. They have a traditional view of family, property, and personal obligations that is at odds with our modern, cosmopolitan, globalist perceptions and tendencies. And yet, for the most part, the Italians welcome visitors and will make accommodation for almost anyone who stops by—for in their long history they have seen almost everyone come through their country and either be assimilated or moved gently along.

I don’t know whether this is the worst form of primitivism or the most advanced form of sophistication. But I like it.4

1. My native language is English; I took six years of French in junior high and high school; and everyone in the Netherlands seems to speak English as a strong second language. Yet the Italians seemed to make a greater effort to make us feel at home, and they blossomed with what English they had if you just approached them with a shy smile and a cheerful Buon giorno!
       I remember an exchange I had with a security guard at the Sforza Palace in Milan. He had no English and I had no Italian, but I was intensely curious about the rows of square holes I could see on the inside face of the stone walls. Were they for ventilation? Openings to some kind of internal structure? With my pocket dictionary, much flipping back and forth, and a bit of pantomime on his part, we determined they were left over from original construction. Rather than erect a scaffolding, the stone masons just laid beams in the wall as the courses rose. When they were finished, they drew out the beams and left the holes.

2. I can’t find the exact quote now, so I’m paraphrasing here.

3. I saw these same roll-up door and window coverings on many individual houses in other cities. It may be nice to protect your home with an electronic alarm—but better to raise a shield that takes a battering ram or an acetylene torch to breach.

4. In fact, I borrowed this style of social organization—personal, familial, and obligational—as the eventual solution to some of our current problems in my novels of the near future, Coming of Age.