Sunday, December 10, 2017

Learning as a Form of Evolution

Neuron cells

I’ve been making some existential comparisons lately—Life Like a Sword and Language as a Map—so I thought I would round out the sequence of metaphors by looking at the way we form our knowledge.

The popular conception is that we acquire new knowledge the way a coin or stamp collector makes a new acquisition: pick up the fact, place it in our memory box, recall and retrieve as necessary. Our head then becomes a database of acquired facts, like a phone contact list or a Rolodex of business cards. Take one and slot it in. Modify it with an overlying note if any of the information happens to change. And remove the card—that is, commit ourselves to forgetting the subject heading and its content—when the information is shown to be wrong or is no longer of use.

But is that really how it works, all neat and tidy, like filing little pasteboard squares?

Actually, our brains are representational devices. We interpret our sensory input and apply it to the process of assembling a model or representation of what we know and think about the world outside our heads. We are model makers, map makers, myth makers, and story tellers. What we learn goes into a vast web or network or congeries of impressions, summaries, conclusions, and projections that collectively represent the world as we know it. We are constantly testing and expanding our worldview. We are always asking, consciously or not, “Is this the way the world really works?”

We are constantly—although perhaps unconsciously—looking for comparisons with and similarities to the things we already know. When we get a new fact or form a new impression, we test it against our worldview, the structure of our model of the world. We ask, “How does this fit in?” And if the fact or impression conflicts with what we know, our brain goes through a small crisis, a scramble for immediate understanding. We test the new knowledge against its background: “Where did I get that idea?” “What was its source?” and “Do I trust it?” We also experience a small—or sometimes large—tremor in our worldview: “Why do I actually think that?” “Could I have been wrong?” and “Is this new knowledge a better way of seeing things?”

The habit of referring back to our internal model runs deep. For example, when learning a new language, such as French from the perspective of an English speaker, we leverage the grammar and the words we already know and understand. When we learn a new French word like chien, we don’t immediately associate it with a four-footed pet of a certain size range, disposition, coloring, and similar physical details. Instead, we link it to the English word dog and then concatenate onto chien all the past impressions, learned attributes, and personal feelings we already associate with the concept in English. In the same way, we adapt French grammar and syntax to our known English way of speaking, and then we extend our knowledge with new concepts, such as nouns for objects we normally think of as inanimate and sexless now acquiring a grammatical gender. By learning a new language, we expand our general knowledge of both our own language and its place in the way the rest of the world communicates.

In this sense, each piece of new knowledge—both the facts, impressions, and understandings that we acquire by the happenstance of general reading and daily experience, and those we acquire by conscious study such as a new language, or the history of an unfamiliar place and people, or a closed curriculum like mathematics, physics, and chemistry—each discovery is making a series of minute changes in the brain’s internal environment. And the effect that these new facts and impressions have on our existing ideas—the current model or myth that is running in our heads—is like an organism’s response to accidental modification of a protein-coding gene: the new knowledge and the resulting change in our worldview either enable us to live more fully, completely, successfully, and confidently in the environment that we actually inhabit, or the changed worldview contributes to our failure to compete and thrive by causing us to interpret wrongly, make mistakes, and suffer feelings of doubt, denial, and depression.

But some facts or interpretations—perhaps most of them—don’t cause an immediate change in our relationship with the outside world. We can carry a bit of false data, a misremembered fact, or an untested impression in our heads for months or years at a time without it affecting our personal relationships, our social standing, or the decisions we make. And then, one day, we will learn something else that will contradict the comfortable model and bring on the crisis. In the same way, some mutations to a gene have neither a helpful nor a harmful effect in the current environment. The modified gene and the changed protein it makes get passed down from generation to generation without challenging the fit of the organism to its environment. But then the environment changes, and either the organism is better able to compete under the new conditions, or the changed environment exposes an inherent weakness, and the organism either thrives or dies. Sometimes the environment doesn’t have to change, but another mutation enhances the effect of that earlier genetic change, and the organism either excels against other members of its species or fails to compete.

As an example of the mutability of our worldview, both as individuals and as a collection of academics building a body of scientific or historical interpretations, consider the advance of human knowledge in the field of genetics.

At first, back in the early 1950s and the world of Watson and Crick, we valued the newly deciphered DNA molecule and its messenger RNA strands solely for the proteins they made inside the cell body. Genetic scientists held to what was then called the “central dogma” of molecular biology: DNA transcribes to RNA, which translates to proteins. Geneticists could point to the start and stop codons associated with the protein-coding genes. By finding and fishing out these markers, they could pull out sequences of DNA, copy them over to RNA, and read off the three-letter codons calling for each of the twenty possible amino acids in the developing protein string. These twenty amino acids are the universal building blocks for all of an organism’s complex proteins—in fact, for all life on Earth.
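For readers who like to see the machinery, here is a rough sketch of that one-way flow in Python. The toy gene and the deliberately truncated codon table (the real one has sixty-four entries) are inventions for illustration, not any real sequence.

```python
# A minimal sketch of the central dogma: DNA -> RNA -> protein.
# The codon table is a small, hypothetical subset of the real
# 64-entry table, just enough to translate the toy gene below.

CODON_TABLE = {
    "AUG": "Met",   # start codon, also codes for methionine
    "GCC": "Ala",
    "UUU": "Phe",
    "UAA": "STOP",
}

def transcribe(coding_strand: str) -> str:
    """Copy the DNA coding strand into messenger RNA (T becomes U)."""
    return coding_strand.replace("T", "U")

def translate(mrna: str) -> list[str]:
    """Read the mRNA three bases at a time until a stop codon appears."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

gene = "ATGGCCTTTTAA"                  # toy protein-coding gene
print(translate(transcribe(gene)))     # ['Met', 'Ala', 'Phe']
```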

This central dogma held until about the year 2000, when the Human Genome Project and Celera Genomics published draft sequences of the entire three billion base pairs in the twenty-three human chromosomes. Analyzing the code, geneticists then discovered that only a percent or two of this DNA was actually used for making proteins.1 So what was all the rest doing? Many scientists figured that this genetic material was “junk DNA,” old code left over from our prior evolution, from genes that coded for proteins our evolutionary ancestors might have needed as fish or reptiles, but with no meaning now and so abandoned to gradually mutate into genetic mush.2

The new facts about the scarcity of protein-coding genes forced a reevaluation—a modification of the scientists’ mental model—of the nature of the genome. The scientific community remained with either the “junk” hypothesis or a condition of wonder until a new bit of knowledge arrived from an unexpected quarter. Botanists working with petunias had discovered that introducing extra copies of a pigment gene, or short strands of the corresponding RNA, into a plant could unexpectedly change the color of its flowers. They hypothesized that the introduced material either promoted a gene that had previously been silent or blocked a gene that had previously been expressed. The blocking effect turned out to be the general one, and when short RNA strands were later shown to silence matching genes in animals as well, the effect was dubbed “RNA interference,” or RNAi.

Soon, the genetic scientists were studying a class of short RNA strands, a couple of dozen bases long, that they called “microRNAs,” or miRNAs. They began to see that these bits of RNA help tune which genes are expressed, and how strongly, in different cell types. And then Eric Davidson at Caltech, working with sea urchin embryos, mapped out the network of regulatory genes in the undifferentiated embryonic cell: genes whose products, together with these small RNAs, switch other regulatory genes on and off in cascades rather than building the cell’s structural proteins. Depending on a cell’s position in the ball of nearly identical embryonic cells that develops shortly after fertilization, the pathway through this regulatory network changes. Some of these cells, through the proteins they eventually produce, become the internal gut, some the epidermal surface, and some the spines. By comparing the sea urchin with organisms far removed from it, the Davidson laboratory could trace out similar networks—which suggests they operate throughout the animal kingdom, and in humans today. This regulatory network is the timing and assembly manual by which some embryonic cells in our bodies become liver cells, some brain cells, and some bone cells.

This discovery addressed a question the old model had left hanging. If the entire genome is for producing proteins, then why doesn’t every cell in the human body make all the proteins required by all of the other cells? Why don’t neurons pump out liver enzymes, or bone cells create and then, presumably, ignore neurotransmitters? Davidson’s work suggested that, while a small fraction of the human genome makes proteins, functioning as the parts list of the human body, much of the rest serves as the sequential assembly manual.

But the story didn’t end there. Other geneticists noted that simple chemical groups called methyl groups (CH3) often became attached to the promoter regions of genes—usually at sites where a cytosine base is followed by a guanine—and inhibited the gene’s expression. At first they considered this an environmental accident, randomly closing off gene function. But they also noted that an enzyme in the nucleus called a “methyltransferase” works to copy these methyl groups onto newly replicated DNA strands during cell division. If methylation were just an accident, why would there be a mechanism to preserve it in daughter cells?

From this question, the scientific community began studying the methyl groups attached to DNA and learned that they are the cell’s way of ensuring that brain cells don’t start producing liver enzymes and bone cells don’t start making neurotransmitters. Once a cell has differentiated into a certain type of tissue, methylation locks out its other possibilities.3
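As a cartoon of that locking mechanism, here is a short Python sketch. It scans a made-up promoter for CpG sites (a cytosine followed by a guanine) and treats the gene as silenced once most of those sites carry a methyl group; the sequence, the one-half threshold, and the function names are all inventions for illustration, not a model of any real gene.

```python
# A toy model of promoter methylation: the gene counts as silenced
# when at least half of the CpG sites in its promoter are methylated.
# Sequence, threshold, and names are made up for illustration.

def cpg_sites(promoter: str) -> list[int]:
    """Return positions where a cytosine is immediately followed by a guanine."""
    return [i for i in range(len(promoter) - 1) if promoter[i:i + 2] == "CG"]

def is_silenced(promoter: str, methylated: set[int], threshold: float = 0.5) -> bool:
    """Treat the gene as silenced if the methylated fraction of CpG sites meets the threshold."""
    sites = cpg_sites(promoter)
    if not sites:
        return False
    return sum(1 for s in sites if s in methylated) / len(sites) >= threshold

promoter = "TTCGACGGTACGTTCG"              # toy promoter with four CpG sites
print(cpg_sites(promoter))                 # [2, 5, 10, 14]
print(is_silenced(promoter, {2, 5, 10}))   # True: three of the four sites are methylated
```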

So the community of molecular biologists had to work gradually, discovery by discovery, to develop and refine its model of human genetics: from the central dogma of protein production as the sole purpose of DNA, to a whole new use of RNA in differentiating cell types, to the once-“accidental” methyl groups that lock in that differentiation.

Every science goes through such an evolution and refinement of knowledge, discarding old ideas, patching in new discoveries, building, tearing down, and rebuilding the model, each time coming closer to what’s really going on in the world. In the same way, every human being learns certain skills and truths, discards old notions, patches in new understandings, building and tearing down his or her worldview, until the person attains something approaching … wisdom.

1. This did not leave them with too few genes to account for all of the body’s proteins, because they also discovered that many genes undergo alternative splicing. The scientists already knew that gene sequences contain “exons,” the stretches that carry the code for the protein, interspersed with “introns,” non-coding intrusions into that pattern. What they learned from the human genome was that a single gene’s exons can be combined in different ways to create variations in a family of proteins. Nature is more wonderful than we can imagine.
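To picture how one parts-list entry can yield several parts, here is a small Python sketch of alternative splicing; the exon names, their contents, and the three splice patterns are all invented for illustration.

```python
# A toy picture of alternative splicing: one gene's exons are combined
# in different ways to yield a family of related proteins.
# Exon contents and splice patterns are hypothetical.

exons = {
    "E1": "Met-Ala",
    "E2": "Gly-Ser",
    "E3": "Leu-Leu",
    "E4": "Trp",
}

splice_variants = {
    "isoform_A": ["E1", "E2", "E3", "E4"],   # keep every exon
    "isoform_B": ["E1", "E3", "E4"],         # skip exon 2
    "isoform_C": ["E1", "E2", "E4"],         # skip exon 3
}

for name, pattern in splice_variants.items():
    protein = "-".join(exons[e] for e in pattern)
    print(f"{name}: {protein}")
# isoform_A: Met-Ala-Gly-Ser-Leu-Leu-Trp
# isoform_B: Met-Ala-Leu-Leu-Trp
# isoform_C: Met-Ala-Gly-Ser-Trp
```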

2. Not everyone agreed with this assessment. The human body spends too much time and energy replicating the entire genome each time a cell divides for us to be carting around that much junk. After all, the phosphate groups (PO4) that form the backbone of each strand of the DNA double helix are also the working part of the cell’s energy molecule, adenosine triphosphate. And phosphorus is not so common, either in nature or in the body, that we can afford to hoard it and squander its potential on junk DNA.

3. Methylation would also explain why the early method of reverting a body cell to a kind of embryonic cell, by starving it until the cell almost dies, worked so poorly. This was how the scientists in Scotland cloned Dolly the Sheep, and to achieve the one viable Dolly they had to sacrifice hundreds of attempted cloned embryos and raise not a few genetic monsters. The starvation method must have essentially stripped out the methylation as the cell approached death, reverting the genome to its undifferentiated state.
