Monday, Sep. 13, 1999

Smart Genes?

By MICHAEL D. LEMONICK

The small, brown, furry creature inside a cage in Princeton University's molecular-biology department looks for all the world like an ordinary mouse. It sniffs around, climbs the bars, burrows into wood shavings on the floor, eats, eliminates, sleeps. But put the animal through its paces in a testing lab, and it quickly becomes evident that this mouse is anything but ordinary. One after another, it knocks off a variety of tasks designed to test a rodent's mental capacities--and almost invariably learns more quickly, remembers what it learns for a longer time and adapts to changes in its environment more flexibly than a normal mouse.

This is a supermouse, no doubt about it, though it didn't get its better brain by coming from another world. It was engineered by scientists at Princeton, M.I.T. and Washington University, who cleverly altered its DNA--or, more precisely, that of its genetic forebears--in ways that changed the signaling between neurons deep within its tiny cranium. The result, say its creators, is a strain of mouse (which they nicknamed "Doogie," after the precocious lead character of the old TV show Doogie Howser, M.D.) that is smarter than its dim-witted cousins. Not only that, the scientists wrote in last week's issue of the journal Nature, "our results suggest that the genetic enhancement of mental and cognitive attributes such as intelligence and memory in mammals is feasible."

Their audacious use of the I word triggered an avalanche of criticism from many of their colleagues, who called their conclusions unwarranted and farfetched. And it's easy to understand why. The idea that intelligence is rooted in the genes has long been an inflammatory notion--witness the charges of racism leveled at Richard Herrnstein and Charles Murray, authors of The Bell Curve, the controversial study of IQ and race. Beyond that, the very concept of intelligence is slippery. It involves many qualities--some of them elusive, like creativity; others more clear-cut, like the ability to solve problems. "This is a very important study," says Eric Kandel, of the Howard Hughes Medical Institute at Columbia University, but he goes on to sound a polite note of caution. "Intelligence involves many genes, many features," he adds. "There are many things that go into it."

Yet even if Doogie isn't the Einstein of the order Rodentia, as some headline writers have portrayed it, most psychologists and neurobiologists are convinced that its memory and learning ability have indeed been enhanced. That has important implications. It suggests that even though the gulf between mice and men is continent-wide, this sort of research may eventually lead to practical medical results for humans, such as therapies to treat learning and memory disorders, including Alzheimer's disease, a condition likely to afflict more and more people in an increasingly aging population. In fact, the Princeton scientists are talking to drug companies about commercializing their work.

And the research inevitably raises the possibility that healthy people will try to boost their performance or, even more likely, that of their children--a prospect that has bioethicists ruminating feverishly. (See following story.)

Therapeutic promise is only one key implication of the new research. More immediate, and for now more important, is that the work gives neurobiologists further evidence about what memory is and how it works--a mystery whose secrets have been slowly unfolding for decades.

One thing has become clear to scientists: memory is absolutely crucial to our consciousness. Says Janellen Huttenlocher, a professor of psychology at the University of Chicago: "There's almost nothing you do, from perception to thinking, that doesn't draw continuously on your memory."

It can't be otherwise, since there's really no such thing as the present. As you read this sentence, the sentence that went before is already a second or two in the past; the first line of this story went by minutes ago. Yet without a memory of what's been said, none of what you are now reading makes the slightest sense. The same is true for our lives as a whole. Memory provides personal context, a sense of self and a sense of familiarity with people and surroundings, a past and present and a frame for the future.

But even as psychologists and brain researchers have learned to appreciate memory's central role in our mental lives, they have come to realize that memory is not a single phenomenon. "We do not have a memory system in the brain," says James McGaugh, director of the Center for the Neurobiology of Learning and Memory at the University of California, Irvine. "We have memory systems, each playing a different role."

When everything is going right, these different systems work together seamlessly. If you're taking a bicycle ride, for example, the memory of how to operate the bike comes from one set of neurons; the memory of how to get from here to the other side of town comes from another; the nervous feeling you have left over from taking a bad spill last time out comes from still another. Yet you are never aware that your mental experience has been assembled, bit by bit, like some invisible edifice inside your brain.

And brain researchers might never have picked up on the fragmentary nature of memory without their studies of people whose memory has been damaged by illness or injury. The most celebrated such individual is H.M. In 1953, when he was 27, he underwent drastic brain surgery for severe epilepsy. The operation relieved his seizures, but removing parts of his brain's temporal lobes, including a structure called the hippocampus, destroyed his ability to form new memories. H.M., who is still alive, has a reasonably good short-term memory. Once introduced to a visitor, he will remember the person's name and other information while a conversation lasts. But if the visitor leaves and returns, H.M. has no memory whatsoever of having met the person. In fact, H.M. has no permanent memory of anything that happened after his surgery. As far as he's concerned, it's still 1953, and that old man looking back at him from the mirror bears only a passing resemblance to the young man he knows himself to be.

That sort of impairment has convinced scientists that the medial temporal lobe and hippocampus are key in transforming short-term memories into permanent ones, and also that permanent memories are stored somewhere else; otherwise, H.M. would have lost them too.

But a remarkable experiment performed in 1962 by Canadian psychologist Brenda Milner proved that H.M. can form new memories of a very specific sort. For many days running, she asked him to trace a design while looking in a mirror. As far as H.M. knew, the task was a brand-new one each time he confronted it. Yet as the days wore on, his performance improved. Some part of his brain was retaining a memory of an earlier practice session, a so-called implicit--rather than explicit, or consciously remembered--memory. People who suffer from Alzheimer's disease exhibit the same sort of behavior--and it's the medial temporal lobe that is first affected by this devastating disease.

In patients with Huntington's disease, it's the part of the brain called the basal ganglia that's destroyed. While these victims have perfectly intact explicit memory systems, they can't learn new motor skills. An Alzheimer's patient can learn to draw in a mirror but can't remember doing it; a Huntington's patient can't do it but can remember trying to learn. Yet another region of the brain, an almond-size knot of neural tissue known as the amygdala, seems to be crucial in forming and triggering the recall of a special subclass of memories that is tied to strong emotion, especially fear. The hippocampus allows us to remember having been afraid; the amygdala evidently calls up the goosebumps that go along with each such memory.

These are just some of the major divisions. Within the category of implicit (a.k.a. nondeclarative) memory, for example, lie the subcategories of associative memory--the phenomenon that famously led Pavlov's dogs to salivate at the sound of a bell, which they had learned to associate with food--and of habituation, in which we unconsciously file away the unchanging features of our environment so we can pay closer attention to whatever is new and different.

Within explicit, or declarative, memory, on the other hand, there are specific subsystems that handle shapes, textures, sounds, faces, names--even distinct systems to remember nouns vs. verbs. All of these different types of memory are ultimately stored in the cortex, the brain's deeply furrowed outer layer--a component dauntingly more complex than its counterparts in lesser species. Experts in brain imaging are only beginning to understand what goes where, and how the parts are reassembled into a coherent whole.

What seems to be a single memory is actually a complex construction. Think of a hammer, and your brain hurriedly retrieves the tool's name, its appearance, its function, its heft and the sound of its clang, each extracted from a different region of the brain. Fail to connect a person's name with his or her face, and you experience the breakdown of that assembly process that many of us begin to experience in our 20s--and that becomes downright worrisome when we reach our 50s.

It was this weakening of memory and the parallel loss of ability to learn new things easily that led Princeton molecular biologist Joe Tsien to the experiments reported last week. "This age-dependent loss of function," he says, "appears in many animals, and it begins with the onset of sexual maturity."

What's happening when the brain forms memories--and what fails with aging, injury and disease--involves a phenomenon known as "plasticity." It's obvious that something in the brain changes as we learn and remember new things, but it's equally obvious that the organ doesn't change its overall structure or grow new nerve cells wholesale. Instead, it's the connections between nerve cells--and particularly the strength of those connections--that are altered by experience. Hear a word over and over, and the repeated firing of certain cells in a certain order makes it easier to repeat the firing pattern later on. It is the pattern that represents each specific memory.

How this reinforcement happens was a puzzle for much of this century, until 1949, when Canadian psychologist Donald Hebb came up with a plausible explanation: since most memories consist of a group of disparate elements coming together--the hammer again--something more must be happening than just an electrical signal in one brain cell setting off a response in another. Something in the brain must be acting as a "coincidence detector," taking biochemical note that two nerve cells are firing simultaneously and coordinating two different sets of information.
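Hebb's idea is concrete enough to sketch in a few lines of code. The toy Python below is an illustration of the principle only--the function names and the learning rate are invented, and nothing here comes from the Nature paper. It strengthens a synaptic weight only when the presynaptic and postsynaptic cells fire in the same time step:

```python
# A toy illustration of Hebb's rule (not code from the study): a synapse
# strengthens only when the presynaptic and postsynaptic cells fire together.
def hebbian_update(weight, pre_fired, post_fired, rate=0.1):
    """Return the new synaptic weight after one time step."""
    if pre_fired and post_fired:          # the coincidence Hebb postulated
        weight += rate * (1.0 - weight)   # strengthen, saturating at 1.0
    return weight

w = 0.2
for _ in range(5):
    w = hebbian_update(w, pre_fired=True, post_fired=True)
print(round(w, 3))  # repeated co-firing drives the weight up: 0.528
```

Run the loop with either cell silent and the weight never moves; that all-or-nothing dependence on co-activity is exactly the coincidence detection Hebb had in mind.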

Over the past decade or so, neurophysiologists have been homing in on a particular molecule they believe could well be at least one version of Hebb's coincidence detector. Called the N-methyl-D-aspartate, or NMDA, receptor, it sits at the ends of the dendrites, the branchlike projections that protrude from nerve and brain cells, waiting to respond to incoming signals. Like other receptors, it reacts to a chemical cue--in the case of learning and memory formation, the neurotransmitter glutamate--emitted by the axon of a neighboring cell.

But unlike other receptors, NMDA doesn't find this signal sufficient. It must also receive an electrical discharge from its own cell. Only when both cells are talking at once does the NMDA receptor turn on. It then permits calcium ions to flow into the host cell, which somehow--no one knows the details yet--makes the cell easier to turn on next time around. This phenomenon, known as long-term potentiation, is believed to be the essence of one type of memory formation.
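In code, the receptor's logic reads like an AND gate over its two inputs. The sketch below is a schematic assumption for illustration--the names and numbers are invented, and real potentiation is far more intricate--but it captures the two-condition rule: calcium flows only when both signals arrive, and only those coincident events strengthen the synapse.

```python
# A schematic model (an illustrative assumption, not the researchers' code)
# of the NMDA receptor's two-condition logic: it opens only when glutamate
# arrives from the neighboring cell AND the host cell is itself active.
def nmda_gate(glutamate_bound, host_cell_depolarized):
    """The receptor behaves like an AND gate over the two signals."""
    return glutamate_bound and host_cell_depolarized

def long_term_potentiation(strength, calcium_in, rate=0.2):
    """Calcium influx nudges the synapse toward a stronger state."""
    return strength + rate * (1.0 - strength) if calcium_in else strength

strength = 0.3
for pre, post in [(True, False), (False, True), (True, True), (True, True)]:
    calcium = nmda_gate(pre, post)   # ions flow only on coincidence
    strength = long_term_potentiation(strength, calcium)
print(round(strength, 3))  # only the two coincident events count: 0.552
```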

NMDA's role in learning and memory isn't just theoretical. It has been known for years that blocking NMDA receptors with drugs, or knocking them out completely at the genetic level, makes animals learning-disabled, even amnesiac. Administering drugs that stimulate the receptor, conversely, tends to improve memory.

Tsien and his team took the next logical step. "A decade ago," says Stanford neuropsychiatrist Dr. Robert Malenka, "if you had asked, Would it be possible to manipulate a higher cognitive function like learning and memory by changing a single molecule? most scientists would have looked at you as if you were crazy."

Yet that's just what Tsien & Co. did, focusing not just on the NMDA receptor but on a particular component of it. Called NR2B, it's very active in young animals (which happen to be good at learning), less active in adults (who aren't), and is found mostly in the forebrain and hippocampus (where explicit, long-term memories are formed). The researchers spliced the gene that creates NR2B into the DNA of ordinary mouse embryos to create the strain they called Doogie. Then they ran the mice through a series of standardized tests--sort of a rodent SAT. In one, the mice were given a paw shock while in a box; after a few rounds, they showed signs of fear from just being in the box, having learned that a shock was likely to follow. They learned in similar fashion to be afraid when a bell sounded--a variation on Pavlov's dog experiments. In each case, the Doogies learned faster than normal mice. The same happened with a novel-object test: after becoming familiar with two plastic toys, the Doogies would show special interest when one was replaced; normal mice tended to be equally curious about a familiar object and a new one.
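One way to picture why a more responsive coincidence detector speeds up such tests is a classic conditioning curve, in which each box-shock pairing nudges the learned association toward its maximum. The back-of-the-envelope simulation below uses a Rescorla-Wagner-style update with made-up learning rates standing in for normal and enhanced receptors; it is a cartoon of faster learning, not an analysis from the study.

```python
# Toy Rescorla-Wagner conditioning: the association V climbs toward its
# maximum (1.0) with each box-shock pairing; a higher learning rate stands
# in for the enhanced NR2B receptor. Rates are invented for illustration.
def condition(trials, rate, v=0.0):
    history = []
    for _ in range(trials):
        v += rate * (1.0 - v)   # the prediction error shrinks as V grows
        history.append(round(v, 2))
    return history

print("normal mouse:", condition(trials=5, rate=0.2))  # 0.2 ... 0.67
print("Doogie mouse:", condition(trials=5, rate=0.5))  # 0.5 ... 0.97
# The "Doogie" curve reaches a strong association in fewer pairings.
```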

The altered mice grow up looking and acting just like ordinary mice, with no evidence of seizures or convulsions, according to Tsien. That's critical. The NMDA receptor shows up throughout the brain, and though calcium is crucial to learning and memory, too much of it can lead to cell death. That's what happens during a stroke: when brain cells are deprived of oxygen, they release huge amounts of glutamate, which overstimulates nearby NMDA receptors and kills their host cells. Nature may have designed NR2B-based receptors to taper off in adult brains for a reason. Some scientists fear that the altered mice may be prone to strokes. "You might worry about what happens when these animals get old," says neuroscientist Larry Squire of the University of California, San Diego.

Premature cell death isn't the only possible complication. Stanford's Robert Malenka has shown that the NMDA receptor is involved in sensitizing the brain to drugs like cocaine, heroin and amphetamines, and others are investigating its role in triggering chronic pain--two more indications that it may not be wise to try to fool Mother Nature.

It will be a while before such dangers arise, though, and--as cancer researchers have discovered all too often--it isn't even certain that what works in mice will work in people. Tsien and his colleagues believe it's not unreasonable to think it will. "The NMDA receptor in humans is nearly identical to the receptor in mice, rats, cats and other animals," he says. "We believe it's highly likely that it plays a similar role in humans."

Even so, Tsien has no plan to try tinkering with human genes--nor could he under current ethical guidelines. Drugs that can boost the action of the NR2B molecule, however, are not only ethical but already being contemplated. "Princeton has applied for a use patent for this gene," says Tsien, acknowledging his contacts with drugmakers, "although we wouldn't try to patent the gene itself."

There remains the nagging question of what it means precisely to say that Tsien & Co. have created a smarter mouse. "What is it that is being tested?" asks Gerald Fischbach, director of the National Institute of Neurological Disorders and Stroke. "That's the problem with mouse behavior. It's not clear that we're talking about the same thing when we talk about learning in a rodent and learning in a human."

Tsien concedes that using the emotive word intelligence in the paper was sure to generate controversy. "We really don't mean to suggest," he explains, "that human intelligence is the same as animal intelligence. But I would argue that problem solving is clearly part of intelligence, and learning and memory are crucial to problem solving. And these mice are better learners, with better memories, than other mice."

But Tsien doesn't claim that he and his colleagues have found the unique genetic key to intelligence or even to memory. "It's likely that brain plasticity involves many molecules," he says. "This is just one of them." On the other hand, he asserts--and his critics would not disagree--that "intelligence does arise out of biology, at least in part." How much remains the great question. Whatever the answer, little Doogie surely represents an important step in unraveling what role our genes play in constructing not just memory but all the other attributes of the human mind. And clearly it won't be the last.

--With reporting by David Bjerklie and Alice Park/New York, J. Madeleine Nash/Chicago and Dick Thompson/Washington
