Wednesday, September 30, 2009

No myths today, just fluorescent songbirds...

Songbirds have long been used to study vocal learning because, like humans, they learn to imitate the sounds they hear from their parents in order to communicate vocally with others.  In our case, we learn to talk; in the birds' case, they learn to sing.  Obviously, having an animal model for vocal learning is a powerful tool for understanding what happens in the brain that can lead to problems such as chronic stuttering, or the difficulties with vocal learning associated with autism and other autism spectrum disorders.  The problem with using songbirds to study these disorders is that the disorders are absent, or at least extremely rare, in songbirds.

Animal models for lots of other human diseases have been created by genetically engineering mice, or fish, or frogs, etc.  By adding or removing certain genes that are thought to be important for a particular disease, scientists can test how important that gene is, or they can find out what part of the gene may be involved.  They can also use these "transgenic" animals to test other factors that may contribute to the disease if the disease is likely caused by both genetic and environmental factors (e.g. Parkinson's disease).  Making transgenic animals usually involves injecting DNA into the nucleus of a fertilized egg, or injecting stem cells into the embryo when it is still a ball of cells.  This is obviously difficult to do in songbirds, whose eggs have a hard outer shell: you can make a small hole in it, but larger cracks tend to be fatal to the developing bird.  Also, previous attempts to introduce DNA have run into unexplained difficulties.

Now, however, it appears that those difficulties have been overcome through the use of a virus (known as a retrovirus) that has the capability to introduce genes into the cells it infects.  This technique was first used by Carlos Lois and Benjamin Scott at MIT to make transgenic quails (close relatives of the chicken).  Now, Fernando Nottebohm and Robert Agate at Rockefeller University have figured out how to make this technique work in zebra finches.  Here's the link to the blurb about this on Science Daily.
(Songbirds are awesome.  And so is Fernando Nottebohm.  The full article will appear in an upcoming edition of the Proceedings of the National Academy of Sciences; I will link to it when it becomes freely available.)

Tuesday, September 29, 2009

Humans aren't the only ones who pass on knowledge...

Okay, so we've known for a while that other animals are capable of social learning, that is, passing on non-genetic information to other individuals.  Like teaching your kids to fish, or how to build a hut.  This type of learning is responsible for all of the great advancements of human culture and society, and it has grown from teaching your children to fish into the widespread dissemination of information on all topics (from fishing to physics) through our network of public schools, libraries, universities, and, yes, of course, television and the internet.  For a long time, we thought we were special because we could pass on information from generation to generation, building upon what the previous generations taught us.  As it turns out, lots of animals can do this... even the lowly fruit fly.  Check it out: http://www.sciencedaily.com/releases/2009/09/090916103428.htm

Monday, September 28, 2009

Myth: Neuroscience is hard.

I should have started off with this one, and I'm sure it can be debated extensively.  And, yes, actually researching neuroscience and/or practicing neurology can be incredibly difficult (which is why careers in these fields require years of hard work and extensive study).  BUT, a basic understanding of the brain and the rest of the nervous system is actually easily attainable, and not all that complicated.
It's obvious that there is some sense of mysticism associated with neuroscience in our culture.  For example, it's amazing how predictable the response is when someone asks me, "What are you getting your PhD in?" and I reply, "Neuroscience."  Almost immediately, their eyes get wide and they say "Wow!", as if I just revealed that I taught David Copperfield every magic trick he knows.  And how many times have you heard someone say "It's not brain surgery" when describing something that's hard, but not so unbelievably difficult as to be neuroscience related?  Apparently neuroscience is rivaled only by "rocket science", compared to which many other things are also not so difficult.
And part of this is our own fault, due to our fascination with big or obscure words that make us sound smart (like neurosurgery, neuroendocrinology, meningioma, cerebrospinal fluid, amygdalar lesions, excitatory post-synaptic potentials, etc., etc.).
I do appreciate that this sense of awe also comes from a shared interest we all have in the mind and in understanding how things like consciousness and memory work.  I am just as caught up in that awe whenever I learn of all the cool and interesting things the nervous system does, which is why I chose to become a neuroscientist in the first place.  But just because the mind (whose physical substrate is the brain) is so amazing, and still so remarkably uncharted and undiscovered, doesn't mean that we should be intimidated by what is already known about the nervous system, which can be presented in such easy-to-understand terms that there are even websites out there to teach neuroscience to kids.  And if kids can understand it, then I'm pretty confident the rest of us can too.  At least that's my hope in blogging about it...

Sunday, September 27, 2009

Can evolution go backwards?

A big question in evolutionary biology for some time has been whether or not a gene can revert to an ancestral state if the (environmental) pressures that selected for the initial changes to the gene were removed and the ancestral condition (or gene function) was again favored.  It now appears that, at the genetic level, things can't go back to the way they were... at least not in the case of the gene that codes for the glucocorticoid receptor (glucocorticoids, a.k.a. "stress hormones", got a brief mention here a few days back).
In a new article in the journal Nature, researchers have shown how the glucocorticoid receptor evolved from a protein that was likely responsive to another hormone known as aldosterone.  The group identified 7 key mutations that were responsible for losing this sensitivity to aldosterone, but when they artificially manipulated the genetic sequence to reverse these mutations, what they got was a non-functional gene (coding for a non-functioning protein).  To figure out why the reverted gene didn't have the predicted function, the team, headed by Joseph Thornton of the University of Oregon, further analyzed the sequence and discovered 5 other mutations that were not directly involved in the function of the glucocorticoid receptor, but were apparently important enough to prevent a reversion to the ancestral aldosterone/glucocorticoid-sensitive protein.  You can find a more detailed synopsis at Science Daily, from which I will quote:
"Suppose you're redecorating your bedroom -- first you move the bed, then you put the dresser where the bed used to be," Thornton said. "If you decide you want to move the bed back, you can't do it unless you get that dresser out of the way first. The restrictive mutations in the GR prevented evolutionary reversal in the same way."
So does this mean it is impossible for evolution to go backwards (to re-evolve the same gene or protein that was lost)?  Well, at the very least, it is so improbable (on the order of billions if not trillions to one odds) that it is, for all practical purposes, impossible, though I suppose not ultimately so.  The reality is that, if an ancestral condition is once again favored, it is much more likely that evolution will find some other way for the species to get what it needs to survive and reproduce... or else that species will go extinct.  To clarify, in the case of the glucocorticoid receptor, in the time it would take for those 12 mutations to be reversed, it is highly likely that OTHER mutations will arise that could be selected for, or at least not selected against.  These other mutations could either provide an alternative solution to the problem, or prevent the gene from ever getting back to the way it was.  This does NOT mean that the glucocorticoid receptor can't evolve into a protein that again interacts with glucocorticoids AND aldosterone; if that condition is selected for again, it will likely be a different set of mutations that arise and are selected for to confer the functionality.  But if no workaround can be found, and aldosterone sensitivity is correlated with increased fitness (more reproduction), then those organisms will eventually die out as they are selected against.
To extend the rearranging-your-furniture analogy, say you wanted the bed to be along the wall where the door to the room is located.  In order to put the bed where you wanted it, you would have to block the door with the bed, thus removing the function of the door as an entry/exit to the room.  Now, if you really want to keep things this way, you have to find another way in and out of the room.  Perhaps you decide to use the window as a door from now on (assuming your room is on the first floor), or maybe you knock out a segment of another wall and put in another door.  You didn't reverse the path of events that got you here (i.e. move the bed back to where it was and use the old door); you found a new way to achieve the same function: being able to enter and leave the room.  Nature is replete with examples of organisms evolving new and different ways to accomplish the same, or similar, functions.  This is called convergent evolution, and it usually happens when two or more different species evolve similar parts to accomplish a function that they both need to survive and reproduce (e.g. bats and birds both evolved wings and flight, though by different means).  It is not unreasonable to hypothesize that, should selective pressures change, a single species could revert to a similar body plan or set of proteins to accomplish an "ancestral" function in a similar fashion.  Thus, it may still be possible for evolution to "go backwards", but if it ever does, it won't take the same path to get there, at least not at the level of the genes and the proteins.

Thursday, September 24, 2009

Lightning Bolts in the Brain

So, it's probably not the most prevalent neuroscience myth out there, but I know a couple of people who get really irked when they see the "firing" of neurons portrayed in movies and TV. Some really egregious examples can be found in the openings of the Spider-Man movies and in the movie Deep Blue Sea, from which I have a couple of frames below, where the yellow arrow is pointing to an easily observable (well, if my image quality didn't suck) lightning bolt extending between two very distant processes as the neuron "fires".



So what's wrong with this picture? Well, neuronal signaling does involve electrical signals, but it is electrochemical signaling, and the transmission of signals between cells is the chemical portion of the electrochemical.  Also, synapses, the spaces between brain cells where the electrochemical signals are relayed from one cell to the next, are really, really small.  This depiction makes it seem like the gap between cells is huge when, in actuality, you couldn't fit a human hair through a synapse.  (The average diameter of a human hair is around 1/10th of a millimeter.  Though the distance across a synapse is likely variable depending on the types of neurons, the part of the nervous system, or the species in which you are looking, the synaptic space is usually on the order of tens of nanometers, with an average of about 50 nanometers, which is roughly 2000 times thinner than a human hair.)  Synapses are so slim, in fact, that you can't even see them at the highest possible magnification under a light microscope.
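(If you want to sanity-check that "2000 times thinner" figure, here's a quick back-of-the-envelope calculation in Python; the numbers are just the rough averages I quoted above, not precise measurements.)

```python
# Rough scale comparison: human hair vs. synaptic cleft
# (values are the approximate averages quoted above)
hair_diameter_nm = 0.1e-3 * 1e9   # 0.1 mm expressed in nanometers = 100,000 nm
synapse_width_nm = 50             # typical synaptic cleft width, in nanometers

ratio = hair_diameter_nm / synapse_width_nm
print(f"A human hair is about {ratio:,.0f} times wider than a synaptic cleft")
# -> A human hair is about 2,000 times wider than a synaptic cleft
```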
As for understanding how signals are transmitted across a synapse, we have to talk a little bit about the cells that are doing the communicating.  Neurons (or "nerve cells") are specialized cells that are found throughout the brain and the rest of the nervous system. They have a cell body and a nucleus like all of the other cells in your body, but they also have processes that extend out from the cell body like branches from a tree trunk, and these processes allow the cells to communicate (via electrochemical signaling) across great distances (well, great distances for a cell anyway... in humans, some peripheral neurons can extend to be over 1 meter long). Ultimately, all this communication gives rise to sensation, perception, thought, movement, and pretty much all of the main functions we associate with the nervous system. And, for the most part (though this is obviously oversimplifying), the really long branches of the cells (called axons) act like wires carrying electricity. They even have an insulating covering called myelin that wraps around the "wire" like the plastic that covers and insulates the wires running in your house or the power cord to your computer. An electrical signal is propagated in a neuron from the cell body (where the nucleus is housed) down the axon, until it reaches the end of the axon, called the terminal (like a bus terminal, this is the "end of the line"), where it needs to cross the synapse and activate an electrical signal in the next cell in the circuit.

If we were dealing with high-voltage electricity, like actual lightning, or the spark plugs in your car, the energy could simply jump across the gap as a spark or small lightning bolt... but the voltages involved in a firing neuron are on the order of 30-70 millivolts (where 1 millivolt is 1/1000th of a volt). By comparison, the electricity from a common household outlet is around 120 volts (or about 220 volts in Europe), and a bolt of lightning can discharge up to hundreds of millions of volts (of course, how much current is flowing is a factor as well, but we don't need to go into that right now). Despite the low voltages in "firing" neurons, the cells have found a way to transmit the signal across the gap (the synapse), and they do this by converting the electrical signal into a chemical signal carried by molecules we call neurotransmitters. When the change in voltage across the membrane of the axon reaches the terminal, it opens voltage-gated channels (like opening the hatches on a submarine) and calcium ions flood into the cell. The calcium then causes a chain reaction that ultimately causes molecules of neurotransmitter (which can be anything from amino acids to gases like nitric oxide) to be released into the synapse. These chemicals then act on receptors on the post-synaptic cell to open its own channels (more submarine hatches) that let positively charged ions (usually sodium ions) flow into the cell, thus changing the membrane potential (the voltage difference) and opening more channels down the line, ultimately creating an electrical signal that flows down the axon like current in a wire. The process is repeated from neuron to neuron until the signal reaches its ultimate target or is inhibited by something else.  Alas, however, there are never any lightning bolts, not even a little spark.
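To put those voltages side by side, and to give a cartoon version of the "depolarize past threshold, then release neurotransmitter" sequence I just described, here's a little toy simulation in Python. To be clear, the resting potential, threshold, and input values are just typical textbook ballpark figures, and this "neuron" is a bare-bones leaky integrator I made up for illustration, not a model of any real cell.

```python
# Toy illustration of the voltage scales and the "depolarize past threshold,
# then release neurotransmitter" idea described above. All values are
# textbook ballpark figures, not measurements of any real cell.

VOLT_SCALES = {
    "firing neuron (upper end, ~70 mV)": 0.07,   # volts
    "US wall outlet": 120.0,                     # volts
    "lightning bolt": 1e8,                       # roughly hundreds of millions of volts
}

RESTING_MV = -70.0    # typical resting membrane potential, in millivolts
THRESHOLD_MV = -55.0  # typical threshold for triggering an action potential
LEAK = 0.9            # fraction of depolarization retained each step (toy leak)

def run_neuron(inputs_mv):
    """Accumulate small depolarizations; 'fire' (and release neurotransmitter)
    once the membrane potential crosses threshold, then reset to rest."""
    v = RESTING_MV
    for step, inp in enumerate(inputs_mv):
        # decay back toward rest, then add the incoming depolarization
        v = RESTING_MV + LEAK * (v - RESTING_MV) + inp
        if v >= THRESHOLD_MV:
            print(f"step {step}: {v:.1f} mV -> action potential! "
                  "Ca2+ floods the terminal, neurotransmitter is released.")
            v = RESTING_MV  # back to rest after the spike
        else:
            print(f"step {step}: {v:.1f} mV (below threshold, nothing released)")

for name, volts in VOLT_SCALES.items():
    print(f"{name:>35s}: {volts:g} V")

run_neuron([4.0, 5.0, 3.0, 6.0])  # a short train of small excitatory inputs, in mV
```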
To help visualize the terminal, synapse, and neurotransmitter release, here's an image from brainconnection.com... (which is a great place to go if you want to learn more about the brain and other neuroscience topics).

Tuesday, September 22, 2009

Does torture yield reliable information?

Despite the claims of the Bush administration and the recent report by the Department of Justice, there is no convincing evidence to suggest that torture is any more effective than other interrogation techniques.  Additionally, torture (which involves long periods of stress and sleep deprivation) carries the risk of producing confessions that contain false information, or of the suspect actually beginning to believe or repeat whatever he or she is being accused of (i.e. confirming whatever the interrogators want).  Part of the reason for this is obvious: under enough duress, a person might just say anything to make the torture stop.  But there is also a wealth of evidence from psychology and neuroscience labs around the world demonstrating the adverse effects of stress and sleep deprivation on memory and decision making.  And a recent review article in Trends in Cognitive Sciences (here, or 1 of these 2 for popular press stories about the article) goes over much of this evidence to make the case that torture, as a means to obtain truthful, actionable intelligence, is unreliable.
This is of particular interest to me (though I am upset at myself for not making the connection sooner), as my area of study pertains to the effects of steroid hormones on neural plasticity (where plasticity is a catch-all term for the processes that underlie learning and memory).  We have known for a while now that there are only 2 areas in the brain that seem to be any good at adding new neurons in adulthood, and one of these is the hippocampus, which, as I've mentioned before, is critical for memory formation, retention, and retrieval.  As it turns out, the ability of this part of the brain to generate new cells and incorporate them into circuits is critical to the process of remembering.  Similarly, the ability of these new cells to survive appears to be another critical factor in memory retention and possibly recall.  Of the many types of steroid hormones out there, the ones we typically associate with reproduction (e.g. testosterone, progesterone, and estradiol) seem to have the most positive effects on cell proliferation and survival, and can improve learning and memory.  On the other hand, the steroids that are more commonly associated with, and produced in response to, stress (called corticosteroids because they are produced by the cortex of the adrenal gland) have been shown to be detrimental to cell proliferation and cell survival in the hippocampus, and thus also detrimental to our ability to remember.  This matters for the debate over whether or not torture is effective because, while there is merely a lack of evidence to support claims that torture is more effective than other methods of interrogation (and the absence of evidence is not evidence of absence), there is a wealth of evidence showing that torture can actually impair the memory of the individual being tortured, and may thus result in distorted or false information.

I will insert a link to the article when it becomes available on the Cell Press website, but for now, if you have library access, here's the citation:
“Torturing the Brain: On the folk psychology and folk neurobiology motivating ‘enhanced and coercive interrogation techniques’.” By Shane O’Mara. Trends in Cognitive Sciences, Vol. 13, Issue 10, September 21, 2009.

Monday, September 21, 2009

Noetic Sciences

So, I, like millions of others, have bought a copy of the new Dan Brown book, The Lost Symbol.  I'm not very far into it yet, but an early mention of a character who is involved in the "Noetic Sciences", which are supposedly attempting to uncover the untapped potential of the human mind, piqued my curiosity.  Now, like I said, I haven't gotten too far in the book, so I don't know how Dan Brown will treat the subject, but here's what little I know.  There is such a thing as "Noetic Sciences", but it is NOT a widely accepted field of scientific inquiry.  In fact, it seems that the only group pursuing "Noetic science", or at least calling it that, is the Institute of Noetic Sciences, which was founded by astronaut Edgar Mitchell.  From their website, it's hard to pin down exactly what the full spectrum of the institute's research consists of, though the mission statement appears to be geared toward discovering advanced human mental capacities and, through understanding these abilities (if they exist), manipulating or enhancing them for the benefit of society as a whole.  More specific mentions are given to questions of precognition, intuition, heightened states of awareness, healing at a distance, mind over matter, and de-stressing techniques for mothers.

These are interesting and ambitious ideas for scientific inquiry, and there are lots of researchers outside of the institute who are doing peer-reviewed research into questions such as "What comprises consciousness?", "Do intercessory prayer or the willed positive intentions of others have an impact on those who are ill or injured?", "Can we stimulate the brain to recreate hallucinations or out-of-body experiences?", etc.  So many of the questions that the Institute of Noetic Sciences (which I keep wanting to abbreviate as the INS) seems to be asking fall under the purview of either psychology and cognitive neuroscience or the less respected parapsychology (with the questions of "precognition" and "mind over matter" falling under the parapsychology heading).  The interesting thing is that it's hard to get a read on the Institute as to how much of what they do is real science and how much of it is pseudoscience (i.e. quackery).  For example, the website lists some research projects aimed at increasing creativity and compassion in children and using yoga and other breathing techniques to help mothers deal with stress.  These are topics that can be studied scientifically, and they are studied by many psychologists, sociologists, and educators.  Topics like precognition and mind over matter can also be studied scientifically, but unfortunately, they are often not pursued in a rigorous scientific fashion.  The difference is all in the methods.

For example, if I wanted to test someone to see if he or she had psychic abilities, I could have a computer generate random numbers (say, from 1 to 1000) which would be displayed on a screen one at a time, with only a dark screen between numbers.  If I ask the subject BEFORE the number comes up, "What number will come up next?", and we do this a bunch of times, and he or she gets the numbers right more often than would be expected by chance, we would have to be open to the idea that this person can predict numbers generated by a computer.  (Of course, this still isn't conclusive evidence for any psychic ability to "see" into the future, as the person may just be really good at "seeing" the pattern the computer is using to generate these "random" numbers... because computers run on programs, a "random" number generator must ultimately follow an algorithm, which means that there is a pattern, even if it is incredibly complex and seemingly random.)  Now, if I asked the person "What number were you thinking of?" but I ask it AFTER the number has been shown on the screen, this is NOT science, and not evidence for anything, because there is no way of knowing what the person was really thinking.  Also, if, instead of using a computer to generate the numbers, I just thought of a number myself and then asked the subject to guess what the number is, several things could happen that would make any results doubtful, the most obvious being that I (the experimenter) could lie in the interest of being "proven" right about my belief in psychic abilities, or that any pattern of "random" number generation I use would be even more recognizable than that of a computer.  In the past, all scientific attempts to answer these parapsychological questions have failed to demonstrate the presence of any extrasensory perception or precognition, and so most groups who persist in asking these questions, particularly those who claim to be getting positive results, tend to be less than reputable.
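If you're wondering what "more often than would be expected by chance" looks like in practice for the number-guessing test described above, here's a minimal sketch in Python.  The trial count and hit count are made up for illustration, and I'm using scipy's binomial test simply as one reasonable way to compare the subject's hit rate against the 1-in-1000 chance rate.

```python
# Sketch of the "guess the next computer-generated number" test described above.
# With numbers from 1 to 1000, a guess is right by pure luck 1 time in 1000.
# The trial count and hit count below are hypothetical, for illustration only.
from scipy.stats import binomtest

n_trials = 500          # how many guesses the subject makes (made up)
chance_rate = 1 / 1000  # probability of a correct guess by chance alone
n_hits = 3              # how many guesses were exactly right (made up)

result = binomtest(n_hits, n_trials, chance_rate, alternative="greater")
print(f"hits expected by chance: {n_trials * chance_rate:.2f}")
print(f"hits observed: {n_hits}, one-sided p-value: {result.pvalue:.4f}")

# Even a tiny p-value would only mean "better than chance", not proof of any
# psychic ability -- the subject might, for example, be picking up on a
# pattern in the pseudo-random number generator, as noted above.
```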
As for the Institute of Noetic Sciences, I remain skeptical, but I am open to the idea of not throwing the baby out with the bathwater.  Still, their website (and downloadable "research report") doesn't link to or reference any peer-reviewed scientific articles, only popular press books that you can purchase or the institute's own periodical (which is NOT peer reviewed).  As for the downloadable "complete research portfolio", it is remarkably slim on data, in that there is none, just some pretty pictures, vague descriptions, and seemingly scientific figures that don't show any experimental evidence (just a new age flow chart on how to "transform your consciousness")... and, the capper, of course, is the last couple of pages, which are dedicated to how you can donate to the cause.  Like I said, I don't have enough information yet to form a complete opinion; maybe some real science does go on at the Institute of Noetic Sciences, but I would bet there's more than a fair share of pseudoscience going on there as well.  For now, I am leaning toward the opinion that the institute, and the field of "Noetic Sciences", is not much more than new age quackery trying to gain some sliver of credence by sprinkling a little bit of actual science in with the rest of the "consciousness transforming" (new age astrology and eastern medicine) snake oil they hope to sell.

Sunday, September 20, 2009

Kanye West and the Waterboy

So, I know that the Adam Sandler movie, The Waterboy is more than 10 years old now, but recently I was listening to the new Kid Cudi song "Make Her Say" featuring Kanye West (and Lady Gaga). And it had me reminiscing because in the song, Kanye says: "Gettin' brain in the library 'cause I love knowledge/When you use your medulla oblongata."
Which reminded me of The Waterboy, where the professor who looks like Colonel Sanders corrects Bobby Boucher (Adam Sandler) as to why "alligators is ornery":
Bobby: "Mama says alligators is ornery because they got all them teeth but not toothbrush"
Prof: "Yo mama said... wow! Anybody else? Yes, you, sir."
Other student: "Alligators are aggressive because of an enlarged medulla oblongata. It's the sector of the brain that controls aggressive behavior."
Prof: "That is correct! The medulla oblongata."

In both of these instances, the way in which the medulla oblongata is depicted is just plain wrong. Suggesting that the medulla oblongata is a part of the brain that is involved in aggression or in higher cognitive functions such as learning and memory is, as far as we know, incorrect.

The truth is that the medulla oblongata has nothing to do with aggression or with learning and memory. While the brain is very complex and it is difficult to pin down ANY function or behavior to JUST ONE area, there are brain areas that do appear to be very central to or primarily involved in controlling or initiating certain behaviors. In the case of aggression AND for learning and memory, we would look to the limbic system: an area in the center of the brain that is involved in several important processes, including emotions and learning and memory. Within the limbic system, the amygdala is primarily involved in the initiation of aggressive behaviors (it's also heavily involved in fear and the responses to frightening stimuli). Also in the limbic system is the hippocampus, which is primarily involved in learning and memory. (Though I should note that the amygdala has also been shown to play a role in certain types of learning and memory, reinforcing my point that anytime you attribute a behavior to one part of the brain and one part only, you're asking for trouble.)

Now, the medulla oblongata isn't even in the brain proper; that is, it's not in "the brain" as most people think of it (what we call the forebrain, which is to say the large, very wrinkly, squishy gray mass of jelly that sits in our skull). No, the medulla is not in the forebrain; it is part of the brainstem, down in the hindbrain, which is somewhere between being a part of the brain and being a part of the spinal cord. The brainstem, while not involved in higher thinking or cognitive reasoning, does perform some incredibly important tasks: like regulating your heartbeat and your breathing. These, in fact, are two of the tasks that are seen to by the medulla oblongata... And because these things are repetitive and require so little conscious thought, the medulla oblongata has an easy time taking care of these as well as numerous other functions, like salivating, sweating, digesting, and even sexual arousal (see? Like I said, stuff that you don't have to think too much about). In fact, most of the things that you do all the time without thinking about them (these are called autonomic functions) are regulated to some extent by the medulla oblongata. (For more info on the medulla oblongata, you can check out Wikipedia; the entry is actually pretty good: http://en.wikipedia.org/wiki/Medulla_oblongata) (The figure above comes from the Society for Neuroscience: www.sfn.org)

Saturday, September 19, 2009

The Neuron Doctrine... is a myth?

Afraid so.  I talked about this a little in my last post, and included a link to a really great (and short) article by science writer Carl Zimmer.  Here's the link again: http://discovermagazine.com/2009/sep/19-dark-matter-of-the-human-brain   You should go there and read the article.
I will only say this: based on the work of the founding fathers of neuroscience (Camillo Golgi, Franz Nissl, Santiago Ramón y Cajal, and others), the idea that the brain and the rest of the nervous system are largely (if not solely) controlled by specialized cells called neurons became the overarching dogma of neuroscience.  Over a hundred years later, most neuroscientists still operate on the premise that the actions of the brain that underlie physiology and behavior are primarily the result of electrochemical signaling between neurons (or between specialized neuron-like sensory cells and neurons).  In the past decade or so, more and more evidence has arisen suggesting that other cell types in the brain (those we call "glia") are also capable of forming networks, signaling from cell to cell (over long distances), and even heavily influencing the actions of the neurons themselves (acting like master switch operators, controlling the electrical impulses being sent out across the neuronal networks).  One thing is clear: the neuron doctrine is likely going to be dramatically revised in the years to come as researchers find out more and more about the different types of glial cells and the functions they perform.
(If you didn't read the previous post, or the article, the other important thing to mention is that there are many, many, many more glial cells in the brain than there are neurons.)

Friday, September 18, 2009

The Other 90 Percent....

So, as the subtitle suggests, the reason for this blog is my desire to write about some of the neuroscience myths and misinformation that's out there and, hopefully, provide better information in an easy to understand manner. Of course, if I stick to just neuroscience, I will probably run out of material after 5 or 6 posts, so I will likely blog about other misrepresentations in the field of biology and science in general, and, if people actually start reading this thing, I will let their comments and feedback propel us on to other topics of interest.

Anyway, getting back to the subtitle: "The other 90 percent".... This has to be my favorite myth about the brain, and the main inspiration for the blog. You've probably heard it somewhere, maybe even in school (friends of mine say they were taught it as recently as 2 years ago in MEDICAL SCHOOL!) The myth is this: Humans only use ten percent of their brains.

Now, I realize that this seems very believable on the surface, as we all know someone, or several people, who are definitely not firing on all cylinders... And of course there are plenty of moments in our own lives where we forget what we were saying or thinking right in the middle of a sentence... or where we constantly forget little things (like what you had for lunch 12 days ago, or those ever-elusive car keys...)
Conversely, we all know someone, or several people, who seem to be soooo smart that they obviously must have some huge advantage over the rest of us mere mortals... perhaps they are using 15 or even 20 percent of their brains! And what about the hope that if we could somehow tap into that other 90 percent, perhaps we could become psychic, or we'd all be running around with abilities like Sylar or Peter in an episode of "Heroes"?
But the truth is, everyone uses 100 percent of their brain; maybe not 100 percent of the time, but it's all there, ready and raring to go, and through the course of a day, trust me, you've used it all.

So, how did this idea get started? Well, no one really knows. I think the neuroscience for kids website has a pretty good set of guesses:

The 10% statement may have been started with a misquote of Albert Einstein or the misinterpretation of the work of Pierre Flourens in the 1800s. It may have been William James who wrote in 1908: "We are making use of only a small part of our possible mental and physical resources" (from The Energies of Men, p. 12). Perhaps it was the work of Karl Lashley in the 1920s and 1930s that started it. Lashley removed large areas of the cerebral cortex in rats and found that these animals could still relearn specific tasks. We now know that destruction of even small areas of the human brain can (and often does) have devastating effects on behavior. That is one reason why neurosurgeons must carefully map the brain before removing brain tissue during operations for epilepsy or brain tumors: they want to make sure that essential areas of the brain are not damaged. (you can read more at
http://faculty.washington.edu/chudler/tenper.html)

Another idea that I think may have contributed to the perpetuation, if not the origin, of this myth is the fact that the vast majority of the cells in the brain (about 90%) are glia (or other non-neuronal cells, like the endothelial cells that line the brain's blood vessels), with neurons making up the remaining 10%. Neurons are the cells that are most heavily involved in communicating information to, from, and throughout the nervous system, while glia were originally thought to serve no other function than just holding the neurons in place ("glia" is the Greek word for "glue"). For a very long time, neuroscientists believed that glia actually did very little, and perhaps this has helped to perpetuate the myth of the 10 percent, but more and more, we have discovered that glia are actually doing quite a bit, and are, in fact, very necessary for proper brain function. For example, a type of glial cell known as an astrocyte helps to modulate neuronal signaling by regulating how much neurotransmitter stays in a synapse. Glia can also regulate the electrical properties of neurons, and in the case of another cell type, the oligodendrocyte, can actually speed up the transmission of nerve impulses by acting like insulation on a wire. (For more info on glia, you can read this great article by Carl Zimmer: http://discovermagazine.com/2009/sep/19-dark-matter-of-the-human-brain)
If this is indeed the origin (or a perpetuating factor) of this myth, then we have yet more evidence to overturn this long-held belief, as glial neurobiology is one of the hottest and fastest growing fields in the neurosciences, with more and more discoveries pertaining to the functions of glial cells being made every day. And as an added side note (one that I will revisit when we look at the "bigger is better" myth, at least as it pertains to the brain): the only substantial difference that pathologists could find in Einstein's brain when compared to more "average" brains was that Einstein seemed to have more glia (not that he used a higher percentage of his brain, nor that he had a bigger brain, just more glia).

In addition to the discovery that glia have important functions, there are several other lines of evidence that suggest that we do use all of our brains. For example, surgical lesions have shown that, unlike in Karl Lashley's rats, it becomes much more obvious in humans when parts of the brain (even really small parts) are damaged or removed. This was probably also true for the rats; it's just that Lashley wasn't looking at ALL of the different behaviors rats are capable of. In humans, these problems become easier to detect because they either affect vocal communication directly, or whatever symptoms the patient has can be expressed through speech, or by a loved one who has spent a lot of time observing the patient's behavior. The classic example of this is a patient known simply as HM (http://en.wikipedia.org/wiki/HM_%28patient%29) who, in an attempt to lessen the symptoms of his epilepsy, had a part of his brain removed, and as a result lost the ability to form new memories. The part that was removed is known as the hippocampus (so named because it looks like a little seahorse: Greek hippos = horse, kampos = sea monster; Hippocampus is also the genus name for many species of seahorse). HM was studied extensively, and his symptoms led to our current understanding of the roles played by the hippocampus in learning and memory.

Another important line of evidence for our using all of our brains came from the development of PET (positron emission tomography) scanning and MRI (magnetic resonance imaging) techniques. By allowing us to actually see the brains of live, conscious individuals, these techniques not only gave us more evidence for which parts of the brain are most heavily involved in specific behaviors and functions, they also showed us that the brain works in a highly concerted manner for many thoughts and behaviors. That is, several different parts of the brain are involved when a specific task is performed, and even a brain "at rest" is still operating at a fairly high level throughout (this can also be seen in the fact that your brain uses more energy from the food you eat than any other organ in your body).

If you've ever seen an fMRI (functional MRI), you might ask, "But what about MRI images that only show part of the brain being used (as indicated by its lighting up in bright yellows, reds, or oranges)? Isn't that showing us directly that we're only using a small part of our brain?"

Actually, no. What you are seeing in fMRIs that only show one or a few areas being lit up is a subtracted image. That is, you are actually seeing the result of two images, where everything that is identical in the two images has been removed, leaving only what is different.

An fMRI (functional magnetic resonance image) measures blood flow in the brain (because blood contains hemoglobin, an iron-containing protein that carries oxygen, we can visualize it deep in the brain using a giant magnet). And when you are shown a subtracted fMRI image, you are seeing the result of 2 images of the circulating blood: one taken when you are not doing anything, and another taken while you are being asked to think about something specific, look at certain images, or answer certain questions. Both images will show the entire brain lit up with blood flow, but the second might show that one area, or a couple of areas, had to work extra hard to perform the task (answer questions, remember something from childhood, etc.). When you subtract everything in the first image from the second image, only those one or two areas will show up on the subtracted image, thus showing you where blood flow was different. Just like your muscles, the cells in your brain need more blood (and the nutrients and oxygen it brings) when they are working hard. So, if you see an increase in blood flow to a certain area when a certain task is performed, it is likely that that area is working harder to accomplish that task, and if only one area lights up, and lights up consistently in many individuals who are tested repeatedly, then it is likely that that area is heavily involved in eliciting that behavior or task.
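If it helps to see that subtraction as actual arithmetic, here's a toy sketch using NumPy with two fake "scans". It only illustrates the subtraction step; real fMRI analysis pipelines average many repetitions, correct for motion, and do proper statistics, and every number below is invented.

```python
# Toy illustration of a "subtracted" fMRI image: two fake 2-D "scans" of the
# same brain, identical except for one small patch that works harder during
# the task. Real analyses involve many repeats, motion correction, and proper
# statistics; this only shows the subtraction idea. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

# "Rest" scan: the whole brain shows blood flow (plus some measurement noise)
rest = 100 + rng.normal(0, 2, size=(64, 64))

# "Task" scan: same overall blood flow, but one patch is working extra hard
task = 100 + rng.normal(0, 2, size=(64, 64))
task[20:28, 30:38] += 15   # extra blood flow in the "active" region

difference = task - rest   # this is the subtracted image you see in figures

# Away from the active patch the difference hovers near zero; the patch stands out
active = np.abs(difference) > 8
center = np.argwhere(active).mean(axis=0).round().astype(int)
print("fraction of pixels flagged as 'active':", round(float(active.mean()), 4))
print("approximate center of the flagged region (row, col):",
      tuple(int(c) for c in center))
```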

So, hopefully I've convinced you that you use more than 10 percent of your brain. If not, try harder to use all of it, and check out some other sites on the web that corroborate this evidence. I'm pretty sure that if you just type "ten percent brain" into Google, you will find several articles that provide a deeper history of the perpetuation of this myth, as well as, perhaps, some more detailed examples of the evidence suggesting that we use most of our brain all of the time, or all of our brain most of the time. I know... I too wish I could improve my brain power 10-fold, but I guess it wasn't meant to be.

The image at right (from: http://www.sciencemuseum.org.uk/on-line/brain/190.asp) shows an fMRI. Increased blood flow is shown as orange and red, while decreases in blood flow are shown in blue.

Tuesday, September 15, 2009

Cortical Hem...ing and Hawing

So what is cortical hemming and hawing? Well, hemming and hawing is probably much better known than the cortical hem. The former can be defined as speaking haltingly, hesitating or stammering while going on and on about something. It may also refer to "beating around the bush" or not really getting to the point. Thus, hemming and hawing seems an appropriate descriptor for my writing style, as well as for blogging in general.

As for the cortical part, the plan is to blog mostly about neuroscience and other brain and biology related topics (where cortical refers to the cerebral cortex, the part of the brain that is responsible for most of the things we think of as "higher cognitive functions", like consciousness, memory, learning, and language). But there is also a pun in there (sadly, intended): during embryonic development, as the brain is being formed, an area made up of neurons and epithelia establishes a boundary between the hippocampus (the brain area most heavily linked to learning and memory) and the choroid plexus of the lateral ventricles (the part of the brain that produces cerebrospinal fluid), and this area is called the cortical hem. Ultimately, the hem does not persist in this form, but likely gives rise to reelin-expressing cells found in the outermost layer of the cerebral cortex (reelin being a protein that helps guide migrating neural cells). I won't go into any more detail on the hem, as I was really just using it for the pun, and it's not really the focus of the blog. My hope is to keep things a bit simpler in the future.
The figure is from Grove et al., 1998, Development 125(12):2315-25.