Saturday, October 30, 2010

Real Life "Lie To Me"

You may be familiar with the new-ish show on Fox called "Lie To Me", where the incomparable Tim Roth plays a psychologist who can detect lies (and numerous other emotions) through revealing facial expressions he calls "micro-expressions" or, my favorite, "deception leakage".  The show is based on the work of psychologist Paul Ekman, and now, in the real world, so are some airport security screening techniques (via the MindHacks blog).
Of course, many experts question how reliable these techniques are, particularly in light of the fact that Ekman's research has proven difficult to replicate, and that he has shied away from publishing in peer-reviewed journals in recent decades.  (I'm always so disappointed when real life differs from Hollywood.)  Anyway, my own take goes something like this:  The principle is obviously intriguing.  After all, anyone who has played poker quickly learns that their facial expressions can betray what kinds of cards they are holding.  However, if you play cards a lot, you also know that it can take some time to learn what each individual person's "tell" may be.  And experienced gamblers can manipulate the situation by intentionally displaying their facial tic or other betraying behavior when they want you to think they are bluffing.  Like this scene in Casino Royale:

What this tells us is that any system of "deception detection" would have to rely either on an intimate knowledge of the person being interrogated (to know what their specific "tells" are) OR on a set of facial expressions and/or body movements that are common to EVERYONE when lying.  Different people have different emotional responses to telling a lie (or even to the type of lie they are telling), and different people often have different "tells" regardless of the extent of guilt or shame they feel.  So it seems to me that any system based on universal cues would yield some (likely unacceptable) level of false positives (i.e. thinking that someone is lying when they aren't) and false negatives (i.e. thinking that someone is not lying even though they are).  Anyway, if you read the article over at Nature, you will see that the Department of Homeland Security is promising a "rigorous review" of the scientific merit of the programs they have put in place, so maybe we will get some data to support Ekman's ideas, or maybe we will just get more that debunks them.
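To see why false positives are such a problem for mass screening, a little base-rate arithmetic helps.  The sketch below uses entirely made-up numbers (the rates and the one-in-a-thousand liar frequency are my assumptions for illustration, not figures from any study):

```python
# Toy base-rate arithmetic: even a fairly accurate screening system flags
# mostly honest people when actual liars are rare.  All of the numbers
# below are made up purely for illustration.
def screening_outcomes(n_screened, liar_rate, hit_rate, false_alarm_rate):
    """Return (liars caught, honest people wrongly flagged)."""
    liars = n_screened * liar_rate
    honest = n_screened - liars
    caught = liars * hit_rate
    wrongly_flagged = honest * false_alarm_rate
    return caught, wrongly_flagged

# One liar per thousand travelers, a detector that catches 90% of liars
# but also flags 5% of honest people, a million travelers screened:
caught, flagged = screening_outcomes(1_000_000, 0.001, 0.90, 0.05)
print(caught, flagged)  # 900 liars caught vs 49950 honest travelers flagged
```

Under those (hypothetical) numbers, the overwhelming majority of people pulled aside would be innocent, which is exactly the kind of false-positive load that makes these programs hard to justify.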

Tuesday, October 26, 2010

Book Review: Fifty Great Myths of Popular Psychology

So you may or may not have noticed that this book has been listed under the "Currently Reading" heading for, well, forever.  To be fair, that had nothing to do with the book itself, and everything to do with me having to write and defend my thesis, graduate, move to a new city, and start a new post-doc.  However, I am happy to announce that I have finished the book, and I must say, I cannot recommend it highly enough.  When I started this blog, my intent was to post about common myths and misperceptions in neuroscience.  After having numerous conversations with people who would say things like "we only use 10 percent of our brains" or "I'm more of a right-brained kind of person", I felt that someone needed to write up the research that debunks these ideas... and luckily, someone has, and done it well.  The book has a bit of an academic feel (design-wise this seems inevitable because the publisher, Wiley, is an academic publisher), but aside from the fact that all of the research is meticulously referenced, it reads like popular non-fiction.  There are references to modern films, music, and even to episodes of The Simpsons.  The writing style is informal, the explanations are simply written, and there is even a bit of humor running throughout.  Of course, for me, the information was key, and while I wouldn't have stopped at just 50 myths, the authors did a good job of picking some really popular myths and debunking them clearly and eloquently, while also listing many more popular misconceptions at the end of each chapter.  As I said above, I highly recommend it.  And if you want an example of content from the book, you can check out an earlier post from when I first started reading it.

Sunday, October 24, 2010

Halloween Post: The "Bloody Mary" illusion

When I was a kid, a popular ghost story that we would all tell around this time of year was the story of "Bloody Mary".  The story is actually very widespread here in the U.S., to the point that it has a Wikipedia page, a post on a mythbusting/fact-checking site, a plethora of YouTube videos devoted to the subject, and numerous mentions in movies and television shows.  If, somehow, you have never heard this story, it goes like this:
In Colonial times, there was a beautiful woman named Mary Worth who found herself in the unfortunate position of being an expecting, but unwed, mother.  The fact that Mary didn't seem to be bothered by her sin, and that she still seemed to capture the wandering glances of many of the men in the town, infuriated her Puritan neighbors.  When Mary had her baby, the townspeople stole it away from her.  Claiming that it was the spawn of Satan, they buried it alive as it flailed and screamed. The townsfolk then accused Mary of being in league with the devil and decided she must be burned as a witch.  Mary was dragged to the center of town and tied to a stake as the townsfolk beat and slashed her face with the sticks that they would use to burn her.  One woman held a mirror up to Mary's face, taunting her to look and see how she was no longer beautiful, she was dirty, and broken, and bloody. "Bloody Mary" she called her, and as the pyre was lit and the flames began to climb, the crowd chanted the name over and over again.  Mary screamed as the flames licked her legs and her thighs, and as the acrid smell of burning flesh filled the air, the crowd became hushed.  In the lull, Mary cursed the townspeople for what they had done and claimed she would visit vengeance upon them and all of their future generations, they would know the anguish they had put her through.  As the flames climbed higher, the form that had been Mary Worth began to disappear, but the last words of the curse lingered in the ears of the townspeople, seemingly echoing off of the surrounding trees and buildings. Then, suddenly and without explanation, the mirror that had been held up to Mary's face shattered, slicing the hand of the woman who had initially taunted her.  About a week later, the woman fell ill and died.  Soon after, many of the townspeople who had taunted Mary began to meet with ill fated deaths, all in rooms with broken mirrors.  It is said, that Mary still seeks vengeance to this day.  
All you have to do to conjure her is to light a dark room with a candle, stand in front of a mirror, and say the name "Bloody Mary" five times in succession. Her face will appear in the mirror in front of you, and if you are descended from one of the townspeople that taunted her, or if she believes that you are taunting her, she will reach through the mirror and slash your face as hers was, or break the mirror cutting you all over, or, she may even pull you into the mirror with her so that you will never be seen again...
At this point, other people would usually chime in about how they heard about a girl from the next town over who had conjured Bloody Mary and was cut all over by shattered mirror glass, or about a boy that tried it and disappeared, never to be found, etc. etc.
Of course, at some point, we've all tried it, and, of course, nothing bad happens.  So how does a story like this get started?  Well, a report published earlier this year, describing an interesting illusion (pdf), may shed some light on the subject.  The author of the paper, Giovanni Caputo, describes what he calls the "Strange Face in the Mirror Illusion", and it may be that this illusion spawned stories like this one that revolve around ghosts in mirrors.  To characterize the illusion, Caputo had 50 volunteers, who had no idea what they were supposed to see, stare at themselves in a mirror in a dimly lit room.  At the end of a ten-minute period, the volunteers were asked to write down what they saw.  Two thirds of the participants reported seeing huge deformations of their own face, and nearly half reported seeing "fantastical" or "monstrous" beings.  Smaller proportions reported seeing the faces of parents or of ancestors, and some saw the faces of strangers, including old women and children.  In all 50 cases, the participants reported some form of dissociative identity effect, which is to say, they felt like what they saw in the mirror was someone (or something) other than themselves.  Many felt like they were being watched by the "other" in the mirror, and some reported getting scared or anxious because they felt that the face in the mirror looked angry.  Caputo offers some speculations as to what might be causing these effects, but as yet, there is no complete explanation for all of the phenomena that were reported.
Likely, there are several things at play.  First is the Troxler effect, an illusion in which fixating on an object causes objects in the periphery to seemingly disappear (nicely illustrated by the following figure: stare at the + in the middle for about 20-30 seconds, and the purple dots should start to disappear, though you may still see the moving "green" dot, which is the negative afterimage your brain perceives when a purple dot disappears...)

While Caputo discounts the Troxler effect because it should predict the disappearance of one's face rather than the appearance of a new face, it may be that an incomplete Troxler effect (due to the lack of a solid fixation point) could lead to skull-like apparitions (where the eyes and nose disappear) or other changes that could result in an unrecognizable face (when I tried this experiment myself, the Troxler effect was the first thing I noticed, and the strongest effect throughout, sometimes making it seem like my whole face had disappeared).  It may also be that the disappearance of one's own face causes the brain to fill in the void with imagined faces, since it is expecting to see a face there.
Instead of, or perhaps in addition to, the Troxler effect, Caputo points to the "multiple faces phenomenon" (pdf), an illusion that plays upon both the weaknesses of our peripheral vision and the higher-order neurons that integrate facial features to make the faces we see recognizable.  When black and white photographs of familiar faces are viewed so that the face is centered on a blind spot, people have reported seeing different features and even different faces (i.e. white eyes, facial hair that isn't present, upside-down faces, the viewer's own face, faces other than the one shown, etc.).  Many of these characteristics were similar to what was reported in the "strange face in the mirror" illusion, and many of the same conditions appear to be necessary for both illusions to work.  For example, the "multiple faces phenomenon" works much better with black and white photographs than with color photos, while the "strange face in the mirror" illusion relies on low-level lighting that makes it difficult for subjects to perceive color information.  Additionally, the multiple faces phenomenon seemed to work better when the photos were of faces familiar to the viewer, and the mirror illusion relies upon the most familiar face of all: the viewer's own.
Regardless of the cause, it is clear that these illusions are pretty common (84% of respondents for the multiple faces phenomenon, and 66% for the face in the mirror), and they can be pretty spooky.  So if you want to give yourself a scare this Halloween, you can try it out and see for yourself.  All you need is a 25-watt incandescent light placed behind you so that you can't see the light directly or its reflection, and five to ten minutes of staring at yourself in the mirror (from about 1.5 - 2 feet away).  If you get the conditions right, it might even be a lot of fun to convince your friends or family that your bedroom mirror is haunted: all you have to do is tell them to stare into the mirror for a few minutes and wait for the ghosts to appear.  If you do try it out, feel free to leave descriptions of what you saw in the comments, and have a safe and Happy Halloween!
Caputo, G. (2010). Strange-face-in-the-mirror illusion. Perception, 39(7), 1007-1008. DOI: 10.1068/p6466

de Bustamante Simas, M., & Irwin, R. (2000). Last but not least. Perception, 29(11), 1393-1396. DOI: 10.1068/p2911no

Wednesday, October 20, 2010

The Ig-Nobel Prize for Economics: Should companies promote people at random?

This year, the Nobel Prize for economics was shared by Peter A. Diamond of MIT, Dale T. Mortensen of Northwestern University, and Christopher A. Pissarides of the London School of Economics.  These three economists were honored for their work relating government policies to employment and economic growth during recessions.  Among their many contributions in these areas are the finding that greater unemployment benefits can lead to longer periods of unemployment, and the finding that obstacles to matching (in this case, employers finding potential employees) are a critical factor in determining levels of unemployment.  In fact, the research showed that problems in matching are so important that even with extensive government spending and works programs, and even in economic boom times, there will always be some level of unemployment due to the difficulties of matching employees with employers.  Meanwhile, the Ig-Nobel prize for economics this year went to the big Wall Street banks for creating hard-to-value derivatives, credit default swaps, and other financial instruments that led to the overinflated bubble that ultimately burst.  Since that really isn't research-related, I am going to let the Ig-Nobel prize for management serve as a proxy for the prize in economics (there is no Nobel Prize for management, and business and management are related to economics, so...).  This year's Ig-Nobel prize for management went to Alessandro Pluchino, Andrea Rapisarda, and Cesare Garofalo of the University of Catania, Italy, for "demonstrating mathematically that organizations would become more efficient if they promoted people at random."  The premise is an interesting one, and perhaps we've all experienced this to some degree, especially if you've ever worked for a big corporation.
Companies promote managers largely based on performance (ignoring any nepotism, backstabbing, or other political gamesmanship), and so the best assembly technician, data entry specialist, scientist, factory floor sweeper, etc., gets promoted to manager.  The problem is, being good at floor sweeping or science (or most any other task) has absolutely nothing to do with being a good manager, and so, despite any individual's great performance at their first job, they may be the worst manager the world has ever seen.  Believe it or not, this observation has been codified by Canadian psychologist Laurence J. Peter, and is thus named the Peter Principle, which states: "Every new member in a hierarchical organization climbs the hierarchy until he/she reaches his/her level of maximum incompetence."  If this is true, or happens somewhat regularly, it raises the question of whether companies should promote the best person from any given level, or perhaps instead simply promote people at random.  In the paper by Pluchino et al., the authors tested this idea, asking whether the "common sense" method of promoting the best people (i.e. promoting those who excel most at their current level) might make a company less efficient than promoting people at random.  Of course, they didn't try this in a real company, but ran computer simulations, allowing them to test the idea over and over and average out the results.  Essentially, they designed "companies" with a pyramidal structure: lots of low-level employees, slightly fewer middle managers, fewer still upper-level managers, and ultimately one person in charge (see figure above).
They then had the software randomly generate "individuals" with "competence" values ranging from 1 to 10 and ages ranging from 18 to 60.  If an individual was incompetent (a value less than or equal to 4) or of retiring age (60), they were removed, a spot opened at that level, and an individual from the next level down was promoted to fill the vacancy.  Three strategies were applied: 1. the "best" approach, where the most competent person at a given level was promoted; 2. the "worst" approach, where the least competent person was promoted; and 3. the "random" approach, where the promoted individual was chosen at random.  Each of these strategies was applied under the two hypotheses being tested: 1. the common sense hypothesis, where an individual's competence transfers from one level to the next (i.e. it is assumed that good floor sweepers generally make good managers, though the authors did build in a possible swing of plus or minus 1 point, allowing that some floor sweepers could be slightly worse, or even better, managers than they were sweepers); and 2. the Peter principle, where a person's competence did not transfer with their promotion, but was instead randomly re-assigned at the new level.  Finally, the measure of success for each of these methods was the company's "global efficiency", calculated by adding up the competence values at each level and weighting them more heavily as the level approached the top of the company (basically assuming that better or worse performance at the top has more of an effect on the company's overall performance than competence or incompetence at lower levels).  What the computer simulations showed is that when the common sense hypothesis applied (that is, when competence transferred from one level to the next) and you promoted the best people at each level, not surprisingly, you got very good global efficiency for the company.
When the worst person was promoted, the company had pretty lousy efficiency.  What was surprising was that if competence at one level had no bearing on competence at the next (the Peter principle), then promoting the "best" person at each level actually resulted in the worst global efficiency, and promoting the "worst" person in each instance resulted in the best global efficiency.  Finally, under both hypotheses (common sense and Peter principle), promoting people at random resulted in small increases in global efficiency.  From this, the authors conclude that, if you don't know whether common sense or the Peter principle is at work, your best bet is to promote individuals at random: even though the effect is small, you always get an increase in global efficiency, rather than risking the loss in efficiency that would result from using the "best" strategy if the Peter principle really is at work.  And, of course, since we don't know whether the Peter principle is at work, you wouldn't want to risk promoting the worst candidates only to find that the common sense hypothesis was right.  Of course, there are definitely some considerations to weigh before instituting a random promotion policy.  First, I question the assumption that a highly competent person at one level (a 10) could be so inept at the next level as to be randomly assigned a 1 and then be fired (even if the probability of this is small, since the re-assignment is not totally random, but falls along a normal distribution).  To me, if you excel at one job, you likely have skills that apply at every level (being punctual, organized, responsible, hard working, smart, easily trainable, etc.).  Therefore, I would like to see the simulations re-run with promotions under the Peter principle assigning random values between 4 and 10, rather than 1 and 10 (or at least with the distribution skewed more to the right).
Second, even if you tweaked the game this way and random promotion still came out the better strategy, one would have to consider the repercussions of a policy that might kill workers' incentive to excel at their jobs (since they would know it has no impact on whether they get promoted).  Ultimately, I think this would lead to the majority of employees operating at a level of competence just high enough not to get fired.  Still, the article is interesting, and it suggests that the Peter principle is something companies and other hierarchical institutions need to be wary of.  Perhaps the answer is to find a better way to assess the skills that will be needed at each new level, and to base promotions on a combination of excellence at the current level and potential for excellence at the next.
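For the curious, the promotion game described above can be sketched in a few lines of code.  This is a loose re-implementation, not the paper's exact model: the pyramid sizes and level weights are my own choices, and I've dropped the age/retirement mechanic entirely, keeping only the competence-based dismissal threshold:

```python
import random

# Loose sketch of the Pluchino et al. promotion game.  LEVELS and WEIGHTS
# are my own illustrative choices, and employee ages are ignored; only the
# dismissal threshold (competence <= 4) comes from the paper.
LEVELS = [1, 5, 11, 21, 41, 81]   # members per level, top of the pyramid first
WEIGHTS = [6, 5, 4, 3, 2, 1]      # higher levels count more toward efficiency

def new_competence():
    """Competence scores are drawn uniformly from 1 to 10."""
    return random.uniform(1, 10)

def global_efficiency(company):
    """Weighted sum of competences, as a percentage of the maximum possible."""
    total = sum(w * sum(level) for w, level in zip(WEIGHTS, company))
    best = sum(w * 10 * n for w, n in zip(WEIGHTS, LEVELS))
    return 100 * total / best

def step(company, strategy="random", peter=True):
    # Dismiss the incompetent (competence <= 4) at every level...
    for level in company:
        level[:] = [c for c in level if c > 4]
    # ...then refill vacancies top-down by promoting from the level below.
    for i, (level, size) in enumerate(zip(company, LEVELS)):
        below = company[i + 1] if i + 1 < len(company) else None
        while len(level) < size:
            if below:
                if strategy == "best":
                    pick = max(range(len(below)), key=below.__getitem__)
                elif strategy == "worst":
                    pick = min(range(len(below)), key=below.__getitem__)
                else:
                    pick = random.randrange(len(below))
                promoted = below.pop(pick)
                # Peter principle: competence at the new level is unrelated
                # to competence at the old one.
                level.append(new_competence() if peter else promoted)
            else:
                level.append(new_competence())  # no one to promote: hire fresh

company = [[new_competence() for _ in range(n)] for n in LEVELS]
for _ in range(100):
    step(company, strategy="random", peter=True)
print(round(global_efficiency(company), 1))
```

Swapping the `strategy` and `peter` arguments and averaging the resulting efficiencies over many runs is the essence of the comparison the authors ran.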

Figures were taken from the article, the reference for which is:
Pluchino, A., Rapisarda, A., & Garofalo, C. (2010). The Peter principle revisited: A computational study. Physica A: Statistical Mechanics and its Applications, 389(3), 467-472. DOI: 10.1016/j.physa.2009.09.045

Sunday, October 17, 2010

Sunday Comics

I think maybe they forgot to define all quantities.  (If you were curious, the Beer-Lambert law relates to the absorption of light by a solution and is the basis of spectrophotometry... though I don't think I could write out the actual equation from memory.  Guess I'll just Google it.)
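For the record, the standard form of the Beer-Lambert law is:

```latex
A = \varepsilon \ell c
```

where $A$ is the absorbance (unitless), $\varepsilon$ is the molar absorptivity of the dissolved substance, $\ell$ is the path length of the light through the solution, and $c$ is the concentration.  Since absorbance can be measured and $\varepsilon$ and $\ell$ are known, spectrophotometry lets you solve for the concentration.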

Saturday, October 16, 2010

Same story as the last post, but with a twist

So, apparently a new article is out touting the benefits of exercise for preventing Alzheimer's and cognitive decline.  Basically, it's a correlational study in which the researchers interviewed people about how much they walked per week and tracked them over time to measure how much their brains did or did not shrink with age (yes, our brains shrink with age, and this shrinkage may, on average, reflect our cognitive ability, or loss thereof).  They also tested the subjects for cognitive impairment 13 years after the study began (a much better measure than brain shrinkage, in my opinion).  As it turns out, walking 72 blocks per week (about 7 miles) was correlated with less brain shrinkage and less cognitive impairment.  The authors also reported that walking more than 72 blocks did not seem to confer any additional benefit.  As Dennis Fortier over at the Brain Today blog points out, the good news in this story is that you may not have to go to the gym, run marathons, or take up some other grueling workout routine to keep your brain healthy; a simple one-mile walk every day should do it.
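The arithmetic behind that last line, assuming the roughly ten-blocks-per-mile conversion implied by the article's "72 blocks = about 7 miles":

```python
# Quick check of the walking figures; the 10-blocks-per-mile conversion is
# an assumption inferred from the article's "72 blocks = about 7 miles".
blocks_per_week = 72
miles_per_week = blocks_per_week / 10      # about 7 miles per week
miles_per_day = miles_per_week / 7
print(round(miles_per_day, 2))             # about 1 mile per day
```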

Wednesday, October 13, 2010

Actually, a pretty good Op-ed on "preventing Alzheimer's"

CNN has a piece about what we can all do to reduce the pace and severity of cognitive decline.  Unlike most articles one finds on the web that claim some preventative measure for Alzheimer's but are really just touting some supplement with little to no evidence of any real benefit, this article offers some pretty good advice.  Though you may be surprised at what that advice is.  It is not a supplement or anything so simple as a pill or a new antioxidant-filled fruit, because the bottom line is that Alzheimer's disease is a complex disease that results from several genetic and environmental factors, many of which remain unknown.  So what can you possibly do to combat such a complex disease?  Keep your heart healthy.  That's right.  I have written before that the number one thing you can do for your brain is exercise regularly (healthy body, healthy mind), and most of the other advice in the article follows along these lines: don't smoke, try to keep stress and blood pressure low (i.e. positive attitude and lots of friends and social networks), and try to eat a diet that will keep your circulatory system healthy (again, keeping blood pressure and LDL cholesterol low).  I know it's not nearly as appealing as taking a pill every morning, but if you have a history of Alzheimer's or dementia in your family, you may want to consider taking a morning jog, and a good hard look at what you eat, instead.  (And get some of your friends and family to join you.)

Wednesday, October 6, 2010

The Ig-Nobel Prize for Physiology and Medicine

While this is the time of year for the Nobel Prizes (arguably the highest awards in science), it is also the time of year for the Ig-Nobel prizes, which, according to the website, "are intended to celebrate the unusual, honor the imaginative — and spur people's interest in science, medicine, and technology."  So, while the King of Sweden may be honoring Robert Edwards for his research leading to in vitro fertilization (IVF), the Ig-Nobels are honoring researchers in the Netherlands who discovered that symptoms of an asthma attack can be alleviated (sort of) by hopping on the nearest roller coaster.  While it is funny to think about researchers in white coats buckling people into a roller coaster and then surveying them on their symptoms, there is a good scientific rationale behind the study...

Asthma is caused when the bronchial tubes become inflamed or swollen, constricting the airway and making it hard to breathe.  For a long time, we have known that a hormone called epinephrine can reduce this inflammation and open the airway, which is why it is a common ingredient in over-the-counter asthma inhalers.  Of course, epinephrine is known by another name: adrenaline, which gets secreted by the adrenal glands in high-stress situations, like riding a roller coaster or bungee jumping.  This study didn't look at levels of adrenaline, however; rather, the researchers measured the lung function of the participants (using a spirometer to measure the volume of air that the subjects could inhale and exhale) and the subjects' self-reported descriptions of how bad they thought their symptoms were.  As it turns out, there was no difference in lung function before and after riding the roller coaster, BUT the subjects, on average, reported feeling like they could breathe better after riding the loop-dee-loop than they could before.  What this tells us is that it wasn't the adrenaline, or even any real improvement in lung function, that made the subjects feel better, and yet they felt better.  So rather than actually alleviating the symptoms, riding the roller coaster seems to have changed how the subjects perceived their symptoms.  A minor distinction, perhaps; after all, if you feel better, does it matter whether or not you actually are better?  One might argue that having trouble breathing while being unaware of that trouble is a bad thing, but I will leave that up to you, as well as any questions about how changes in mood might bring about these changes in perception (endorphins immediately come to mind, but then, there could be so many other things).

Rietveld, S., & van Beest, I. (2007). Rollercoaster asthma: When positive emotional stress interferes with dyspnea perception. Behaviour Research and Therapy, 45(5), 977-987. DOI: 10.1016/j.brat.2006.07.009

Friday, October 1, 2010

Holy Crap! A habitable planet! Besides this one!
I guess I don't have to worry about global warming anymore.  We can just destroy this planet and move on... like interstellar locusts.  Of course, we'll have to figure out a way to travel the 20 light-years it takes to get there (if we could travel at a constant 600 miles per hour, it would take us about 22 MILLION years).
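Here's the back-of-the-envelope check on that travel time (600 mph is roughly airliner speed; the miles-per-light-year constant is the standard figure):

```python
# Travel time to a planet 20 light-years away at a constant 600 mph.
MILES_PER_LIGHT_YEAR = 5.879e12   # standard conversion, ~5.88 trillion miles
distance_miles = 20 * MILES_PER_LIGHT_YEAR
hours = distance_miles / 600
years = hours / (24 * 365.25)
print(f"{years:,.0f} years")      # on the order of 22 million years
```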