Tuesday, December 28, 2010
A bit of advice from a fellow Memphian...
http://www.the-scientist.com/news/display/57895/
If you click on the above link, it will take you to an opinion piece by Douglas Green, a researcher at St. Jude Children's Research Hospital here in Memphis, TN. The point of the piece is to provide some advice for those of us just starting out in science and looking to become successful, which I take to mean: get lots of papers and grants, which are the currency that can be used to purchase a PI position at a college or university of good standing. (PI, by the way, stands for Principal Investigator, but, for all intents and purposes, it usually means tenure-track, or tenured, faculty.) I agree with Green on several points. I think that being passionately curious is a great driving force that can keep you motivated regardless of the many setbacks one too often faces in the process of scientific investigation. However, this passion can also make it that much more disheartening if your grant proposal fails to convince your peers that what you so ardently want to know is something the rest of the world should want to know as well. It is here that Green boils down what he thinks is the essence of academic success, which appears to be, to paraphrase: "wow me." Or, rather, "wow us," "us" being the members of the study section reviewing your grants, or the fellow scientists selected to review your papers and determine whether they are worthy of publication. I think this is a wonderful sentiment, and something that I believe we all try to do in coming up with original research ideas. Most of the scientists I know hope that their ideas will bring something completely new to the table, or that they will someday change the way people think about a particular idea in their field, BUT, I also think this advice is too simplistic to offer much substantive guidance for burgeoning scientists. The reason for my dissent is simply that "wowing" your audience of scientific peers is a somewhat limited goal. Not only is it poorly defined (some ideas are truly great, but may be seen as too risky), but there also seem to be numerous ways to garner such approbation from scientific peers, yet Green provides little road map for how to get there, nor does he address the roadblocks one might find along the way. He diminishes "grantsmanship" in favor of astonishing or important ideas, and, while I agree with him that a really great idea would strike me as more favorable than a flawlessly put-together grant for a lesser idea, grantsmanship (or salesmanship) can definitely mean the difference if your proposal floats dangerously close to the cutoff line. Similarly, dumb luck all too often plays a role in one's success in science. First, there is the fact that many important discoveries come from unforeseen results in sometimes unrelated fields of research (Thermus aquaticus and Taq polymerase, CFC refrigerants and Teflon, Staphylococcus and penicillin, etc.), and thus those avenues can only be identified as groundbreaking after the fact. Even if we leave serendipity aside, consider how important luck can be just in the sense of relying on fellow human beings for funding and for approval. If the political climate favors fiscal conservatism, then public funding for science will be scarce, and many very good ideas will fail to get funded, regardless of how "wowing" they may be.
Conversely, mediocre ideas can get funded or accepted in important journals simply because a particular field is getting a lot of attention in the media, as when whole issues of Science and Nature get devoted to something like "swine flu" and any paper that happens to be ready for submission that month gets published. Often the fate of one's science can rest less on its merit and more on a reviewer's mood, how much time and attention they have to give, how open they are to contradictory ideas, or how well they can sell your idea to the other scientists on the panel. As scientists, or perhaps as academics, we like to believe that we exist in a true meritocracy, where there are no corporate politics, no game playing or salesmanship, and certainly nothing so fickle as chance. We would like to believe that if you have great ideas you will be rewarded, that if you work hard and support your ideas through grants and publications you will be rewarded, and that if your work truly impacts the field you will be rewarded. And while this is true to some extent, an academic career is still a human endeavor, and like all human endeavors, an ability to play politics, an ability to be a good salesperson, and a bit of dumb luck are all likely going to be essential supplements to hard work and ingenuity if one hopes to be truly successful.
Sunday, December 26, 2010
A hectic time of year...
So, now that I have a few days "off" around the holidays, it occurs to me that I have been neglecting the blog, and that maybe I should get to writing. The good news is, a lot has been going on in the past month or so, so I have a lot to post about, like the Society for Neuroscience conference and a couple of very interesting lectures I attended on neuroethics and Alzheimer's disease. Now that I have a little bit of time, these and other posts will be forthcoming... in the meanwhile, here is some of the online content for the book I am currently reading: Sleights of Mind: What the Neuroscience of Magic Reveals About Our Everyday Deceptions. So far, the book is a very good read, with lots of examples of illusions that take advantage of weaknesses in human perception. For example, if you go to the website, you can see numerous examples of illusions, like the ones in the following video, which take advantage of our limited ability to pay attention to more than one thing at a time. If you watch the video below, you will see a magician playing a different version of three-card monte, or the shell game, with you. He begins by placing a green ball under a clear glass and moving it around with two other glasses that are empty. Of course, we focus intently on the glass with the ball and track its position as it is moved around because we are expecting, as in a normal version of this game, that he is somehow going to make the ball disappear. Since we are focusing all of our attention on the one glass, we are not really able to pay attention to the other two, which allows for some sleight of hand, and all of a sudden it appears as if another ball has magically appeared in each of the other two glasses. Psychologists call this inattentional blindness, a consequence of our attentional spotlight. Outside of the spotlight, we think we are paying attention, but really we are not, and this makes things that are placed in our midst seem to have appeared by magic even though they have not.
A paper in 1999 by Simons and Chabris (pdf) demonstrated this principle quite clearly by presenting the following video to a group of subjects. They asked the subjects to pay attention and count how many times the ball is passed amongst the team members wearing white jerseys. Go ahead and try it...
If you watched the video to the end, you may have fallen for the same illusion that most people do when taking this test... That is, not being able to see someone in a gorilla suit walk directly in front of the camera. Now, if most people miss that, when it is right in front of them, imagine what a magician can do when they really try to sneak something by you.
Sunday, December 19, 2010
The Truth About Santa Claus...
I'm sorry, kids, but there comes a time in all of our lives when we are old enough to understand the truth about Santa Claus. You knew this day would come; you brought it on yourself, really, as you started to ask the questions that demonstrated you were growing up and beginning to think critically: "How can one man visit so many homes in just a single night?" you asked. "Even if he travels from east to west to take full advantage of time zones, it's just not possible!" And of course, we told you it was magic, but you only bought that for a little while, hesitant to expose the lies and possibly miss out on the next year's presents. But no more. It's time for you to know the truth: there is no magic. In fact, there is a very simple and logical explanation for how all those presents end up under all those Christmas trees... Santa uses science. That's right, apparently in the off-season, the elves, much like workers at Google, are given time to work on whatever projects they want to. The result of this innovative management style has been that, for the past hundred years or so, North Pole Industries Inc., LLC. has developed technology so advanced that we are only now beginning to understand it. At least, that's the claim of author Gregory Mone in his new book, The Truth About Santa: Wormholes, Robots, and What Really Happens on Christmas Eve.
According to Mone, our view of Santa has long been distorted by Arthur C. Clarke's third "law," which states: "Any sufficiently advanced technology is indistinguishable from magic." For example, Santa is able to travel to so many homes in one night by using wormholes and other means of bending spacetime, allowing him to travel around freely while, to us, it seems that time is standing still. Also, if you've ever wondered why you could never catch a glimpse of the jolly old elf no matter how late you stayed up, it's because Santa's suit possesses cloaking technology, making him all but invisible. And no branch of science appears to be off limits. Wonder why lumps of coal stopped making appearances in "naughty" children's stockings? Because clearly the elves have been reading up on their psychology research and have come to the realization that punishments are much less effective than positive reinforcement. And the list goes on. For more of Santa's gadgetry, check out the book, or this brief review and excerpt over at NPR, or here at Discover Magazine.
Saturday, December 11, 2010
Does watching too much TV rot your brain?
Well, not really, but a study brought to my attention by the newly designed Barking up the wrong tree blog suggests that watching too much TV is correlated with increased anxiety and decreased life satisfaction. Of course, maybe watching TV is soothing, or helps people to forget how unhappy they are, so unhappy or anxious people are simply more likely to watch too much TV.
Tuesday, December 7, 2010
Currently writing, just not here....
So one of the things you have to do as an academic is write grant proposals. Usually, you get all of your data lined up, plan things out, and spend a few weeks, or maybe even a few months writing up the proposal (and then wait 6 months to a year to find out how you did). Of course, sometimes, your boss tells you about a funding opportunity 5 days before the deadline and then asks you to write a proposal... from scratch. This is what I have been doing over the past 4 days... that, and taking a "break" to run a half marathon. Anyway, I sort of feel like this...
However, I should be back to normal and blogging again soon.
Monday, November 29, 2010
Brain development and football.
A few days ago I posted about decision making in football, and apparently I wasn't the only brain blogger thinking about the pigskin over the holiday weekend. Jared Tanner, over at BrainBlogger, put forward the hypothesis that one of the reasons most college teams don't start freshman quarterbacks has to do with their underdeveloped prefrontal cortices, which leaves them less able to make good split-second decisions. It is an interesting idea, and there is some basis to think that the brain of an 18-year-old would look a bit different from a 22-year-old's. Still, numerous experiments would need to be done just to demonstrate that junior or senior quarterbacks really perform that much better than freshmen or sophomores, and that any difference in performance is not related to experience with the team, the coaches, the types of plays being run, etc. After all, my alma mater, Penn State, started a freshman QB named Rob Bolden this year, and he played pretty well, up until he got a head injury... which would be another variable that would have to be controlled for in any study of the brains of football players.
Sunday, November 28, 2010
Thanksgiving Re-post
Since some people may not think a football related post is "thanksgivingy" enough, here is a re-posting from last year: Does turkey on thanksgiving really make you sleepy?
Since Turkey Day is around the corner, I thought I would bring up the very popular myth that tryptophan in turkey is what makes us all feel groggy on Thanksgiving. In an earlier post, I talked about how the amino acid tryptophan gets converted into serotonin, and then melatonin. Melatonin, as you may or may not know, is the "sleep hormone." It is secreted by the pineal gland to help regulate our sleep/wake cycles, which follow a circadian rhythm of about 24-25 hours. During the day, when it is bright and sunny, we feel awake; then, as the day turns into night, we start producing more melatonin, and we get sleepy. Considering this, it's not too hard to see why tryptophan became the scapegoat for our Thanksgiving day sleepiness, but the truth is that tryptophan, and really turkey in general, has gotten a bad rap. First, tryptophan is a fairly prevalent amino acid, and there is actually plenty of it in most of the protein-containing foods that we eat. Furthermore, turkey does NOT contain a higher level of tryptophan than most other common meats, fish, and poultry. For example, per 200-calorie serving, duck, pork, chicken, soy, sunflower seeds, several types of fish, and turkey all have about 440-450 mg of tryptophan, with turkey being the lowest in the group. That being said, even if turkey did have significantly more tryptophan than other meats, it is still questionable whether normally consumed levels of tryptophan can make you sleepy. While at first glance the research seems to back the idea that tryptophan has sedative properties, these studies have used very large quantities to test for these effects. For example, one study from 1975 suggested that consuming 5 grams of tryptophan (so, about 11 servings of turkey) did increase self-reported drowsiness, and a study conducted in 1989 found that a dose of 1.2 grams of tryptophan did NOT increase measures of drowsiness, but a dose of 2.4 grams did. These studies suggest that you would have to eat a lot of turkey (like, over a pound and a half) to get an effective dose. So, while it is possible that you may eat that much turkey on our most hallowed of gluttonous holidays, it is more likely that Thanksgiving day drowsiness is the result of a coming together of many factors, a perfect storm, if you will, of:
1. lots of food (which diverts blood flow to the digestive tract),
2. carbohydrate loading, where much of the food is carbohydrate-heavy stuffing and sweet foods like cranberry sauce, sweet potatoes, and desserts (which can cause an overproduction of insulin, resulting in low blood sugar, and thus sleepiness, later on),
3. and then, of course, there are usually a couple of alcoholic beverages involved (with obvious sleep-inducing effects).
Add all of that up with being in a nice, warm home, on a comfy couch, with football or parades or a fire flickering in the background, and what you have is a recipe for a nap. I'm kinda sleepy just thinking about it.
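If you want to check the serving math above, here is a quick back-of-the-envelope sketch. The ~450 mg of tryptophan per 200-calorie serving comes from the figures quoted earlier; the ~140 g weight of such a serving is my own rough assumption, not something from the studies.

```python
# Rough sketch: how much turkey would it take to reach the tryptophan doses
# used in the sedation studies mentioned above?
# Assumptions: ~450 mg tryptophan per 200-calorie serving of turkey,
# and ~140 g of meat per such serving (approximate, for illustration only).

TRP_PER_SERVING_G = 0.45   # grams of tryptophan per 200-calorie serving
GRAMS_PER_SERVING = 140    # approximate weight of a 200-calorie serving
GRAMS_PER_POUND = 454

for dose_g in (2.4, 5.0):  # effective doses reported in the 1989 and 1975 studies
    servings = dose_g / TRP_PER_SERVING_G
    pounds = servings * GRAMS_PER_SERVING / GRAMS_PER_POUND
    print(f"{dose_g} g tryptophan ~ {servings:.1f} servings (~{pounds:.1f} lbs of turkey)")

# Roughly: 2.4 g ~ 5.3 servings (~1.6 lbs); 5.0 g ~ 11.1 servings (~3.4 lbs)
```

Which is where the "about 11 servings" and "over a pound and a half" figures come from.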
Have a Happy Thanksgiving!
Thursday, November 25, 2010
Thanksgiving and Football: Why you should always go for it on 4th and short
Today being Thanksgiving, it's pretty reasonable to assume (if you live in the U.S.) that you will likely be sitting down to a large meal involving lots of turkey, stuffing, and cranberry sauce. It is also pretty likely that somewhere in the house, football games will be on the television. Which brings us to one of the quintessential questions in football: It's 4th down, your team is on the opposing team's 30-yard line, and they have only one yard to go to get a first down. Should they go for it? Most people would probably say no... that they should try for a field goal and at least get the 3 points. But most people would be wrong. At least according to a study in the Journal of Political Economy (pdf), which suggests that the payoff for "going for it" is more than twice that of trying for a field goal.
To completely oversimplify the study, economist David Romer determined the payoff of each decision (either to kick or to go for it) by looking at thousands of NFL plays and calculating the average costs and benefits of each decision depending on where the team was positioned on the field. In this case, the benefit would be the likelihood of scoring a touchdown (valued at roughly 7 points*) or of scoring a field goal (valued at 3 points). So, if teams that decide to kick when they are on their opponents' 30-yard line make the field goal an average of 33% of the time, then the benefit of kicking is assigned a point value of 1 (since a field goal is worth 3 points, and 33 percent of 3 points is 1 point). Since teams that have only one yard to go when they are on the 30-yard line convert for a first down 64% of the time, and teams that are inside the 30-yard line score a touchdown about 40% of the time, the benefit of going for a first down is assigned a value of about 1.8 (0.64 x 0.40 ≈ 0.26, or roughly a 26% chance of scoring a touchdown by going for it on 4th and 1, and 0.26 x 7 points ≈ 1.8 points). This means that "going for it" should result in scoring almost twice as many points as kicking, and this difference becomes even more exaggerated when we consider the costs. In this case, a failed attempt either way results in giving the other team the ball on their own 30-yard line. Scoring the field goal, or eventually a touchdown, will result in a kickoff, which, on average, gives the other team the ball on their 27-yard line. This means that, no matter what your team does in this situation, it is ultimately going to give the ball to the other team at about the 30-yard line, which gives the other team a scoring chance worth an average of 0.62 points. Thus -0.62 points is the cost associated with either kicking or going for it. This means that the net benefit of going for the first down is about 1.18 points, while the net benefit of kicking the field goal is 0.38 points. So, while there is less of a chance that "going for it" will result in a touchdown than that your kicker will make the 40-yard field goal, on average your team will get more than THREE TIMES as much benefit if it goes for it every time it has 4th and 1 on the 30 rather than kicking every time it is in this situation.
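For anyone who wants to check that arithmetic, here is a minimal sketch of the expected-value comparison, using only the rounded figures quoted above (the 33%, 64%, and 40% probabilities and the 0.62-point field-position value are the post's summary numbers, not a recomputation from Romer's data):

```python
# Expected-value comparison for 4th and 1 on the opponent's 30-yard line,
# using the rounded numbers quoted in the post (not Romer's full model).

TD_POINTS = 7          # touchdown, with the extra point folded in (see footnote below)
FG_POINTS = 3          # field goal
FIELD_POS_COST = 0.62  # expected points the opponent gains with the ball near their 30

p_fg_make = 0.33       # ~40-yard field goal success rate
p_convert = 0.64       # 4th-and-1 conversion rate
p_td_after = 0.40      # chance of eventually scoring a TD from inside the 30

benefit_kick = p_fg_make * FG_POINTS             # ~1.0
benefit_go = p_convert * p_td_after * TD_POINTS  # ~1.8

net_kick = benefit_kick - FIELD_POS_COST         # ~0.38
net_go = benefit_go - FIELD_POS_COST             # ~1.18

print(f"Kick:      benefit {benefit_kick:.2f}, net {net_kick:.2f}")
print(f"Go for it: benefit {benefit_go:.2f}, net {net_go:.2f}")
print(f"Advantage of going for it: {net_go / net_kick:.1f}x")
```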
So, why do coaches so rarely go for the first down? It may be that, in these instances, coaches (and likely fans) fail to take into account the difference in value between touchdowns and field goals, seeing all types of "score" as roughly equal even though touchdowns are more than twice as valuable as field goals. Additionally, the costs are also poorly estimated. For example, turning the ball over on downs seems to carry a greater cost than turning it over in the form of a kickoff after scoring, even though the point values of these in our hypothetical are roughly the same. These ideas tie into the explanation that Romer offers in the paper, suggesting that coaches may be succumbing to "loss aversion" type thinking. Loss aversion is a phenomenon in psychology where people generally avoid a more rewarding choice when the lesser of the two options seems more like a sure thing. For example, when offered a fifty percent chance at a $100,000 prize or a 100 percent chance at a $30,000 prize, most people choose the $30,000 prize because it is a sure thing. However, the first option has an expected value of $50,000, compared to only $30,000 for the second. So, if you are only offered this choice once, it might make sense to take the sure thing, BUT, if, as in the game of football, you will face a choice like this many times in a game or over the course of a season, it makes more sense to choose the option with the higher average outcome. And, of course, most people do realize which decision is the better one if you take away the "sure thing" aspect of one of the options. For example, if you ask the same group of people whether they would go for a 5% chance at $100,000 or a 10% chance at $30,000, more people choose the 5% chance at $100,000, even though the ratio of the expected values is the same as in the previous situation. (If you want to know more about loss aversion, I suggest this post over at Jonah Lehrer's blog.)
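As a toy illustration of that framing effect, here are the expected values for both versions of the lottery choice; this is just the arithmetic from the paragraph above, nothing from the paper itself:

```python
# Expected values for the two framings of the lottery example above.
lotteries = {
    "sure-thing framing": [(0.50, 100_000), (1.00, 30_000)],
    "no sure thing": [(0.05, 100_000), (0.10, 30_000)],
}

for name, options in lotteries.items():
    expected_values = [p * prize for p, prize in options]
    print(name, "->", [f"${ev:,.0f}" for ev in expected_values])

# sure-thing framing -> ['$50,000', '$30,000']   (same 5:3 ratio)
# no sure thing      -> ['$5,000', '$3,000']     (same 5:3 ratio)
```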
In the case of football, most coaches (and fans) see a field goal as much more of a "sure thing" because the probability of a successful try is higher than the probability of ultimately scoring a touchdown, particularly when you are less than 30 yards away from the goal line. BUT this neglects the point difference between a field goal (3) and a touchdown (7), and the costs associated with the opposing team's resulting field position. To provide another example, Romer presents the situation of having 4th down and goal on the 2-yard line, where a field goal really is a sure thing, but the chances of scoring a touchdown are about 3 in 7. Here, the benefit is about the same (an average of 3 points per field goal attempt, and an average of 3 points per touchdown attempt). However, the cost for the two is NOT the same. Since the field goal is all but guaranteed, the opposing team will then, on average, get the ball on their 27-yard line after the kickoff, whereas if you go for the touchdown, there is a 4/7 chance that you will fail and leave the other team with the ball on their own 2-yard line. When you calculate the cost and benefit of each of those field positions, the kickoff gives the opposing team an expected 0.62 points, thus costing your team 0.62 points on average, leaving a net benefit for a field goal try of 2.38. Leaving the other team on their own 2-yard line, however, puts them at a serious disadvantage, where they are more likely to turn the ball back over or get sacked in the end zone, giving your team 2 points for a safety. The average value of this position is therefore actually negative (-1.5), and thus the benefit of going for the touchdown is (3/7 x 7 =) 3, and the "cost" is (4/7 x -1.5 = -0.857), which, subtracting the cost from the benefit, yields a net benefit of 3.857 for going for the touchdown (versus a net benefit of only 2.38 if you kick the field goal).
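And here is the same kind of sketch for the goal-line scenario, again using only the post's simplified numbers (the -1.5 value for pinning the opponent at their own 2 is the figure quoted above, and ignoring the kickoff after a successful touchdown follows the post's simplification):

```python
# Expected-value comparison for 4th and goal from the 2-yard line,
# mirroring the simplified numbers in the post.

TD_POINTS = 7
FG_POINTS = 3
KICKOFF_COST = 0.62       # opponent's expected points after a kickoff to ~their 27
PINNED_AT_2_VALUE = -1.5  # opponent's expected points starting at their own 2

p_td = 3 / 7              # chance of punching in the touchdown

net_fg = 1.0 * FG_POINTS - KICKOFF_COST                     # ~2.38 (FG treated as a sure thing)
net_go = p_td * TD_POINTS - (1 - p_td) * PINNED_AT_2_VALUE  # ~3.86

print(f"Kick the field goal:  net ~{net_fg:.2f} points")
print(f"Go for the touchdown: net ~{net_go:.2f} points")
```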
AND this calculus doesn't just apply to being on the 30-yard line or the 2-yard line. According to Romer's extrapolation, no matter where you are on the field, except for maybe behind your own 15-yard line, it makes more sense to go for the first down on 4th and short (less than 2-3 yards to go) than it does to punt or to kick a field goal. Romer estimates that teams that adopt a strategy of "going for it" in these situations would be about 5% more likely to win each game, and would win at least one more game per season than they otherwise would in 3 out of every 4 seasons. So, maybe Bill Belichick, who has a reputation for "going for it" on 4th down, has done his math, or maybe he has intuitively stumbled onto something. Either way, his career coaching record of almost twice as many wins as losses, 3 Super Bowl rings, and 4 AFC championships supports the idea that going for the first down conversion on 4th and short might just give your team the winning edge. Just something to keep in mind while you are watching the games later this afternoon. Happy Thanksgiving!
*Since point after touchdown ("extra point") kicks are successful 98.5% of the time, the actual value of a touchdown used in the study was 6.985, not 7, but for us, to keep the math simple, I'll just use the full 7.
__________________________________________________________________________________
And if you want to learn more about what various fields of research can tell you about your favorite sports, check out some of my earlier posts:
How the flash lag illusion may have cost Armando Galarraga his perfect game in baseball.
How magnetic necklaces and hologram bracelets may actually affect athletic performance.
How watching your favorite team win (or lose) might affect your testosterone levels.
Do you run a higher risk of getting a concussion playing boys' football or girls' soccer?
Where the students sit in the stadium could enhance your school's home field advantage.
Romer, D. (2006). Do Firms Maximize? Evidence from Professional Football. Journal of Political Economy, 114(2), 340-365. DOI: 10.1086/501171
Tuesday, November 23, 2010
The Top 7 research papers in Neuroscience
The Faculty of 1000 have come out with their rankings (for the year, I guess?), and in the category of neuroscience, you can find summaries of the top 7 here.
Saturday, November 20, 2010
Where art meets neuroscience
Last week's issue of the journal Nature had such a striking cover that I couldn't help but pick one up off the newsstand, despite the fact that I can view the articles electronically at work. The cover revealed that the focus of the issue would be schizophrenia, a disease whose causes remain poorly understood in the scientific community, and whose symptoms are even less understood by the mainstream. Many people confuse schizophrenia with multiple personality disorder, despite the fact that the most prominent symptoms of schizophrenia are paranoid delusions and auditory hallucinations, not multiple personalities. Also, most people are generally unaware that schizophrenia typically manifests itself during late adolescence or early adulthood, which means that many who have schizophrenia can feel perfectly normal throughout their teens and even their twenties before descending into the throes of this terrible disease. There are, however, some promising discoveries being made, and, hopefully, they will lead to progress in treatments or even prevention of the disease, which is estimated to affect nearly 1 percent of the world's population. At the Nature website, you can check out many of the articles from the issue online, which do a good job of summarizing some of the progress and many of the obstacles to understanding schizophrenia. Of course, I started all of this by talking about art and neuroscience, and the cover of the magazine, the main portion of which is shown above. The painting, which is credited to Rodger Casier, is an example taken from the NARSAD Artworks program, which provides "museum quality art by artists whose lives share or have shared the bond of mental illness". The program is an interesting one, not only raising awareness of mental illness by showcasing the artwork of those who have suffered or still suffer from some form of mental illness, but also raising money for mental health research. I will have to check out the site a little more to see what's available (even just to look at), but in the meanwhile, with the holidays coming up, it may be a good place to go if you need to get some greeting cards (like the ones pictured below).
Monday, November 15, 2010
Cholesterol isn't all bad...
We've actually known this for quite some time: cholesterol does a lot of important things in cells and in your body; it just gets a bad rap because when a lot of it gets carried around by low-density lipoproteins, it can clog your arteries. But cholesterol does lots of good things too. It improves the integrity of the cell membranes in all of your cells, and in many of the organelles within those cells. It is also the molecule from which all of the steroid hormones are made (including estrogens and testosterone). Some recent studies have also shown how critical cholesterol is for the development of the brain (an organ that is very rich in cholesterol) and the normal functioning of neurons. The first study mentioned here shows how oxysterol (a metabolite, or breakdown product, of cholesterol) seems to be important for the production of midbrain dopaminergic neurons (the type of cells that are lost in Parkinson's disease). The other study shows how cholesterol is important to normal brain function and the ability of neurons to communicate across synapses. Some other important things that cholesterol does for us? Well, it is necessary for making vitamin D, it can have antioxidant properties (thus helping to prevent cell damage and cell death), and, as a critical component of bile acids, it helps us digest fat and fat-soluble vitamins. Of course, this doesn't mean that you should go out and start eating bacon and eggs for every meal. Chances are that you are already getting plenty of cholesterol from your diet, and eating too much cholesterol can still be bad for your cardiovascular health; but the cholesterol that is made by the cells in your brain gets put to good use (as does the cholesterol made in most of your cells). It's just when you have to transport the stuff in your blood that it becomes a problem. Also, there have been some studies suggesting that decreasing cholesterol synthesis with statins can reduce the risk of developing Alzheimer's disease, other studies suggesting that high levels of cholesterol are correlated with a higher incidence of certain cancers, and even a couple of new studies suggesting that cholesterol carried by high-density lipoproteins (HDLs, or "good cholesterol") may be harmful if you have conditions such as diabetes, arthritis, or kidney dysfunction. So, I guess the point is that cholesterol is a complicated molecule with lots of functions in the body, some good, some bad, and in the end we have to weigh what we know about the good and the bad to determine how we treat various diseases. Given the effectiveness of lowering cholesterol in treating and preventing heart disease, and the prevalence of heart disease (it being the number one killer in the U.S.), I think we will still have to keep the general mindset that too much cholesterol is bad, but we don't want to completely eliminate it, because some cholesterol can be good (at least for your brain and your cell membranes).
Sunday, November 14, 2010
Society for Neuroscience Meeting
I am currently in San Diego for the annual meeting of the Society for Neuroscience. I will hopefully be able to post some of the more interesting things I see here, but probably not in any sort of detail until after the meeting is over. In the meanwhile, there are several bloggers who will be updating regularly as the meeting goes on (like maybe they will blog about the talk given by actress Glenn Close who hopes to promote research and treatments for mental illnesses). Anyway, you can find the list of bloggers (and tweeters) for the meeting here.
Thursday, November 11, 2010
The Psychology of Climate Denialism
Probably more than anything else on this blog, I have posted about the denial of certain scientific facts (like global warming, evolution, and the safety of vaccines). Second to that, I tend to post about the psychology that underlies such disbelief, like this post, where I recommend the ultimate resource for understanding why we tend to reject certain types of data. Along these lines, I have long been curious why research and data concerning how to persuade people, or to disabuse them of these mind blocks, are not more prevalent in discussions about things like global warming and the safety of vaccines. While for most scientists the data are the data, and these facts are readily accepted as such, there is clearly a disconnect with a substantial portion of the population. For most scientists, myself included, simply repeating the facts, or shouting louder and louder, or finally name-calling in frustration are the most common recourses when we are confronted with those who flatly deny the evidence (or worse, refuse to listen to or look at the evidence, claiming instead that it has all been fabricated). Of course, hammering home the facts tends to work in lab meetings or at scientific conferences, but it doesn't seem to work at all with climate deniers, evolution deniers, flat-earthers, and so on. So how do we convince the general public (or this portion of it) that policies need to be enacted to stop global warming, or that they need to get their children vaccinated? It seems to me this problem is just as critical, if not more so, than the problems of global warming and autism themselves. Because if people don't think global warming is real, they won't support public policies for change or for more research. If people believe that the cause of autism is vaccines, they will harm others by not getting vaccinated, and, again, they may refuse to support publicly funded research to find the real causes of autism spectrum disorders. So what can be done? Well, I don't have any solid answers, but there are two interesting items I have come across recently that offer some hope. The first is the Cultural Cognition Project at Yale Law School. If you click on the link and go to the website, you can find several articles and scientific studies that have been sponsored by the program, like this one, which reviewed some of the experiments and showed that people's core beliefs are a major factor in determining how they view a particular scientific or technological issue. This effect is particularly strong when the issue requires some additional level of expertise, causing us to rely on experts to explain things to us, or to tell us how to feel about a particular issue. In these cases, the average person is much more likely to believe the "experts" they feel they can identify with on core values. This is very clearly illustrated by the fact that many will take Rush Limbaugh's opinions on global warming as fact despite his complete and utter lack of any scientific credentials. People who identify with Limbaugh's conservative political and religious values see him as more of an "expert" than scientists, whom they may see as elitist, overly liberal, or atheistic, even when the issues at hand are scientific in nature. Which, I guess, debunks the "shouting loudly and calling people stupid" method for persuading people of the veracity of scientific facts (sorry, PZ Myers).
Anyway, the second item that I found was much more directly related to climate denialism. Recently, the American Psychological Association put out a report including "studies of human responses to natural and technological disasters, efforts to encourage environmentally responsible behavior, and research on the psychosocial impacts of climate change." If you don't want to read the whole report (and I don't blame you), you can listen to an interview with a couple of the psychologists who helped to put together the report here. I don't know if this report really offers any solid answers, but it seems to do a decent job of identifying the problems we face with a public that does not accept, or does not want to accept, the consequences of its polluting lifestyle. It is encouraging to see that researchers are identifying these problems and trying to find solutions. Hopefully, this type of research and the resultant findings will gain a higher profile, and scientists and science reporters will have the tools they need to communicate more effectively with the public. And beyond that, hopefully we will see a brighter future where society acts upon factual information (backed by mounds of scientific evidence) to make our world a better, safer place for future generations.
Monday, November 8, 2010
More evidence against the "Grumpy old men" myth
A while back I posted about the idea we have in this country that the elderly are more likely to be sad, grumpy, curmudgeonly, etc. Despite this widespread belief, there are a handful of surveys suggesting that older people are generally happier than their young and middle-aged counterparts (and generally better adjusted). Of course, since these surveys always asked different people (one group above a certain age versus another group at a middle age, etc.), one could hypothesize that times were simply better so many decades ago, thus arguing that the reason older people tend to be happier is because they grew up in some idealized "Leave it to Beaver" type culture, and the younger people are not as happy, not because of their age, but simply because they have had to live and grow up in a different world. Well, a recent longitudinal study, that is, one that followed the same people as they aged, suggests that it really is age that confers feelings of well-being.
http://www.sciencedaily.com/releases/2010/10/101028113819.htm
I guess they really are your golden years.
Wednesday, November 3, 2010
Magnetic necklaces, Holographic bracelets, and Other Totems in Sports
If you have watched any of the Major League Baseball playoffs recently, you can't help but notice the twisty, braided necklaces that have become an all too popular fashion accessory for many of the players. Or maybe you have caught a glimpse of a shiny "power balance" bracelet on your favorite baseball, football, or basketball player. Of course, there is absolutely no evidence that any of these things actually have any of the amazing effects they claim (enhanced balance or stamina or overall athletic performance). But then again, pro athletes have always been a superstitious bunch. According to an article over at ESPN.com:
"SINCE THE BEGINNING of sport, athletes have looked outside themselves for an edge. In ancient Greece, Olympians sacrificed oxen to satisfy the gods. Roman gladiators entered the arena with their dominant foot first. Yogi Berra used the same Yankee Stadium shower during any winning streak.Michael Jordan wore UNC shorts under his Bulls uniform in every game for 13 years. And before Wade Boggs stepped to the plate, which he did more than 10,000 times in his 18-year career, he carved the Hebrew letters for the word chai ("life") into the dirt with his foot. And Boggs isn't Jewish."
Of course, as the article goes on to discuss, there may be some benefit to these superstitions, a la the placebo effect. Basically, the placebo effect can be described as something like succumbing to the power of suggestion, or a self-fulfilling prophecy. The idea is that if you tell a bunch of people that some experimental treatment is going to have an effect, like pain relief, then a certain number of those people are going to report feeling less pain, even if you don't give them the treatment. As the ESPN article points out, there are some research articles out there that have looked at the placebo effect as it pertains to sports, and much like in biomedical studies, one can see that the placebo effect can improve both physical and mental performance, which may explain why many athletes believe in "lucky charms" or other superstitions. Anyway, the article does a pretty good job of dealing with these aspects of the "magical bracelets" and other charms that seem to be so popular these days. And apparently, a new episode of Outside the Lines will be featuring research done at the University of Wisconsin-La Crosse that determined how effective, or rather, ineffective the "power balance" bracelets are...
PS. If you read the article over at ESPN, you will see that the author makes mention of two articles pertaining to placebo effects and superstitions on performance. However, the articles were not referenced, so I can only assume they are the ones that I linked to above (and here): one where cyclists were told that they were getting a carbohydrate supplement, and performed better than baseline, even though they only got a placebo, and another where "lucky" totems increased participants' performance on puzzles and memory games (as well as on 1 meter golf putts). While tracking those two down, I also found this one, which again used cyclists, and similarly showed that the placebo effect could improve performance, though the participants were told they were getting caffeine rather than carbohydrates. And, of course, the interesting thing about this last study was that, not only was there a placebo effect, but the effect was correlated with the amount of caffeine that the participants were told they had received (i.e. telling someone they got a little bit of caffeine made them perform a little better, telling them they got a lot of caffeine made them perform a lot better, even though neither group got any caffeine).
Clark VR, Hopkins WG, Hawley JA, & Burke LM (2000). Placebo effect of carbohydrate feedings during a 40-km cycling time trial. Medicine and Science in Sports and Exercise, 32(9), 1642-7. PMID: 10994918
Damisch L, Stoberock B, & Mussweiler T (2010). Keep your fingers crossed!: How superstition improves performance. Psychological Science, 21(7), 1014-20. PMID: 20511389
Beedie CJ, Stuart EM, Coleman DA, & Foad AJ (2006). Placebo effects of caffeine on cycling performance. Medicine and Science in Sports and Exercise, 38(12), 2159-64. PMID: 17146324
"SINCE THE BEGINNING of sport, athletes have looked outside themselves for an edge. In ancient Greece, Olympians sacrificed oxen to satisfy the gods. Roman gladiators entered the arena with their dominant foot first. Yogi Berra used the same Yankee Stadium shower during any winning streak.Michael Jordan wore UNC shorts under his Bulls uniform in every game for 13 years. And before Wade Boggs stepped to the plate, which he did more than 10,000 times in his 18-year career, he carved the Hebrew letters for the word chai ("life") into the dirt with his foot. And Boggs isn't Jewish."
Of course, as the article goes on to discuss, there may be some benefit to these superstitions, a la the placebo effect. Basically, the placebo effect can be described as something like succumbing to the power of suggestion, or a self-fulfilling prophecy. The idea is that if you tell a bunch of people that some experimental treatment is going to have an effect, like pain relief, then a certain number of those people are going to report feeling less pain, even if you don't give them the treatment. As the ESPN article points out, there are some research articles out there that have looked at the placebo effect as it pertains to sports, and much like in biomedical studies, one can see that the placebo effect can yield better performance on both physical and mental performance, and may explain why many athletes believe in "lucky charms" or other superstitions. Anyway, the article does a pretty good job of dealing with these aspects of the "magical bracelets" and other charms that seem to be so popular these days. And apparently, a new episode of Outside the Lines will be featuring research done at the University of Wisconsin, Lacrosse, that determined how effective, or rather, ineffective the "power balance" bracelets are...
PS. If you read the article over at ESPN, you will see that the author makes mention of two articles pertaining to placebo effects and superstitions on performance. However, the articles were not referenced, so I can only assume they are the ones that I linked to above (and here): one where cyclists were told that they were getting a carbohydrate supplement, and performed better than baseline, even though they only got a placebo, and another where "lucky" totems increased participants' performance on puzzles and memory games (as well as on 1 meter golf putts). While tracking those two down, I also found this one, which again used cyclists, and similarly showed that the placebo effect could improve performance, though the participants were told they were getting caffeine rather than carbohydrates. And, of course, the interesting thing about this last study was that, not only was there a placebo effect, but the effect was correlated with the amount of caffeine that the participants were told they had received (i.e. telling someone they got a little bit of caffeine made them perform a little better, telling them they got a lot of caffeine made them perform a lot better, even though neither group got any caffeine).
Clark VR, Hopkins WG, Hawley JA, & Burke LM (2000). Placebo effect of carbohydrate feedings during a 40-km cycling time trial. Medicine and science in sports and exercise, 32 (9), 1642-7 PMID: 10994918
Damisch L, Stoberock B, & Mussweiler T (2010). Keep your fingers crossed!: how superstition improves performance. Psychological science : a journal of the American Psychological Society / APS, 21 (7), 1014-20 PMID: 20511389
Beedie CJ, Stuart EM, Coleman DA, & Foad AJ (2006). Placebo effects of caffeine on cycling performance. Medicine and science in sports and exercise, 38 (12), 2159-64 PMID: 17146324
Monday, November 1, 2010
More Halloween Stuff
So, I know it's a day late, but I came across this post over at Discover magazine listing some of the more unusual, spooky reports to be found through the National Center for Biotechnology Information (NCBI) literature search engine, PubMed. Like the "haunted" scrotum pictured to the right...
Saturday, October 30, 2010
Real Life "Lie To Me"
You may be familiar with the new-ish show on Fox called "Lie To Me", where the incomparable Tim Roth plays a psychologist who can detect when people are lying (and numerous other emotions) through revealing facial expressions he calls "micro-expressions" or, my favorite, "deception leakage". The show is based on the work of psychologist Paul Ekman, and now, in the real world, so are some airport security screening techniques (see also the coverage at the MindHacks blog).
Of course many experts question how reliable these techniques are, particularly in light of the fact that Ekman's research seems unreplicable, and since he has shied away from publishing in peer-reviewed journals in recent decades. (I'm always so disappointed when real life differs from Hollywood.) Anyway, my own take goes something like this: The principle is obviously intriguing. After all, anyone who has played poker quickly learns that their facial expressions can betray what kinds of cards they are holding. However, if you play cards a lot, you may also know that it can take some time to learn what each individual person's "tell" may be. And experienced gamblers can obviously manipulate the situation by intentionally displaying their facial tic or other betraying behavior when they want you to think they are bluffing. Like this scene in Casino Royale:
What this tells us is that any system of "deception detection" would have to rely on either an intimate knowledge of the person being interrogated (to know what their specific "tells" are) OR on a set of facial expressions and/or body movements that are common to EVERYONE when lying. Since different people have different emotional responses to telling a lie (or even to the type of lie they are telling) and since different people often have different "tells" regardless of the extent of guilt or shame they feel, it seems to me that coming up with a system that provides cues used by everyone would yield some (likely unacceptable) level of false positives (i.e. thinking that someone is lying when they aren't) and false negatives (i.e. thinking that someone is not lying even though they are). Anyway, if you read the article over at Nature you will see that the Department of Homeland Security is promising a "rigorous review" of the scientific merit of the programs they have put in place, so maybe we will get some data to support Ekman's ideas, or maybe we will just get more that debunks them.
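Just to put some rough numbers on that worry, here is a quick back-of-the-envelope sketch in Python. The sensitivity, specificity, and base rate below are completely made up for illustration (they are not figures from Ekman's work or from DHS); the point is only that a screening tool that is right most of the time can still generate mostly false alarms when actual liars are rare among the people being screened.

```python
# Hypothetical screening math: how many of the people a "deception detector"
# flags are actually lying? All numbers below are assumptions for illustration.

def screening_outcomes(n_screened, base_rate, sensitivity, specificity):
    """Return (true_positives, false_positives) for a screened population."""
    liars = n_screened * base_rate
    honest = n_screened - liars
    true_positives = liars * sensitivity          # liars correctly flagged
    false_positives = honest * (1 - specificity)  # honest people wrongly flagged
    return true_positives, false_positives

# Assume 1 in 1,000 travelers is concealing something, and that screeners
# catch 90% of liars while wrongly flagging only 5% of honest travelers.
tp, fp = screening_outcomes(n_screened=100_000, base_rate=0.001,
                            sensitivity=0.90, specificity=0.95)

print(f"Liars correctly flagged:   {tp:,.0f}")
print(f"Honest travelers flagged:  {fp:,.0f}")
print(f"Share of flags that are false alarms: {fp / (tp + fp):.1%}")
```

Under those assumed numbers, roughly 98 percent of the people who get flagged would be innocent travelers, which is exactly the kind of false positive problem a "rigorous review" would have to grapple with.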
Tuesday, October 26, 2010
Book Review: Fifty Great Myths of Popular Psychology
So you may or may not have noticed that this book has been listed under the "Currently Reading" heading for, well, forever. To be fair, that had nothing to do with the book itself, but more to do with me having to write and defend my thesis, graduate, move to a new city, and start a new post-doc. However, I am happy to announce that I have finished the book, and I must say, I cannot recommend it highly enough. When I started this blog, my intent was to post about common myths and misperceptions in neuroscience. After having numerous conversations with people who would say things like "we only use 10 percent of our brains" or "I'm more of a right-brained kind of person", I felt that someone needed to write up the research that debunks these ideas... and luckily, someone has, and done it well. While the book has a bit of an academic feel (design-wise, this seems inevitable because the publisher, Wiley, is an academic publisher), aside from the fact that all of the research is meticulously referenced, it reads like popular non-fiction. There are references to modern films, music, and even to episodes of the Simpsons. The writing style is informal, the explanations are simply written, and there is even a bit of humor running throughout. Of course, for me, the information was key, and while I wouldn't have stopped at just 50 myths, the authors did a good job of pointing out some really popular myths and debunking them clearly and eloquently, while also listing many more popular misconceptions at the end of each chapter. As I said above, I highly recommend it. And if you want an example of content from the book, you can check out an earlier post from when I first started reading it.
Sunday, October 24, 2010
Halloween Post: The "Bloody Mary" illusion
When I was a kid, a popular ghost story that we would all tell around this time of year was the story of "Bloody Mary". The story is actually very widespread here in the U.S., to the point that it has a Wikipedia page, a post on the mythbusting/fact-checking site Snopes.com, a plethora of YouTube videos devoted to the subject, and numerous mentions in movies and television shows. If, somehow, you have never heard this story, it goes like this:
In Colonial times, there was a beautiful woman named Mary Worth who found herself in the unfortunate position of being an expecting, but unwed, mother. The fact that Mary didn't seem to be bothered by her sin, and that she still seemed to capture the wandering glances of many of the men in the town, infuriated her Puritan neighbors. When Mary had her baby, the townspeople stole it away from her. Claiming that it was the spawn of Satan, they buried it alive as it flailed and screamed. The townsfolk then accused Mary of being in league with the devil and decided she must be burned as a witch. Mary was dragged to the center of town and tied to a stake as the townsfolk beat and slashed her face with the sticks that they would use to burn her. One woman held a mirror up to Mary's face, taunting her to look and see how she was no longer beautiful: she was dirty, and broken, and bloody. "Bloody Mary" she called her, and as the pyre was lit and the flames began to climb, the crowd chanted the name over and over again. Mary screamed as the flames licked her legs and her thighs, and as the acrid smell of burning flesh filled the air, the crowd became hushed. In the lull, Mary cursed the townspeople for what they had done and claimed she would visit vengeance upon them and all of their future generations, they would know the anguish they had put her through. As the flames climbed higher, the form that had been Mary Worth began to disappear, but the last words of the curse lingered in the ears of the townspeople, seemingly echoing off of the surrounding trees and buildings. Then, suddenly and without explanation, the mirror that had been held up to Mary's face shattered, slicing the hand of the woman who had initially taunted her. About a week later, the woman fell ill and died. Soon after, many of the townspeople who had taunted Mary began to meet with ill-fated deaths, all in rooms with broken mirrors. It is said that Mary still seeks vengeance to this day. All you have to do to conjure her is to light a dark room with a candle, stand in front of a mirror, and say the name "Bloody Mary" five times in succession. Her face will appear in the mirror in front of you, and if you are descended from one of the townspeople that taunted her, or if she believes that you are taunting her, she will reach through the mirror and slash your face as hers was, or break the mirror cutting you all over, or, she may even pull you into the mirror with her so that you will never be seen again...
At this point, other people would usually chime in about how they heard about a girl from the next town over who had conjured Bloody Mary and was cut all over by shattered mirror glass, or about a boy that tried it and disappeared, never to be found, etc. etc.
Of course, at some point, we've all tried it, and, of course, nothing bad happens. So, how does a story like this get started? Well, a report that was published earlier this year, describing an interesting illusion (pdf), may shed some light on the subject. The author of the paper, Giovanni Caputo, describes what he calls the "Strange Face in the Mirror Illusion", and it may be that this illusion spawned stories like this one that revolve around ghosts in the mirror. To characterize this illusion, Caputo got 50 volunteers, who had no idea what they were supposed to see, and had them stare at themselves in a mirror in a dimly lit room. At the end of a ten minute period, the volunteers were asked to write down what they saw. Two thirds of the participants reported seeing huge deformations of their own face, and nearly half reported seeing "fantastical" or "monstrous" beings. Smaller proportions reported seeing the faces of parents, or of ancestors, and some saw the faces of strangers, including old women and children. In all 50 cases, the participants reported some form of dissociative identity effect, which is to say, they felt like what they saw in the mirror was someone (or something) other than themselves. Many felt like they were being watched by the "other" in the mirror, and some reported getting scared or anxious because they felt that the face in the mirror looked angry. Caputo offers some speculations as to what might be causing these effects, but as yet, there is no complete explanation for all of the phenomena that were reported.
Likely, there are several things at play. First is the Troxler effect, which is an illusion where focusing on an object causes objects in the periphery to seemingly disappear (nicely illustrated by the following figure: stare at the + in the middle for about 20-30 seconds, and the purple dots should start to disappear, though you may still see the moving "green" dot that is the negative image your brain perceives when a purple dot disappears...)
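If you want to play with a static figure of this sort yourself, here is a rough sketch in Python (my own stand-in, not the figure the post refers to; it assumes numpy, scipy, and matplotlib are available). It draws soft-edged purple dots around a central fixation cross; fixate the cross for 20-30 seconds and the dots should start to fade from your peripheral vision.

```python
# A minimal, static Troxler-fading demo: blurred purple blobs on a gray
# background with a black fixation cross in the middle. (This is a sketch,
# not the animated "lilac chaser" version of the illusion.)

import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

size = 600
img = np.full((size, size, 3), 0.75)            # uniform gray background

# Place 12 dots on a circle around the center, then blur them heavily so
# their edges are indistinct (soft edges make the fading much stronger).
dots = np.zeros((size, size))
center, radius = size // 2, size // 3
for angle in np.linspace(0, 2 * np.pi, 12, endpoint=False):
    y = int(center + radius * np.sin(angle))
    x = int(center + radius * np.cos(angle))
    dots[y - 10:y + 10, x - 10:x + 10] = 1.0
dots = gaussian_filter(dots, sigma=12)
dots /= dots.max()

purple = np.array([0.7, 0.4, 0.9])
img = img * (1 - dots[..., None]) + purple * dots[..., None]

plt.imshow(img)
plt.plot(center, center, marker='+', color='black',
         markersize=15, markeredgewidth=2)       # central fixation cross
plt.axis('off')
plt.show()
```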
While Caputo discounts the Troxler effect because it should predict the disappearance of one's face rather than the appearance of a new face, it may be that an incomplete Troxler effect (due to the lack of a solid fixation point) could lead to skull-like apparitions (where the eyes and nose disappear) or other changes that could result in an unrecognizable face (when I tried this experiment myself, the Troxler effect was the first thing I noticed, and the strongest effect throughout, sometimes causing it to seem like my whole face had disappeared). Also, it may be that the disappearance of one's own face causes the brain to fill in the void with imagined faces since it is expecting to see a face there.
Instead of, or perhaps in addition to, the Troxler effect, Caputo points to the "Multiple faces phenomenon" (pdf), which is an illusion that plays upon both the weaknesses of our peripheral vision and the higher-order neurons that integrate facial features to make the faces that we see recognizable. When black and white photographs of familiar faces are viewed so that the face is centered on a blind spot, people have reported seeing different features and even different faces (i.e. white eyes, facial hair that's not present, upside down faces, the subject's own face, other faces than what is shown, etc.). Many of these characteristics were similar to what was reported in the "strange face in the mirror illusion", and many of the same conditions appear to be necessary for both illusions to work. For example, the "multiple faces phenomenon" works much better with black and white photographs than with color photos, while the "strange face in the mirror" illusion relies on low level lighting that makes it difficult for subjects to perceive color information. Additionally, the multiple faces phenomenon seemed to work better when the photos were of faces familiar to the viewer, and the mirror illusion relies upon the most familiar face of all, the viewer's own.
Regardless of the cause, it is clear that these illusions are pretty common (84% of respondents for the multiple faces, and 66% for the face in the mirror), and they can be pretty spooky. So if you want to give yourself a scare this Halloween, you can try it out and see for yourself. All you need is a 25-watt incandescent light placed behind you so that you can't see the light directly or its reflection, and five to ten minutes of staring at yourself in the mirror (from about 1.5 to 2 feet away). If you get the conditions right, it might even be a lot of fun to convince your friends or family that your bedroom mirror is haunted: all you have to do is tell them to stare into the mirror for a few minutes and wait for the ghosts to appear. If you do try it out, feel free to leave descriptions of what you saw in the comments, and have a safe and Happy Halloween!
Caputo, G. (2010). Strange-face-in-the-mirror illusion. Perception, 39(7), 1007-1008. DOI: 10.1068/p6466
de Bustamante Simas, M., & Irwin, R. (2000). Last but not least. Perception, 29(11), 1393-1396. DOI: 10.1068/p2911no
Wednesday, October 20, 2010
The Ig-Nobel Prize for Economics: Should companies promote people at random?
This year, the Nobel Prize in Economics was shared by Peter A. Diamond of MIT, Dale T. Mortensen of Northwestern University, and Christopher A. Pissarides of the London School of Economics. These three economists were honored for their work on how government policies relate to employment and economic growth during recessions. Among their many contributions in these areas are the finding that greater unemployment benefits can lead to longer periods of unemployment and the finding that obstacles to matching (in this case, employers finding potential employees) are a critical factor in determining the levels of unemployment. In fact, the research showed that problems in matching are so important to unemployment that even with extensive government spending and works programs, and even in economic boom times, there will always be some level of unemployment due to the difficulties of matching employees with employers.

Of course, meanwhile, the Ig-Nobel prize for economics this year went to the big Wall Street banks for creating hard-to-value derivatives, credit default swaps, and other financial instruments that led to the overinflated bubble that ultimately burst. Since that really isn't research-related, I am going to claim that the Ig-Nobel prize for management serves as a proxy for the prize in economics (since there is no Nobel Prize for management, and business and management are related to economics, so...). This year's Ig-Nobel prize for economics/management went to Alessandro Pluchino, Andrea Rapisarda, and Cesare Garofalo of the University of Catania, Italy, for "demonstrating mathematically that organizations would become more efficient if they promoted people at random."

The premise is an interesting one, and perhaps we've all experienced this to some degree, especially if you've ever worked for a big corporation. Companies promote managers largely based on performance (assuming you ignore any nepotism, backstabbing, or other political gamesmanship), and so the best assembly technician, data entry specialist, scientist, factory floor sweeper, etc., gets promoted to manager. The problem is, being good at floor sweeping or science (or at almost any other task) has absolutely nothing to do with being a good manager, and so, despite any individual person's great performance at their first job, they may be the worst manager the world has ever seen. Believe it or not, this observation has been somewhat codified by Canadian educator Laurence J. Peter, and is thus named the Peter principle, which states: "Every new member in a hierarchical organization climbs the hierarchy until he/she reaches his/her level of maximum incompetence". If this is true, or happens somewhat regularly, it raises the question of whether companies should promote the best person from any given level, or instead simply promote people at random. In the paper by Pluchino et al., the authors tested this idea, asking whether the "common sense" method of promoting the best people (i.e. promoting those who excel most at their current level) might make a company less efficient than if it were to promote people at random. Of course, they didn't try this in a real company, but ran computer simulations, allowing them to test the idea over and over and average out the results.
Essentially, they designed "companies" that had a pyramidal structure: lots of low-level employees, slightly fewer middle managers, fewer still upper-level managers, and ultimately one person who would be in charge (see figure above).
They then had the computer software randomly generate "individuals" who had "competence" values ranging from 1 to 10, and ages ranging from 18 to 60. If an individual was incompetent (a value less than or equal to 4) or of retiring age (60), they were removed, a spot opened at that level, and an individual from the next level down was promoted to fill the vacancy. Several promotion strategies were applied: 1. the "best" approach, where the most competent person at a given level was promoted; 2. the "worst" approach, where the least competent person was promoted; and 3. the random approach, where the individual that was promoted was chosen at random. Each of these strategies was applied under the two hypotheses being tested: 1. the common sense hypothesis, where an individual's level of competence transfers from one level to the next (i.e. it is assumed that good floor sweepers generally make good managers, though the authors did build in a possible swing of plus or minus 1 point, allowing that some floor sweepers could be slightly worse, or even better, managers than they were sweepers); and 2. the Peter principle, where a person's competence did not transfer to the next level with their promotion, but rather competence at the new level was again randomly assigned. Finally, the measure of success for each of these methods was the company's "global efficiency", which was calculated by adding up the competence values at each level and weighting them more heavily as the level approached the top of the company (basically assuming that better or worse performance at the top of the company has more of an effect on the overall performance of the company than competence or incompetence at lower levels).

What the computer simulations showed is that when the common sense outcomes applied (that is, when competence was basically the same from one level to the next) and you promoted the best people at each level, not surprisingly, you got very good global efficiency for the company. When the worst person was promoted, the company had pretty lousy efficiency. What was surprising was that if competence at one level had no effect on competence at another level (the Peter principle), then promoting the "best" person at each level actually resulted in the worst global efficiency, and promoting the "worst" person in each instance resulted in the best global efficiency. Finally, under both hypotheses (common sense and Peter principle), promoting people at random resulted in small increases in global efficiency. From this, the authors conclude that, if you don't know whether common sense principles or the Peter principle is at work, your best bet would be to promote individuals at random, because even though the effect was small, you would always get an increase in global efficiency rather than risk the loss in efficiency that would result from using the "best" strategy if the Peter principle really is at work. And, of course, since we don't know if the Peter principle really is at work, you wouldn't want to risk promoting the worst candidates only to find the common sense principle was right.

Of course, there are definitely some considerations that need to be made before instituting a random promotion policy. First, I think the assumption that a highly competent person at one level (a 10) could be so inept at the next level as to be randomly assigned a 1 and then be fired seems unrealistic (even if the probability of this is small, since the re-assignment is not totally random, but falls along a normal distribution).
To me, if you excel at one job, you likely have skills that apply at every level (being punctual, organized, responsible, hard-working, smart, easily trainable, etc.). Therefore, I would like to see the simulations re-run with promotions under the Peter principle assigning random values between 4 and 10, rather than 1 and 10 (or at least with the distribution skewed more to the right). Second, I think that even if you tweaked the game this way, and it still came out that randomly promoting people was the better strategy, one still has to consider the repercussions of a random promotion policy, which might kill the incentive for workers to excel at their jobs (since they would know it has no impact on whether or not they get promoted). Ultimately, I think this would lead to the majority of employees operating at a level of competence just high enough to not get fired. Still, the article is interesting, and it suggests that the Peter principle is something that companies and other hierarchical institutions need to be wary of, and that perhaps they should look for a better way to assess the skills that will be needed at each new level and base promotions on a combination of excellence at the current level and potential for excellence at the next level.
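For anyone curious what this kind of simulation actually looks like, here is a minimal sketch in Python. It is my own simplification, not the authors' code: the level sizes, the efficiency weights, and the uniform distributions for competence are assumptions on my part (the paper apparently re-assigns competence from a normal distribution); only the general scheme (a pyramid, removal at competence of 4 or below or at age 60, best/worst/random promotion, and the common-sense versus Peter-principle transmission rules) follows the description above.

```python
import random

# Assumed pyramid, top level first; the paper's exact sizes may differ.
LEVEL_SIZES = [1, 5, 11, 21, 41, 81]
# Assumed weights: competence at higher levels counts more toward efficiency.
WEIGHTS = [6, 5, 4, 3, 2, 1]

def new_agent():
    # Simplification: competence and age drawn uniformly.
    return {"competence": random.uniform(1, 10), "age": random.randint(18, 59)}

def global_efficiency(levels):
    total = sum(w * sum(a["competence"] for a in lvl)
                for w, lvl in zip(WEIGHTS, levels))
    best_possible = sum(w * 10 * n for w, n in zip(WEIGHTS, LEVEL_SIZES))
    return total / best_possible

def step(levels, strategy, hypothesis):
    # Age everyone, then remove the incompetent (<= 4) and the retirees (60+).
    for lvl in levels:
        for a in lvl:
            a["age"] += 1
        lvl[:] = [a for a in lvl if a["competence"] > 4 and a["age"] < 60]
    # Fill vacancies from the top down by promoting from the level below.
    for i, size in enumerate(LEVEL_SIZES):
        while len(levels[i]) < size:
            below = levels[i + 1] if i + 1 < len(levels) else None
            if not below:                 # bottom level (or empty level below): hire
                levels[i].append(new_agent())
                continue
            if strategy == "best":
                pick = max(below, key=lambda a: a["competence"])
            elif strategy == "worst":
                pick = min(below, key=lambda a: a["competence"])
            else:
                pick = random.choice(below)
            below.remove(pick)
            if hypothesis == "peter":     # competence does not transfer
                pick["competence"] = random.uniform(1, 10)
            else:                         # common sense: carries over, +/- 1 point
                pick["competence"] = min(10, max(1, pick["competence"] + random.uniform(-1, 1)))
            levels[i].append(pick)

def run(strategy, hypothesis, steps=500):
    levels = [[new_agent() for _ in range(n)] for n in LEVEL_SIZES]
    for _ in range(steps):
        step(levels, strategy, hypothesis)
    return global_efficiency(levels)

for hyp in ("common_sense", "peter"):
    for strat in ("best", "worst", "random"):
        print(f"{hyp:13s} {strat:7s} efficiency ~ {run(strat, hyp):.2f}")
```

This won't reproduce the paper's numbers, but running it a few times makes the moving parts, and the role each assumption plays, much easier to see.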
Figures were taken from the article, the reference for which is:
Pluchino, A., Rapisarda, A., & Garofalo, C. (2010). The Peter principle revisited: A computational study Physica A: Statistical Mechanics and its Applications, 389 (3), 467-472 DOI: 10.1016/j.physa.2009.09.045
Sunday, October 17, 2010
Sunday Comics
I think maybe they forgot to define all quantities. (If you were curious, the Beer-Lambert law relates to the absorption of light by a solution and is the basis of spectrophotometry... though I don't think I could write out the actual equation from memory. Guess I'll just Google it.)
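For anyone who doesn't feel like Googling it either, the usual form is

\[ A = \log_{10}\!\left(\frac{I_0}{I}\right) = \varepsilon \, \ell \, c \]

where A is the absorbance, I₀ and I are the intensities of the incident and transmitted light, ε is the molar absorption coefficient (typically in L·mol⁻¹·cm⁻¹), ℓ is the path length through the sample (in cm), and c is the concentration of the absorbing species (in mol/L).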