
Flawed Brain Science? What I Learned on Summer Vacation

Parents, the summer has not been so kind to the brain. This is not news to you, nor to teachers. You do not need a study to tell you this. But I digress. The past few months have also not been kind to brain imaging studies. A recent paper questions the validity of research relying on fMRI, or functional magnetic resonance imaging. You know the brain scan studies – the ones reporting how parts of the brain “light up” when people are happy, sad, winning, losing, voting, not voting, gambling, eating chocolate, thinking or trying not to think about God, money, sex, cocaine, probably cute babies, puppies and anything else someone had the means and motivation to subject to scientific inquiry.
For over two decades the fMRI has starred in some 40,000 studies and shed much light on the brain and the complicated interplay between thought, emotion and behavior. There is no doubt the fMRI has produced a wealth of data – some of it high quality, some not so high, and some really, really not so high. That last category was famously illuminated by the Dead Salmon Study, the one where a Dartmouth neuroscientist shoved a dead salmon into an fMRI scanner and then, surprise, surprise, found the fish appeared to show neural activity when “looking” at photos of people – at least until the data were properly corrected for multiple comparisons. Yes, even neuroscientists have fun.
Now comes word that the statistical software most researchers use to analyze raw fMRI data produces a troubling number of false positives – results that look significant but are really just flukes. I don’t need to remind anyone here how much research journals, as well as the media, love positive results. And not just when it comes to brain scans.
How often do false positives happen? It’s a bit complicated, but the short answer is about 5%.

Generally, results are deemed significant only if they would occur less than 5% of the time by chance alone. Meaning, the results are SO rare, so precious, so earth-shatteringly amazing that they probably aren’t occurring merely by chance but because the treatment, intervention, counseling, after-school program, breastfeeding or drug actually made a difference. Basically, findings are significant only if they seldom happen by chance. The flip side of this setup: when a treatment truly does nothing, about 5% of the studies testing it will still produce a “significant” result purely by luck. Those are the false positives.

GEEK ALERT: Some will recall the jargon p = .05 from statistics class. Results with p values that fall below this level are considered significant, meaning you can reject the null hypothesis – the assumption that nothing is actually going on.
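If you’d like to see that 5% in action, here is a minimal sketch in Python – my own toy illustration, not anything from the paper. It simulates thousands of experiments in which the “treatment” does absolutely nothing, runs a standard t-test on each, and counts how many come out “significant” anyway:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Both "groups" come from the same distribution: the treatment does nothing.
    control = rng.normal(loc=0.0, scale=1.0, size=30)
    treated = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(control, treated)
    if p_value < 0.05:  # the conventional significance cutoff
        false_positives += 1

print(f"'Significant' findings with zero real effect: "
      f"{false_positives / n_experiments:.1%}")  # prints roughly 5%
```

Run it and roughly 1 in 20 of these nothing-burger experiments clears the p < .05 bar. That’s the baseline false positive rate everyone signs up for.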
How common are the fMRI false positives? The new paper suggests fMRI studies could have a considerably higher false positive rate than 5%, with a small share of analyses possibly as high as 60 to 90%. Some commentators have suggested the latter estimate is too high and unfairly implicates all the studies. In any event, it is fair to say concerns over false positives have plagued not just fMRI studies but the entire scientific literature.
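Why would fMRI be worse than the garden-variety 5%? A brain scan isn’t one test – it’s a statistical test at each of tens of thousands of tiny brain regions (voxels), and the false positives pile up. The cluster-inference details in the paper go beyond a blog post, but the basic pile-up is easy to simulate. Again, a toy Python sketch of the general multiple-comparisons problem, not the paper’s actual analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_voxels = 1_000     # toy number; a real scan tests on the order of 100,000
n_subjects = 20

# Pure noise: no voxel carries any real signal at all.
noise = rng.normal(size=(n_subjects, n_voxels))

# Naively t-test every voxel against zero, with no correction.
_, p_values = stats.ttest_1samp(noise, popmean=0.0)

hits = int(np.sum(p_values < 0.05))
print(f"{hits} of {n_voxels} pure-noise voxels look 'significant'")  # ~50
```

This is exactly why researchers correct for multiple comparisons – and why the dead salmon “lit up.” The new paper’s complaint, roughly, is that one popular correction shortcut built into the standard software doesn’t correct nearly as well as everyone assumed.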
fMRI research is no stranger to debate or doubt. Much of the discussion revolves around how to interpret the brain scans in the first place. Does a second of increased oxygen flow in some brain region really mean a person is lusting, jonesing for some coke or mourning lost loved ones? Is a blip of brain activity the same as a thought? A feeling? A complex human behavior?
So the next time you find yourself eye to eye with the latest fMRI finding, try not to be too impressed (or anxiety-ridden, disdainful, guilty or ebullient). Now that I’ve rattled your faith in the fMRI, don’t toss it out completely. Not like some dead salmon. When the next fMRI study lands in the news, put it into perspective.
MEDIA COVERAGE
The media did not fall all over this neuroscience news. Nor did anyone else, save some neuroscience wonks. It’s not like most of us thought, hmmm, what to throw in the beach bag? Hmmm. The Girl on the Train? Or Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates? The latter lacks a certain appeal. Not hard to believe Cluster Failure failed to get major media attention, save an article in the weekend New York Times. For their part, the Times did what it could to entice readers and dressed the story up with a dramatic headline – Do You Believe in God, Or Is That a Software Glitch? Terrible. The study is not about God. At all.
Further reading: For another take on the issue and the study, try Science-Based Medicine, where Steven Novella covers it with moderate scientific jargon. It also got picked up by Neuroskeptic at Discover. The Guardian and Forbes weighed in too.