Fish Oil and Snake Oil

March 10, 2013
Posted by Jay Livingston

At a medical conference years ago, my friend Ron, a psychiatrist and former public health official, was seated at breakfast next to a cardiologist.  The man was slathering his toast with butter.  He noticed Ron glancing at him and the butter-laden toast – a glance of puzzlement tinged with disapproval, like an AA member watching Bill W. order a martini.

“Only two things matter,” said the cardiologist, answering the question Ron had been too polite to ask.  “Good genes, no smoke.”

Ron told me this story as he was unwrapping the massive corned beef and chopped liver sandwich he’d picked up at the Carnegie-Deli-style restaurant in my neighborhood.

The good doctor was ahead of his time.  Now, years later, the old consensus on cholesterol and heart disease is fraying if not unraveling completely.  In today’s Times Magazine, Gretchen Reynolds (here) cites

studies showing that assiduously sticking to a diet rich in fish oils, another heart-healthful fat, doesn’t necessarily protect people from heart attacks or strokes.

It’s not that we’re now getting low-quality fish oil from the “slightly irregular” bin.  It’s just that, like so many other discoveries, the fish oil effect has fallen victim to the erosion that comes with more and more research.  JAMA (here) recently ran a chart showing the fading of the fish oil findings.


The social sciences too suffer from this law of diminishing results.  The first publication of some interesting, even startling, effect makes us take notice.  But further studies find weaker correlations, and subsequent experiments fail to replicate.  The Chronicle recently reported on the pitfalls of “priming,” particularly the problems of one of its principal proponents, John Bargh.  In one of his better-known experiments, participants (didn’t they used to be “subjects”?) were primed with words suggesting advanced age – wrinkles, bingo, alone, Florida, etc.  The words were embedded in an irrelevant task so subtly that participants were unaware of them.  Yet when Bargh timed these college-age kids walking down the hall, they walked more slowly than the control group, as though wrinkle and Florida had hobbled them behind an invisible walker.

But other researchers have been unable to replicate these results.  The interesting thing is that they have nevertheless been able to get those null findings published.  Usually, the positive-results bias among journals would consign them to the rejection pile.
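The mechanism behind this fading of findings is easy to demonstrate.  Here is a toy simulation (my own illustration, not anything from Reynolds, JAMA, or Bargh; it assumes Python with numpy and scipy installed): if journals publish only statistically significant results, the first wave of small studies will overstate a modest true effect, and later, larger studies will seem to show it shrinking.

```python
# Toy simulation of publication bias and the "decline effect."
# All numbers here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2   # a small but real effect, in standard-deviation units
N_STUDIES = 2000    # many labs independently run the same comparison

def run_study(n_per_group):
    """Simulate one two-group study; return (estimated effect, p-value)."""
    treated = rng.normal(TRUE_EFFECT, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    return treated.mean() - control.mean(), p

for n in (20, 80, 320):  # early small studies vs. later bigger ones
    results = [run_study(n) for _ in range(N_STUDIES)]
    # The positive-results filter: only significant studies get "published."
    published = [est for est, p in results if p < 0.05]
    print(f"n={n:4d}: mean published estimate {np.mean(published):.2f} "
          f"(true effect {TRUE_EFFECT}), "
          f"{len(published) / N_STUDIES:.0%} of studies significant")
```

With twenty subjects per group, the published studies report an effect roughly three times its true size; only the larger, later studies come back down to earth.  Nothing about the effect changed – only which studies made it into print.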

I was telling another friend about this.  She’s a neuroscientist and professor of psychology.  “There are lots of failures to replicate.  You don’t always get the results,” she said.  “That doesn’t mean the effect doesn’t exist.”

I asked her if she knew about the Bargh controversy.  No, she said, but she knew of this experiment.  “I tried it with my students in my course,” she said.

“And?”

“We couldn’t replicate.”

But maybe that’s the way it is in social science and medical science.  If a magician pops the balloon to reveal the three of clubs inside when the card you chose was the ten of hearts, he’s not much of a magician, and “But I get it right most of the time” is not much of a defense.  Science, though, isn’t stage magic.  “Sometimes it works, sometimes it doesn’t” is not completely devastating.  We deal in probabilities, not certainties.
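To put a rough number on that last point – a back-of-the-envelope sketch of my own, not my friend’s analysis or anyone’s published result – even a perfectly real effect fails to replicate at a rate set by statistical power, and with classroom-sized samples and a modest effect, failure is the expected outcome.

```python
# Toy calculation: how often does an exact replication of a REAL effect fail?
# The effect size and sample size are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def replication_failure_rate(true_effect, n_per_group, trials=5000):
    """Fraction of exact replications that fail to reach p < 0.05."""
    failures = 0
    for _ in range(trials):
        treated = rng.normal(true_effect, 1.0, n_per_group)
        control = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        failures += p >= 0.05
    return failures / trials

# A modest but genuine effect, thirty students per group:
print(replication_failure_rate(true_effect=0.3, n_per_group=30))  # roughly 0.8
```

Under those assumptions, about four out of five faithful replications come up empty.  A single “we couldn’t replicate” from a psychology course is weaker evidence than it sounds – which is my friend’s point, even if the accumulating failures in the Bargh case point the other way.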


(HT: Keith Humphreys at the Reality Based Community for publishing the otherwise pay-walled JAMA chart.)
