Posted by Jay Livingston
A bet is a tax on bullshit, says Marginal Revolution’s Alex Tabarrok (here). So is replication.
Here’s one of my favorite examples of both – the cold-open scene from “The Hustler” (1961). Eddie is proposing replication. Without it, Charlie writes the effect off as random variation.
It’s a great three minutes of film, but to spare you the time, here’s the relevant exchange.
CHARLIE
You ought to take up crap shooting. Talk about luck!
EDDIE
Luck! Whaddya mean, luck?
CHARLIE
You know what I mean. You couldn't make that shot again in a million years.
EDDIE
I couldn’t, huh? Okay. Go ahead. Set ’em up the way they were before.
CHARLIE
Why?
EDDIE
Go ahead. Set ’em up the way they were before. Bet ya twenty bucks. Make that shot just the way I made it before.
CHARLIE
Nobody can make that shot and you know it. Not even a lucky lush.
After some by-play, betting, and a deliberate miss, Eddie (aka Fast Eddie) replicates the effect, and we segue to the opening credits,* confident that the results are indeed not random variation but a true indicator of Eddie’s skill.
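To put Charlie’s skepticism in rough numbers – a minimal sketch, with make-probabilities invented purely for illustration (nothing in the film supplies them) – here is how a demanded replication taxes luck:

import random

# Illustrative, made-up numbers: a duffer sinks this trick shot
# by luck 5% of the time; a genuine Fast Eddie sinks it 80%.
P_LUCK, P_SKILL = 0.05, 0.80
TRIALS = 100_000

def makes_shot(p):
    """One attempt at the shot; True if it drops."""
    return random.random() < p

def success_rate(p, attempts):
    """Fraction of simulated players of ability p who sink the shot 'attempts' times in a row."""
    hits = sum(all(makes_shot(p) for _ in range(attempts)) for _ in range(TRIALS))
    return hits / TRIALS

for attempts in (1, 2):
    print(f"{attempts} make(s) in a row: "
          f"luck {success_rate(P_LUCK, attempts):.4f}, "
          f"skill {success_rate(P_SKILL, attempts):.4f}")

# One make: luck succeeds about 5% of the time -- still plausible.
# Two in a row (the replication): luck falls to about 0.25%,
# while real skill still succeeds about 64% of the time.

Under these toy numbers, a single made shot is consistent with luck often enough to argue about; two in a row all but rules luck out while barely inconveniencing real skill. That asymmetry is the whole point of Eddie’s twenty-dollar bet.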
But now Jason Mitchell, a psychologist at Harvard, has published a long throw-down against replication. (The essay is here.) Psychologists shouldn’t try to replicate others’ experiments, he says. And if they do replicate and find no effect, the results shouldn’t be published. Experiments are delicate mechanisms, and you have to do everything just right. The failure to replicate results means only that someone messed up.
Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way. Unless direct replications are conducted by flawless experimenters, nothing interesting can be learned from them.
L. J. Zigerell, in a comment at Scatterplot, thinks that Mitchell may have gotten it switched around. Zigerell begins by quoting Mitchell:
“When an experiment succeeds, we can celebrate that the phenomenon survived these all-too-frequent shortcomings.” But, actually, when an experiment succeeds, we can only wallow in uncertainty about whether a phenomenon exists, or whether a phenomenon appears to exist only because a researcher invented the data, because the research report revealed a non-representative selection of results, because the research design biased results away from the null, or because the researcher performed the experiment in a context in which the effect size for some reason appeared much larger than the true effect size.
It would probably be more accurate to say that replication is not so much a tax on bullshit as a tax on those other factors Zigerell mentions. But he left out one other possibility: that the experimenter hadn’t taken all the relevant variables into account. The best known of these omitted variables is the experimenter himself or herself, even in this post-Rosenthal world.

Zigerell’s comment reminded me of my own experience in an experimental psych lab. A full description is here, but in brief, here’s what happened. The experimenters claimed that a monkey watching the face of another monkey on a small black-and-white TV monitor could read the other monkey’s facial expressions. Their publications made no mention of something that should have been clear to anyone in the lab: that the monkey was responding to the shrieks and pounding of the other monkey – auditory signals that could be clearly heard even though the monkeys were in different rooms.
Imagine another researcher trying to replicate the experiment. She puts the monkeys in rooms where they cannot hear each other, and what they have is a failure to communicate. Should a journal publish her results? Should she have even tried to replicate in the first place? In response, here are Mitchell’s general principles:
• Failed replications do not provide meaningful information if they closely follow original methodology.
• Replication efforts appear to reflect strong prior expectations that published findings are not reliable, and as such, do not constitute scientific output.
• The field of social psychology can be improved, but not by the publication of negative findings.
• Authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues.
Mitchell makes research sound like a zero-sum game, with “mean-spirited” replicators out to win some easy money from “a lucky lush.” But often, the attempt to replicate is not motivated by skepticism and envy. Just the opposite. You hear about some finding, and you want to see where the underlying idea might lead.** So as a first step, to see if you’ve got it right, you try to imitate the original research. And if you fail to get similar results, you usually question your own methods.
My guess is that the arrogance Mitchell attributes to the replicators is more common among those who have gotten positive findings. How often do they reflect on their experiments and wonder whether the results might have been luck, or some other element not in their model?
----
* Those credits can be seen here – with the correct aspect ratio and a saxophone on the soundtrack that has to be Phil Woods.
** (Update, July 10) DrugMonkey, a biomedical research scientist, says something similar:
Trying to replicate another paper's effects is a compliment! Failing to do so is not an attack on the authors’ “integrity.” It is how science advances.