Negative Results

September 20, 2006

Posted by Jay Livingston
A man gets thrown into a jail cell with a long-term occupant and then begins a series of attempts to escape, each by some different method. He fails every time, getting captured and thrown back in the cell. The older prisoner looks at him silently after each failure. Finally, after six or seven attempts, the man loses his patience with the old prisoner and says “Well, couldn’t you help me a little?” “Oh,” says the old guy, “I’ve tried all the ways you thought of—they don’t work.” “Well why the hell didn’t you tell me?!” shouts the man. “Who reports negative results?” says the old prisoner.
Thanks to sociologist and blogger Kieran Healy (http://www.kieranhealy.org/blog/).
I hadn’t heard the joke before, but I’ve certainly heard of the bias towards positive results. Back when I was in graduate school, one of my professors, an experimental social psychologist, proposed that journals evaluate papers solely on the basis of research design. Researchers would submit all the preliminary material, including the design, but not the results. Then, if the journal accepted the paper, it would be published regardless of whether the results showed the intended effects.
Healy used the joke in connection with an article on bias in political science journals: articles that just make the p < .05 level are much more likely to be published than those that just miss it. I’m not sure whether political science and other disciplines (like sociology) that rely on survey data could use the same strategy of deciding on publication before seeing the data. It may be better suited to experiments than to surveys.
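To see why that filter matters, here is a minimal simulation sketch, my own illustration rather than anything from Healy’s post or the article he discusses. Suppose many small studies all test the same modest true effect, and only the ones that clear p < .05 get published; the published studies then systematically overstate the effect. The sample sizes and effect size below are made-up numbers for illustration.

```python
# Toy simulation of a p < .05 publication filter (illustrative numbers only):
# many small studies of the same modest effect are run, but only the
# "significant" ones see print, and the published estimates overstate the truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2      # assumed true difference, in standard-deviation units
n_per_group = 30       # assumed sample size per group in each study
n_studies = 2000

published, all_estimates = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(treated, control)
    estimate = treated.mean() - control.mean()
    all_estimates.append(estimate)
    if p < 0.05:                      # the journal's filter
        published.append(estimate)

print(f"true effect:                {true_effect:.2f}")
print(f"mean estimate, all studies: {np.mean(all_estimates):.2f}")
print(f"mean estimate, published:   {np.mean(published):.2f}")
print(f"share of studies published: {len(published) / n_studies:.0%}")
```

With small samples and a modest true effect, only a fraction of the studies clear the threshold, and the ones that do tend to report an effect noticeably larger than the true one. That is the old prisoner’s point in statistical form: the failures are information too, but nobody reports them.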
I find it interesting, even ironic, that the social psych professor who proposed this soon became very well known for an experimental study of his own, whose results were widely discussed even outside academia. But statisticians reviewing the data claimed that he had used the wrong statistical analyses in order to make his results look significant. His idea might be right, the critics said (in fact they probably hoped it was right), but the numbers in the study didn’t prove it. The professor and others maintained that the numbers did support the idea and defended their analyses. Clearly, it was a case that needed replication studies, lots of them. I don’t know what attempts to replicate the study have been made, nor what the results have been. But I am fairly certain that researchers who attempted a replication and got negative results had a harder time getting published than those who got positive ones.
This professor also had our class replicate one of his experiments. It didn’t work. In fact, the strongest correlation in our data involved a variable that by design was randomized. There were two versions of a test, A and B, distributed randomly to the seats in a room. We wanted to see whether the two versions produced different results. People came in, sat down, and took the test. But the strongest correlation turned out to be between sex and test version: the A version wound up being taken mostly by girls, the B version mostly by boys, just by accident of where they chose to sit. No other difference between the two versions was nearly so strong. It made me a bit skeptical about the whole enterprise.
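That accidental correlation is about what chance alone will produce in one small class. A quick sketch (my own toy example, assuming a class of about 40 students, not the actual class data) shows how often a randomly assigned test version lines up with sex purely by luck:

```python
# Toy illustration (not the class exercise itself) of how a purely random
# assignment can, in a single small class, end up correlated with a
# background variable like sex just by chance.
import numpy as np

rng = np.random.default_rng(7)
n_students = 40

# Version A (0) or B (1) is assigned to seats at random; sex is independent of it.
version = rng.permutation([0] * (n_students // 2) + [1] * (n_students // 2))
sex = rng.integers(0, 2, n_students)   # 0 = girl, 1 = boy, assigned independently

# Correlation between two binary variables (the phi coefficient).
phi = np.corrcoef(version, sex)[0, 1]
print(f"correlation between test version and sex in this class: {phi:+.2f}")

# Across many hypothetical classes, sizeable accidental imbalances are not rare.
phis = []
for _ in range(10_000):
    s = rng.integers(0, 2, n_students)
    v = rng.permutation(version)
    phis.append(np.corrcoef(v, s)[0, 1])
print(f"share of classes with |correlation| > 0.2: {np.mean(np.abs(phis) > 0.2):.0%}")
```

In samples this small, a spurious version-by-sex correlation bigger than the effect you were actually looking for is entirely ordinary, which is exactly what made the class exercise so deflating.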
