Null But Not Void

March 20, 2009
Posted by Jay Livingston

Code and Culture is a new blog by Gabriel Rossman at UCLA (ht Jenn Lena). His post Tuesday on “Publication Bias” discussed the problem of false positives in publications. Suppose 20 researchers do the same study, and 19 get non-significant results. Of the 19, most give up; the two or three who do submit papers with null findings get rejected as uninteresting. But the one study that by chance got positive results at the p < .05 level gets published. Worse, this false positive then becomes the conventional wisdom and the basis for further rejections of null findings. Rossman has a neat workaround for this problem for those willing to sort through all the literature on a topic. But there’s also the Journal of Articles in Support of the Null Hypothesis, which is just what it says. It’s serious, and not to be confused with the Journal of Irreproducible Results.
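
The arithmetic behind that scenario is easy to see in a quick simulation (mine, not Rossman’s; the sample sizes are made up for illustration). Run twenty studies of an effect that doesn’t exist, each tested at p < .05:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    N_STUDIES = 20    # twenty labs run the identical study
    N_PER_GROUP = 30  # hypothetical sample size per condition
    ALPHA = 0.05

    significant = 0
    for _ in range(N_STUDIES):
        # True null: both groups are drawn from the same distribution.
        treatment = rng.normal(0.0, 1.0, N_PER_GROUP)
        control = rng.normal(0.0, 1.0, N_PER_GROUP)
        _, p = stats.ttest_ind(treatment, control)
        if p < ALPHA:
            significant += 1

    print(f"{significant} of {N_STUDIES} studies hit p < {ALPHA}")

On most runs, one or two of the twenty cross the threshold, by construction pure chance, and those are the results that make it into print.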

Long before the appearance of the Journal of Articles in Support of the Null Hypothesis, Bob Rosenthal (eponym of the effect) used to argue that psych journals should ask for submissions that do not include data, deciding whether to publish the study based purely on the design. Only then would they ask the researchers to submit the data, and if the results were null, so be it.

I wonder what this policy would have done for Rosenthal’s effect. As a class project in his course, we carried out such a study in a local high school. Students looked at pictures of people while taped instructions asked them to estimate whether the person in each picture had been experiencing success or failure. The kids sat in language-lab booths, and by random assignment, half of them heard instructions that would elicit more “success” scores; the other half heard instructions that would elicit more “failure” scores. Or at least that was the idea.

When we looked at the data, the strongest correlation in our matrix was between the randomized variable (success tape vs. failure tape) and the sex of the student. It dwarfed any expectancy-effect correlation. Nevertheless, we were supposed to look at any correlations that had asterisks and analyze them.
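
That rogue correlation is exactly what asterisk-hunting produces. Here’s a minimal sketch (hypothetical numbers, nothing from the actual study): generate a matrix of variables that are all pure noise for forty students and count how many pairwise correlations come out “significant”:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    N_STUDENTS = 40  # hypothetical class size
    N_VARS = 10      # say, condition, sex, and eight outcome measures

    # Every variable is independent noise, so any asterisk is chance.
    data = rng.normal(size=(N_STUDENTS, N_VARS))

    starred = 0
    n_pairs = 0
    for i in range(N_VARS):
        for j in range(i + 1, N_VARS):
            n_pairs += 1
            r, p = stats.pearsonr(data[:, i], data[:, j])
            if p < 0.05:
                starred += 1
                print(f"vars {i} and {j}: r = {r:.2f}, p = {p:.3f} *")

    print(f"{starred} of {n_pairs} correlations earned an asterisk")

With 45 pairwise tests, a couple of asterisks turn up even though nothing is related to anything, including, now and then, one between the randomized condition and a demographic like sex.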

It wasn’t my first disillusioning experience in experimental psychology.
