Omerta at JAMA

March 24, 2009
Posted by Jay Livingston

If you thought sociology journals don’t respond well to criticism, try the Journal of the American Medical Association.

A medical researcher, Jonathan Leo, at some obscure school in Tennessee (Lincoln Memorial University in Harrogate) reads an article in JAMA about the use of antidepressants in stroke patients. He finds some flaws in it. He goes online and discovers that the author of the article has been on the payroll of Forest Laboratories, the makers of Lexapro and other antidepressants. He publishes a letter about this in BMJ (aka the British Medical Journal).

Does JAMA welcome this revelation and vow to be more open when it comes to conflict-of-interest charges? Think again. Instead, they go all Goodfellas, as a matter of policy.
Medical Journal Decries Public Airing of Conflicts

The Journal of the American Medical Association, one of the world's most influential medical journals, says it is instituting a new policy for how it handles complaints about study authors who fail to disclose they have received payments from drug companies or others that pose a conflict: It will instruct anyone filing a complaint to remain silent about the allegation until the journal investigates the charge. (Emphasis added.)

That’s from a story by David Armstrong in yesterday’s Wall Street Journal. (The rest of the article is gated.)

Kathy G. at The G Spot has more information, though I’m not sure what her source is.


The editors at JAMA deny making these threats, but they are on record with their policy: don’t say nothin’ to nobody “while an investigation is under way.” JAMA’s investigation into the antidepressant matter had taken five months. When did it finally publish a correction and an acknowledgment from the author that he had received and not reported payments from Forest Laboratories? A week after Dr. Leo’s letter appeared in the BMJ.

Null But Not Void

March 20, 2009
Posted by Jay Livingston

Code and Culture is a new blog by Gabriel Rossman at UCLA (ht Jenn Lena). His post Tuesday on “Publication Bias” discussed the problem of false positives in publications. Suppose 20 researchers do the same study, and 19 get non-significant results. Of the 19, most give up. The two or three papers submitted with null findings get rejected as uninteresting. But the one study that by chance got positive results at the p < .05 level gets published. Worse, this false positive now becomes the conventional wisdom and the basis of further rejections of null findings. Rossman has a neat workaround for this problem for those willing to sort through all the literature on a topic. But there’s also the Journal of Articles in Support of the Null Hypothesis, which is just what it says. It’s serious, and not to be confused with the Journal of Irreproducible Results.
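
To see how fast that chance accumulates, here is a minimal simulation of the scenario above; the setup (twenty studies, a true null, a .05 threshold) is mine, not Rossman’s.

```python
import numpy as np

rng = np.random.default_rng(0)

# The scenario above: 20 labs study an effect that is truly zero.
# Under a true null, p-values are uniform on [0, 1], so each study
# has a 5% chance of a "significant" result by luck alone.
n_studies, alpha, n_trials = 20, 0.05, 10_000

at_least_one = 0
for _ in range(n_trials):
    p_values = rng.uniform(0, 1, n_studies)
    if (p_values < alpha).any():
        at_least_one += 1  # somebody gets a publishable false positive

print(f"Chance that at least one of {n_studies} null studies "
      f"comes up significant: {at_least_one / n_trials:.2f}")
# Analytically: 1 - 0.95**20, about 0.64.
```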

Long before the appearance of the Journal of Articles in Support of the Null Hypothesis, Bob Rosenthal (eponym of the effect) used to argue that psych journals should ask for submissions that do not include data. They would then decide whether to publish the study based purely on the design. Then they would ask the researchers to submit the data, and if the results were null, so be it.

I wonder what this policy would have done for Rosenthal’s effect. As a class project in his course, we carried out such a study in a local high school. Students would look at pictures of people and hear taped instructions telling them to estimate whether the person in the picture had been experiencing success or failure. The kids sat in language-lab booths, and by random assignment, half the kids heard instructions that would elicit more “success” scores; the other half heard instructions that would elicit “failure” scores. Or at least that was the idea.

When we looked at the data, the strongest correlation in our matrix was between this variable that was randomized (success tape or failure tape) and the sex of the student. It dwarfed any expectation-effect correlation. Nevertheless, we were supposed to look at any correlations that had asterisks and analyze them.
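
That instruction, applied to a whole correlation matrix, is a recipe for chance findings. A sketch with pure noise, not our class data, shows how many asterisks random numbers will earn all on their own:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Pure noise standing in for the class data: 10 variables,
# 60 students, no real relationship anywhere.
n_students, n_vars = 60, 10
data = rng.standard_normal((n_students, n_vars))

false_stars = 0
n_pairs = 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        n_pairs += 1
        if p < 0.05:
            false_stars += 1  # an "asterisk" earned by chance alone

print(f"{false_stars} of {n_pairs} correlations significant at .05")
# With 45 pairs and a true null everywhere, expect two or three.
```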

It wasn’t my first disillusioning experience in experimental psychology.

When Blogging Leads to Blogging

March 19, 2009
Posted by Jay Livingston

As Mike’s comment yesterday suggested, I wasn’t the only one to blog the basketball paper (aka “Avis goes to the NCAA”). I certainly wasn’t the most methodologically sophisticated. Andrew Gelman and some of the Freakonomics commenters were.

Now Justin Wolfers has printed a rebuttal of sorts to these criticisms. It’s by the authors of the original paper, Jonah Berger and Devin Pope, who present a new graph showing the home team’s winning percentage for all halftime differences from behind by ten points to ahead by ten points.

They plot a curve to show the “expected” percentage for each point differential.
(In the months following the 9/11 attacks, there was much hand-wringing about failure to “connect the dots.” So I have added a red line that does just that, making it easier to see the discontinuities, those points where the line turns down instead of continuing up.)

[Chart: home team’s winning percentage at each halftime point differential, with the “expected” curve and a red line connecting the dots]
Focus on the winning percentage when either the away team was losing by a point, or the home team was losing by a point. In both of these situations, the losing team did better than expected.
True, they did better than expected. But given the overlap in the standard-error ranges, the data still don’t provide a clear answer as to whether it’s better for the home team to be down by one or up by one at the half.* More curious: Berger and Pope say nothing about halftime ties, which turn out favorably for the home team more often than either minus-one or plus-one scores.
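
To see why the overlap matters, here is a back-of-the-envelope check. Every number in it is hypothetical, since the chart gives neither the game counts nor the exact percentages, but any plausible values tell the same story:

```python
import math

def prop_ci(p, n, z=1.96):
    """95% confidence interval for a winning percentage (normal approximation)."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Hypothetical numbers for illustration only; the chart doesn't give
# the game counts or exact percentages behind the one-point dots.
n = 300                          # games with a one-point halftime margin
lo1, hi1 = prop_ci(0.58, n)      # home team down by one (made up)
lo2, hi2 = prop_ci(0.55, n)      # home team up by one (made up)

print(f"down by one: {lo1:.3f} to {hi1:.3f}")
print(f"up by one:   {lo2:.3f} to {hi2:.3f}")
# The two intervals overlap almost completely, which is the point:
# a difference this size could easily be noise.
```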

* Berger and Pope say that this is the wrong question: “Directly comparing the winning percentage of teams down by one with teams up by one is problematic.” That’s odd. It would seem that winning is the central question. The title of their paper puts it pretty clearly: “When Losing Leads to Winning.”

I guess that when the paper is finally published, they’ll change the title to “When Losing Leads to Doing Better than Expected.” Or better yet, “If You Can Make Halftime Prop Bets and the Score Is Tied and the Money Line is Close to Even, Sock It In on the Home Team.”

March Madness

March 18, 2009
Posted by Jay Livingston

“When Losing Leads to Winning.” That’s the title of the paper by Jonah Berger and Devin Pope. In the New York Times recently, they put it like this:

Surprisingly, the data show that trailing by a little can actually be a good thing.
Take games in which one team is ahead by a point at the half. . . . The team trailing by a point actually wins more often and, relative to expectation, being slightly behind increases a team’s chance of winning by 5 percent to 7 percent.
They had data on over 6500 NCAA games in four seasons. Here’s the key graph.

[Chart: winning percentage at each halftime point differential, with the one-point-deficit dot circled in red]

The surprise they refer to is in the red circle I drew. The dot one point to the left of the tie-game point is higher than the dot one point to the right. Teams behind by one point at the half won 51.3% of the games; teams leading by a point won only 48.7%.*

Justin Wolfers** at Freakonomics reprints the graph and adds that Berger and Pope are “two of the brightest young behavioral economists around.”

I’m not a bright behavioral economist, I’m not young, I’m not a methodologist or a statistician, and truth be told, I’m not much of an NCAA fan. But here’s what I see. First of all, the right half of the graph is just the mirror image of the left. If teams down by one win 51.3%, teams ahead by one have to lose 51.3%, and similarly for every other dot on the chart.

Second, this is not the only discontinuity in the graph. I’ve put yellow squares around the others.

[Chart: the same graph, with yellow squares marking the other discontinuities]

Teams down by 7 points at the half have a slightly higher win percentage than do teams down by 6. By the same graph-reading logic, it’s better to be down by 4 points than by only 3. And the percentage difference for these points is greater than the one-point/tie-game difference.

Then, what about that statement that being down by one point at the half “increases a team’s chance of winning by 5 percent to 7 percent”? Remember, those teams won 51.3% of the games. How did 1.3 percentage points above 50-50 become a 5-7% increase? You have to read the fine print: “relative to expectation.” That expectation is based on a straight-line equation presumably derived from the ten data points (all the score differentials from 10 points to one – no sense in including games tied at the half). That model predicts that teams down by one at the half will win only 46% of the time. Instead, they won 51.3%.
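
The arithmetic is easy to reconstruct. The sketch below uses invented data points chosen to land near the chart’s values, then fits and extrapolates the line the way they presumably did:

```python
import numpy as np

# Invented points standing in for the chart: winning percentage
# falling about 4 points per point of halftime deficit, which puts
# the fitted line near the 46% the post mentions.
deficits = np.arange(-10, -1)        # differentials -10 through -2
win_pct = 50.0 + 4.0 * deficits      # 10% at -10, up to 42% at -2

# Fit the straight line and extrapolate to a one-point deficit.
slope, intercept = np.polyfit(deficits, win_pct, 1)
expected = slope * (-1) + intercept

print(f"expected win pct at -1: {expected:.1f}%")        # about 46%
print(f"observed: 51.3%, a gap of {51.3 - expected:.1f} points")
# The "5 to 7 percent" is this gap between observed and fitted,
# not a gap above 50-50.
```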

Berger and Pope’s explanation of their finding is basically the Avis factor. The teams that are behind try harder. Maybe so, but that doesn’t explain the other discontinuities in the graph. Using this logic, we would conclude that teams behind by seven try harder than teams behind by six. But teams behind by five don’t try harder than teams behind by four. And so on. Why do only some point deficits produce the Avis effect?


* Their results are significant at the .05 level. With 6500 games in the sample, I’d bet that any difference will turn out to be statistically significant, though the authors don’t say how many of those 6500 games had 1-point halftime differences.
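
That missing count is checkable from both ends. A quick sketch, testing the 51.3% figure against an even split for several guesses at how many games actually sat at one point at the half:

```python
from scipy import stats

p_observed = 0.513  # teams down by one at the half won 51.3%

# How many games would need a one-point halftime margin for that
# 1.3-point edge to be significant? Try a few guesses at n.
for n in (1000, 2000, 4000, 6500):
    wins = round(p_observed * n)
    p_value = stats.binomtest(wins, n, p=0.5).pvalue
    print(f"n = {n}: p = {p_value:.3f}")
# The edge doesn't clear the .05 bar until n approaches the full
# 6500, so the unreported count of one-point games matters a lot.
```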

**Wolfers himself is the author of another economics journal article on basketball, a study purporting to reveal unwitting racism among NBA referees. In that article as well, I thought there might be less there than meets the economist’s eye.