Omerta at JAMA

March 24, 2009
Posted by Jay Livingston

If you thought sociology journals don’t respond well to criticism, try the Journal of the American Medical Association.

A medical researcher, Jonathan Leo, at some obscure school in Tennessee (Lincoln Memorial University in Harrogate) reads an article in JAMA about the use of antidepressants in stroke patients. He finds some flaws in it. He goes online and discovers that the author of the article has been on the payroll of Forest Laboratories, the makers of Lexapro and other antidepressants. He publishes a letter about this in BMJ (aka British Medical Journal).

Does JAMA welcome this revelation and vow to be more open when it comes to conflict-of-interest charges? Think again. Instead, they go all Goodfellas, as a matter of policy.
Medical Journal Decries Public Airing of Conflicts

The Journal of the American Medical Association, one of the world's most influential medical journals, says it is instituting a new policy for how it handles complaints about study authors who fail to disclose they have received payments from drug companies or others that pose a conflict: It will instruct anyone filing a complaint to remain silent about the allegation until the journal investigates the charge. (Emphasis added.)

That’s from a story by David Armstrong in yesterday’s Wall Street Journal. (The rest of the article is gated.)

Kathy G. at The G Spot has more information, though I’m not sure what her source is.


The editors at JAMA deny making these threats, but they are on record with their policy: don’t say nothin’ to nobody “while an investigation is under way.” JAMA’s investigation into the antidepressant matter took five months. When did the journal finally publish a correction and an acknowledgment from the author that he had received and not reported payments from Forest Laboratories? A week after Dr. Leo’s letter appeared in the BMJ.

The Distribution of Fame

March 23, 2009
Posted by Jay Livingston

Might Natasha Richardson’s fame have contributed to her death? That’s the question Stephen Dubner at Freakonomics asks. It seems like a silly idea to me, and my first thought was, Gee, I didn’t realize that Freakonomics was so desperate for material. But they’re not, and apparently Dubner is serious.
if I were part of a famous family and was advised to go to the hospital after a minor mishap, the invasion of privacy might have appeared to outweigh the benefit of what was a seemingly precautionary measure. Do I really want to deal with the possibility of tabloid photos, career rumors, the sheer noise of it all?
The paparazzi certainly played an important part in the death of Princess Diana. And there are other celebs who must go to great lengths for some modicum of privacy. But how many?

The question is really this: what does the power law distribution of fame look like? The power law is about inequality. One example is Pareto’s 80/20 rule – 20% of the population controls 80% of the wealth. The actual distribution is more unequal than Pareto imagined. But what about other areas? Maybe twenty percent of the students in a class account for 80% of the discussion.

There are 13 million songs available for download. But the top 0.4% account for 80% of downloads. (Most of those 13 million are not downloaded at all. Of the songs actually downloaded, the top 1.7% account for that 80%. Source here.) The curve is even more skewed for CD sales.
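The shape of that kind of curve is easy to see in a toy model. Here is a minimal Python sketch assuming a Zipf-like power law; the catalog size and exponent are illustrative assumptions, not the actual download figures cited above:

```python
# Toy model of a power-law ("long tail") popularity distribution.
# Downloads for the song at popularity rank k are proportional to 1/k**s.
# The catalog size and exponent s are illustrative, not real data.

def top_share(n_songs=100_000, s=1.2, top_frac=0.01):
    """Fraction of all downloads captured by the top `top_frac` of songs."""
    weights = [1 / k**s for k in range(1, n_songs + 1)]
    total = sum(weights)
    k_top = int(n_songs * top_frac)
    return sum(weights[:k_top]) / total

print(f"Top 1% of songs take {top_share():.0%} of downloads")
print(f"Top 20% of songs take {top_share(top_frac=0.20):.0%} of downloads")
```

Even this crude model reproduces the basic pattern: a tiny sliver at the top accounts for the overwhelming majority of downloads, and the further down the ranks you go, the less each additional song adds.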



What does the power law distribution of celebrity look like? Let’s assume there’s some finite quantity of celebrity in the world. Most of that fame goes to a relative handful. They are the ones who have to worry about stalkers, mobs of fans, paparazzi.

But how famous was Richardson? It turns out that she and her probably more famous husband Liam Neeson lived in my neighborhood, the Upper West Side, and I learned of that only recently. I imagine they walked the streets freely. Even if they were noticed, they weren’t harassed or bothered.

I’ve noted before that in most fields, even the performing arts, the top people can remain mostly invisible to the public and the press. The best classical pianist in the world would go unrecognized even in a sophisticated city like New York. One of the greatest violinists in the world stood for nearly an hour playing his Stradivarius in a Washington, DC metro station, and nobody recognized him.

Even among movie actors, the power law curve descends steeply. One morning a year or two ago, I was having coffee in an ordinary café. I looked up from my newspaper and there, directly across a narrow counter from me, was an actress who has been in dozens of movies (four Oscar nominations including one win for best actress). Nobody else in the café noticed.

Null But Not Void

March 20, 2009
Posted by Jay Livingston

Code and Culture is a new blog by Gabriel Rossman at UCLA (ht Jenn Lena). His post Tuesday on “Publication Bias” discussed the problem of false positives in publications. Suppose 20 researchers do the same study, and 19 get non-significant results. Most of those 19 give up; the two or three papers that do get submitted with null findings are rejected as uninteresting. But the one study that by chance got positive results at the p < .05 level gets published. Worse, this false positive now becomes the conventional wisdom and the basis for further rejections of null findings.

Rossman has a neat workaround for this problem for those willing to sort through all the literature on a topic. But there’s also the Journal of Articles in Support of the Null Hypothesis, which is just what it says. It’s serious, and not to be confused with the Journal of Irreproducible Results.
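The arithmetic behind that scenario is straightforward. When the true effect is zero, each study’s p-value is uniform on (0, 1), so the chance that at least one of 20 studies comes up “significant” at p < .05 is 1 − .95^20, about 64%. A quick simulation (pure Python, standard library only) bears this out:

```python
import random

random.seed(0)

# Under a true null hypothesis, each study's p-value is uniform on (0, 1).
# Simulate batches of 20 identical studies and count how often at least
# one study in a batch crosses p < .05 by chance alone.
TRIALS, STUDIES, ALPHA = 100_000, 20, 0.05

false_positive_runs = sum(
    any(random.random() < ALPHA for _ in range(STUDIES))
    for _ in range(TRIALS)
)

print(f"Share of 20-study batches with at least one false positive: "
      f"{false_positive_runs / TRIALS:.3f}")  # analytically, 1 - .95**20, about 0.64
```

So in Rossman’s scenario, a false positive in some lab somewhere is not a fluke; it is close to the expected outcome.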

Long before the appearance of the Journal of Articles in Support of the Null Hypothesis, Bob Rosenthal (eponym of the effect) used to argue that psych journals should ask for submissions that do not include data. They would then decide whether to publish the study based purely on the design. Then they would ask the researchers to submit the data, and if the results were null, so be it.

I wonder what this policy would have done for Rosenthal’s effect. As a class project in his course, we carried out such a study in a local high school. Students would look at pictures of people and hear taped instructions telling them to estimate whether the person in the picture had been experiencing success or failure. The kids sat in language-lab booths, and by random assignment, half the kids heard instructions that would elicit more “success” scores; the other half heard instructions that would elicit “failure” scores. Or at least that was the idea.

When we looked at the data, the strongest correlation in our matrix was between the randomized variable (success tape or failure tape) and the sex of the student. It dwarfed any expectation-effect correlation. Nevertheless, we were supposed to look at any correlations that had asterisks and analyze them.

It wasn’t my first disillusioning experience in experimental psychology.

When Blogging Leads to Blogging

March 19, 2009
Posted by Jay Livingston

As Mike’s comment yesterday suggested, I wasn’t the only one to blog the basketball paper (aka “Avis goes to the NCAA”). I certainly wasn’t the most methodologically sophisticated. Andrew Gelman and some of the Freakonomics commenters were.

Now Justin Wolfers has printed a rebuttal of sorts to these criticisms. It’s by the authors of the original paper, Jonah Berger and Devin Pope, who present a new graph showing the home team’s winning percentage for all halftime differences from behind by ten points to ahead by ten points.

They plot a curve to show the “expected” percentage for each point-differential.
(In the months following the 9/11 attacks, there was much hand-wringing about failure to “connect the dots.” So I have added a red line that does just that, making it easier to see the discontinuities, those points where the line turns down instead of continuing up.)

Focus on the winning percentage when either the away team was losing by a point, or the home team was losing by a point. In both of these situations, the losing team did better than expected.
True, they did better than expected. But given the overlap in the standard-error ranges, the data still don’t provide a clear answer as to whether it’s better for the home team to be down by one or up by one at the half.* More curious, Berger and Pope say nothing about halftime ties, which turn out favorably for the home team more often than either minus one or plus one scores.
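The standard-error point can be made concrete. A 95% confidence interval for a winning percentage is roughly p ± 1.96·√(p(1−p)/n), and with samples of a few hundred games per point-differential, the intervals for “down by one” and “up by one” can easily overlap. The counts below are made up for illustration; Berger and Pope’s actual sample sizes aren’t given here:

```python
import math

def win_pct_ci(wins, games, z=1.96):
    """Normal-approximation 95% confidence interval for a winning percentage."""
    p = wins / games
    half = z * math.sqrt(p * (1 - p) / games)
    return p - half, p + half

# Hypothetical counts, for illustration only -- not the paper's data.
down_one = win_pct_ci(wins=350, games=600)  # home team down 1 at the half
up_one   = win_pct_ci(wins=370, games=600)  # home team up 1 at the half

print(f"down-by-one CI: ({down_one[0]:.3f}, {down_one[1]:.3f})")
print(f"up-by-one   CI: ({up_one[0]:.3f}, {up_one[1]:.3f})")
print("Intervals overlap:", down_one[1] > up_one[0])
```

When the two intervals overlap this much, the data simply can’t distinguish the two situations, which is the point: “better than expected” is not the same as “better off.”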

* Berger and Pope say that this is the wrong question: “Directly comparing the winning percentage of teams down by one with teams up by one is problematic.” That’s odd. It would seem that winning is the central question. The title of their paper puts it pretty clearly: “When Losing Leads to Winning.”

I guess that when the paper is finally published, they’ll change the title to “When Losing Leads to Doing Better than Expected.” Or better yet, “If You Can Make Halftime Prop Bets and the Score Is Tied and the Money Line is Close to Even, Sock It In on the Home Team.”