March Madness

March 18, 2009
Posted by Jay Livingston

“When Losing Leads to Winning.” That’s the title of the paper by Jonah Berger and Devin Pope. In the New York Times recently, they put it like this:

Surprisingly, the data show that trailing by a little can actually be a good thing.
Take games in which one team is ahead by a point at the half. . . . The team trailing by a point actually wins more often and, relative to expectation, being slightly behind increases a team’s chance of winning by 5 percent to 7 percent.
They had data on more than 6,500 NCAA games from four seasons. Here’s the key graph.

(Click on the chart for a larger view.)

The surprise they refer to is in the red circle I drew. The dot one point to the left of the tie-game point is higher than the dot one point to the right. Teams behind by one point at the half won 51.3% of the games; teams leading by a point won only 48.7%.*

Justin Wolfers** at Freakonomics reprints the graph and adds that Berger and Pope are “two of the brightest young behavioral economists around.”

I’m not a bright behavioral economist, I’m not young, I’m not a methodologist or a statistician, and truth be told, I’m not much of an NCAA fan. But here’s what I see. First of all, the right half of the graph is just the mirror image of the left. If teams down by one win 51.3%, teams ahead by one have to lose 51.3%, and similarly for every other dot on the chart.

Second, this is not the only discontinuity in the graph. I’ve put yellow squares around the others.


Teams down by 7 points at the half have a slightly higher win percentage than teams down by 6. By the same graph-reading logic, it’s better to be down by 4 points than by only 3. And the gaps at those points are larger than the gap between trailing by a point and being tied.

Then, what about that statement that being down by one point at the half “increases a team’s chance of winning by 5 percent to 7 percent”? Remember, those teams won 51.3% of the games. How did 1.3 percentage points above 50-50 become a 5-7% increase? You have to read the fine print: “relative to expectation.” That expectation is based on a straight-line equation presumably derived from the ten data points (all the score differentials from 10 points to one – no sense in including games tied at the half). That model predicts that teams down by one at the half will win only 46% of the time. Instead, they won 51.3%. The roughly 5-point gap between those two figures is where the “5 percent to 7 percent” comes from.
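To see how that “relative to expectation” logic works, here is a small Python sketch. The deficit-side win percentages below are made-up placeholders laid on a straight line, not Berger and Pope’s actual numbers; only the roughly 46% expectation and the 51.3% observed figure come from the paper. I also don’t know exactly which points went into their fit, so this simply fits the trend for deficits of 2 through 10 and extrapolates to a 1-point deficit.

```python
import numpy as np

# Sketch of the "relative to expectation" calculation.
# The deficit-side win percentages are synthetic placeholders on a straight
# line -- NOT the Berger-Pope data. Only the ~46% expectation and the 51.3%
# observed value are taken from the post.
deficits = np.arange(-10, -1)          # down by 10 ... down by 2
win_pct = 4.0 * deficits + 50.3        # placeholder linear trend

# Fit a straight line to those points, then extrapolate to a one-point deficit.
slope, intercept = np.polyfit(deficits, win_pct, 1)
expected_down_one = slope * (-1) + intercept   # ~46.3% here, ~46% in the paper

observed_down_one = 51.3                       # what those teams actually did
print(f"expected: {expected_down_one:.1f}%")
print(f"observed: {observed_down_one:.1f}%")
print(f"gap: {observed_down_one - expected_down_one:.1f} percentage points")
# That gap, roughly 5 points, is the "5 percent to 7 percent" in the claim.
```

With the real, noisier data the fit wouldn’t be this tidy, but the mechanics are the same: the claimed increase measures the bump above the trend line, not the distance from 50-50.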

Berger and Pope’s explanation of their finding is basically the Avis factor: the teams that are behind try harder. Maybe so, but that doesn’t explain the other discontinuities in the graph. Using this logic, we would conclude that teams behind by 7 try harder than teams behind by 6. But teams behind by 5 don’t try harder than teams behind by 4. And so on. Why do only some point deficits produce the Avis effect?


* Their results are significant at the .05 level. With 6,500 games in the sample, I’d bet that any difference will turn out to be statistically significant, though the authors don’t say how many of those games had a one-point halftime difference.
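For what it’s worth, a back-of-the-envelope calculation suggests it wouldn’t take many one-point games to clear that bar. The sketch below assumes (my assumption, not anything the authors state) that the test compares the observed 51.3% against the model’s 46% expectation, using a simple normal approximation.

```python
import math

# Rough check of the footnote's point: how many one-point-deficit games
# would make a 46%-vs-51.3% gap significant at the .05 level?
# Assumes a two-sided normal approximation against the 46% expectation;
# that is my assumption, not a description of the authors' actual test.
p_expected = 0.46     # the linear model's prediction (from the post)
p_observed = 0.513    # the observed win rate (from the post)
z_crit = 1.96         # two-sided 5% critical value

n_needed = (z_crit / (p_observed - p_expected)) ** 2 * p_expected * (1 - p_expected)
print(f"about {math.ceil(n_needed)} games would be enough")   # a few hundred
```

If anything like a few hundred of those 6,500 games had a one-point halftime margin, significance at the .05 level is not much of a hurdle.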

**Wolfers himself is the author of another economics journal article on basketball, a study purporting to reveal unwitting racism among NBA referees. In that article as well, I thought there might be less there than meets the economist’s eye.

1 comment:

mike3550 said...

Nice catch, Jay! And here is a statistician that makes basically the same point that you do!