Uncertainty, Probability, and Q-tips

July 28, 2017
Posted by Jay Livingston

Uncertainty and probability are really hard for people, even undergraduates in statistics classes, to understand. I mean, really understand – grok (do people still say “grok”?).

“The polls were wrong,” our president is fond of saying. “They said Hillary would win.” No. What the polls said is that the probability of Hillary winning was 65% (or whatever). That is, sixty-five percent of the time when we get poll results like these, Hillary will win. And 35% of the time, Trump will win. The result is uncertain.
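That frequency reading of a forecast can be sketched in a few lines of code. This is a toy simulation; the 65% figure is the post's illustrative number, not any particular pollster's actual forecast.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

P_CLINTON = 0.65   # the post's illustrative win probability
TRIALS = 100_000   # "run the election" this many times

# Each trial is one hypothetical election: a random draw below 0.65 is a win.
clinton_wins = sum(random.random() < P_CLINTON for _ in range(TRIALS))
frac = clinton_wins / TRIALS
print(f"Clinton wins {frac:.1%} of simulated elections")
# A 65% forecast still leaves the other candidate winning about a third of the time.
```

The point of the sketch is that a 65% forecast is a statement about a distribution of outcomes, not a prediction of the single outcome we will actually observe.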

In most of their reporting of results, the pollsters don’t emphasize or even explain the idea of probability. They hope that the people who read their reports will know what “65% probability” means. But they also know that most people, including political reporters, will reduce the message to, “Hillary’s gonna win.”

Maybe it would help if the pollsters included some boilerplate about probability and uncertainty – you know, down at the bottom of the page where they put the sample size and dates and margin of error. Nah. That probably wouldn’t help. It’s like Q-tips. That’s Ezra Klein’s wonderful analogy. You can hear it in this clip from his recent conversation with Julia Galef. (The excerpt is four minutes long, but the Q-tips part starts at about 0:45. The rest is context and further explanation.)



Here’s an approximate transcript:

You know how on the packaging of Q-tips they say, “Please don’t put these in your ear”? And ... the only thing ... people do with them is buy them and then immediately stick them into their ear as far as they possibly can, because that’s what you use a Q-tip for. And the Q-tip company knows this perfectly well.
 
What the political forecasters ... are saying is, “We’re giving you an accurate probabilistic forecast, and what you really need to understand is that this is fundamentally a tool to show you that there is uncertainty in elections.” And what everybody is doing – and they know this perfectly well – is running to ... get certainty, to get the one thing that they’re told they’re not supposed to use this for.

We can accept uncertainty and probability in other areas. Last night, ESPN broadcast the final round of the World Series of Poker. With only one card (“the river”) unseen, Ott’s Ace/8 would beat Blumstein’s Ace/2. The only way Blumstein can win is if a deuce turns up on the river. The screen (upper left) shows these three “outs” – the only cards that will help Blumstein. If any of the other 39 cards left in the deck turns up, Ott wins the 128,000,000 in the pot.


As ESPN showed, Ott’s probability of winning the hand is 93%. Blumstein has only a 7% chance. Most viewers – and certainly most poker players – knew what ESPN meant. ESPN was not saying “Ott’s gonna win.” It was saying that if the hand were played from this point 100 times, Blumstein would lose 93 times. But he would win 7 times. Seven times in a hundred, he’d get the deuce.
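The arithmetic behind ESPN's on-screen percentages is just out-counting. A minimal sketch, using the post's numbers (3 outs among the 3 + 39 = 42 unseen cards):

```python
# River odds by counting outs: only a deuce saves Blumstein.
outs = 3            # the three remaining deuces
other_cards = 39    # the rest of the unseen cards, per the post
unseen = outs + other_cards

blumstein = outs / unseen   # 3/42, about 0.07
ott = 1 - blumstein         # about 0.93

print(f"Blumstein: {blumstein:.0%}, Ott: {ott:.0%}")
```

Unlike a poll, every quantity here is known exactly, which is part of why nobody argues with the math when the 7% card hits.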

You can guess what happened.


Blumstein got his deuce and won the tournament.

Nobody said, “ESPN got it wrong. Fake percentages. Never believe ESPN.”

We understand that poker is about uncertainty and probability. But we find it much harder to think this way about human behavior – voting, for example. Suppose pollsters remind us that their polls show only probability. “We told you that if the election were run 100 times, Hillary would lose 35 times.” My reaction is, “That’s ridiculous. The same people would vote the same way, so she’d lose every time. Voters are not cards – you don’t shuffle them up and then turn over one voter on the river.”

No. But that’s exactly what polls are – samples of the deck of voters. The results give us probabilities, not predictions. Unfortunately, most of the time, most of us ignore that distinction. And we stick Q-tips in our ears.

3 comments:

Damon Suey said...

I think there's more to it than this. For one thing, the odds in the poker game are mathematically certain: they're unchallengeable. We know that there are 52 cards, we know that only three would give the pot to Blumstein, and that's all that goes into calculating the odds. If something unlikely happens, there's not much room to call BS on the math. In an election, there are all kinds of problems collecting data — we don't actually know how many cards there are, but based on a number of independent calculations (each potentially as difficult to make as the first), we think that there are 52. So there's much more room to call BS on the math, because there are a lot more places to make mistakes, and a lot more judgment calls that are not even close to mathematically certain. E.g. if a person says they'll vote for Clinton, or says that they're a Democrat, or etc., how much weight do we give this, since we know people don't always vote the way they say they will? There are countless decisions like this to make.

The fallibility of the odds-makers should be confirmed just by the disparity between the odds different pollsters give: some of them have to be wrong. And when some are giving Clinton 99% odds (wasn't this Matt Wang's analysis?) and some are giving her 60% odds (Nate Silver?), some of the predictions must be off by a minimum of 19.5%.
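That "off by a minimum of 19.5%" figure is half the gap between the two forecasts: whatever the true probability was, it cannot have been within 19.5 points of both 99% and 60%. A quick check of the arithmetic:

```python
# Two forecasts of the same probability can't both be close to the truth:
# at least one must be off by half the gap between them.
forecast_a, forecast_b = 0.99, 0.60
half_gap = abs(forecast_a - forecast_b) / 2
print(f"At least one forecast is off by {half_gap:.1%}")  # 19.5%

# Sweep hypothetical true values p in [0, 1] and confirm that the
# better-off forecast is never closer than half_gap in the worst case.
worst_best_case = min(
    max(abs(p / 1000 - forecast_a), abs(p / 1000 - forecast_b))
    for p in range(1001)
)
assert worst_best_case >= half_gap - 1e-9
```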

Plus, a lot of the analyses had predictions for how certain blocs would vote (Nevadans, women, etc.) that were way off. If I tell you a die has a 99% chance of landing on Red, and it doesn't, maybe you were in the 1%. If you roll it five times and it never lands on Red, you're looking at a 0.01^5 chance. However high our prior confidence is that my odds are right, it's much lower after evidence like this.
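The updating this comment describes can be sketched with Bayes' rule. The 0.9 prior and the 50-50 rival model below are illustrative assumptions of mine, not anything from the comment:

```python
# Start confident that the forecaster's "99% Red" model is right, compare it
# to a rival model where Red is really a coin flip, and update after each
# non-Red roll using Bayes' rule.
p_red_claimed = 0.99   # the forecaster's stated model
p_red_rival = 0.50     # a hypothetical competing model
posterior = 0.90       # illustrative prior confidence in the forecaster

for roll in range(1, 6):               # five non-Red rolls in a row
    like_claimed = 1 - p_red_claimed   # P(non-Red | claimed model) = 0.01
    like_rival = 1 - p_red_rival       # P(non-Red | rival model)   = 0.50
    num = posterior * like_claimed
    posterior = num / (num + (1 - posterior) * like_rival)
    print(f"after roll {roll}: P(claimed model) = {posterior:.8f}")
```

Each miss multiplies the odds on the forecaster's model by 0.01/0.50 = 0.02, so confidence collapses quickly, which is the commenter's point about evidence piling up against the stated odds.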

Anyways, maybe the slogan shouldn't be "These are just odds — the candidate with the highest number might not actually win!", but instead "These are just odds, and they're probably wrong".

Jay Livingston said...

Damon,

Ezra Klein’s point, and mine, was that most people don’t think of poll reports as probabilistic. Even though they know better (some of them), they tend to think of those reports as predictions. So when a candidate with a 40% chance wins, people say that the polls got it wrong. They get angry at the polls for misleading them. Or they dismiss the whole enterprise of polling as a fake.

You’re absolutely correct that the probabilities reported in the polls are less accurate and more uncertain than are the poker probabilities. But since most people know this, they should be less likely to think of them as predictions. They should be even more tolerant when the 40% candidate wins. But that’s not what happens.

Thanks for writing, and reading.

Jay Livingston said...

Damon, P.S. As you may already know, there’s at least one person who agrees with you. (FWIW, it seems to me that he (or she??) is also misreading my post.)