Comments on Montclair SocioBlog: "Uncertainty, Probability, and Q-tips"

Damon, P.S. As you may already know, there’s at least <a href="http://www.technologyasnature.com/" rel="nofollow">one person</a> who agrees with you. (FWIW, it seems to me that he (or she?) is also misreading my post.)

-- Jay Livingston, 2017-07-31, 08:11 EDT
Damon,

Ezra Klein’s point, and mine, was that most people don’t think of poll reports as probabilistic. Even though they know better (some of them), they tend to think of those reports as predictions. So when a candidate with a 40% chance wins, people say that the polls got it wrong. They get angry at the polls for misleading them. Or they dismiss the whole enterprise of polling as a fake.

You’re absolutely correct that the probabilities reported in the polls are less accurate and more uncertain than the poker probabilities. But since most people know this, they should be <i>less</i> likely to think of them as predictions. They should be even more tolerant when the 40% candidate wins. But that’s not what happens.

Thanks for writing, and reading.

-- Jay Livingston, 2017-07-29, 23:04 EDT

I think there's more to it than this. For one thing, the odds in the poker game are mathematically certain: they're unchallengeable. We know that there are 52 cards, we know that only three would give the pot to Blumstein, and that's all that goes into calculating the odds. If something unlikely happens, there's not much room to call BS on the math. In an election, there are all kinds of problems collecting data: we don't actually know how many cards there are, but based on a number of independent calculations (each potentially as difficult to make as the first), we think that there are 52. So there's much more room to call BS on the math, because there are a lot more places to make mistakes, and a lot more judgment calls that are not even close to mathematically certain. E.g.,
if a person says they'll vote for Clinton, or says that they're a Democrat, etc., how much weight do we give this, since we know people don't always vote the way they say they will? There are countless decisions like this to make.

The fallibility of the odds-makers should be confirmed just by the disparity between the odds different pollsters give: some of them have to be wrong. And when one is giving Clinton 99% odds (wasn't this Sam Wang's analysis?) and another is giving her 60% odds (Nate Silver?), at least one of the predictions must be off by a minimum of 19.5 points.

Plus, a lot of the analyses had predictions for how certain blocs would vote (Nevadans, women, etc.) that were way off. If I tell you a die has a 99% chance of landing on Red, and it doesn't, maybe you were in the 1%. If you roll it three, four, five times and it never does, your run had probability (0.01)^5, about one in ten billion. However high our prior confidence is that my odds are right, it's much lower after evidence like this.

Anyway, maybe the slogan shouldn't be "These are just odds — the candidate with the highest number might not actually win!", but instead "These are just odds, and they're probably wrong."

-- Damon Suey (http://academicinsanity.com), 2017-07-29, 19:36 EDT
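The arithmetic behind these comments is easy to check. Here is a minimal Python sketch: the 99%, 60%, and 1% figures come from the thread itself, while the "3 outs among 44 unseen cards" setup for the poker hand is a hypothetical illustration, since the thread doesn't specify how many cards remained unseen.

```python
# A sketch of the probability arithmetic in the comments above.
# The 44-unseen-cards figure is hypothetical; 99%, 60%, and 1% are from the thread.

# 1. Poker odds are exact: with 3 winning cards among the unseen cards,
#    the chance of hitting one on the next card is a simple ratio.
outs, unseen = 3, 44              # hypothetical: 3 outs, 44 unseen cards
p_win = outs / unseen
print(f"chance of hitting an out: {p_win:.1%}")            # ~6.8%

# 2. If one forecaster gives Clinton a 99% chance and another 60%,
#    the true probability is a single number, so at least one forecast
#    must be off by half the 39-point gap.
p_a, p_b = 0.99, 0.60
min_error = abs(p_a - p_b) / 2
print(f"at least one forecast off by >= {min_error:.1%}")  # 19.5%

# 3. Missing a "99% certain" call five independent times has
#    probability 0.01^5, about one in ten billion.
p_miss_all = 0.01 ** 5
print(f"five straight misses: {p_miss_all:.0e}")           # 1e-10
```

The middle calculation is the commenter's "19.5%" point: two forecasts 39 points apart cannot both be within 19.5 points of the truth, whatever the truth is.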